

Madhu Matta, Lenovo & Dr. Daniel Gruner, SciNet | Lenovo Transform 2018


 

>> Live from New York City, it's theCUBE. Covering Lenovo Transform 2.0. Brought to you by Lenovo.
>> Welcome back to theCUBE's live coverage of Lenovo Transform. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We're joined by Madhu Matta, the VP and GM of High Performance Computing and Artificial Intelligence at Lenovo, and Dr. Daniel Gruner, the CTO of SciNet at the University of Toronto. Thanks so much for coming on the show, gentlemen.
>> Thank you for having us.
>> Our pleasure.
>> So, before the cameras were rolling, you were talking about the Lenovo mission in this area: to use the power of supercomputing to help solve some of society's most pressing challenges, and that is climate change and curing cancer. Can you tell our viewers a little bit about what you do and how you see your mission?
>> Yeah, so our tagline is basically solving humanity's greatest challenges. We're also now the number one supercomputer provider in the world, as measured by the TOP500 rankings, and that comes with a lot of responsibility. One, we take that responsibility very seriously, but more importantly, we work with some of the largest research institutions and universities all over the world as they do research, and it's amazing research. Whether it's particle physics, like you saw this morning, whether it's cancer research, whether it's climate modeling. I mean, we are sitting here in New York City, and our headquarters is in Raleigh, right in the path of Hurricane Florence, so the ability to predict the next anomaly, the ability to predict the next hurricane, is absolutely critical to get early warning signs, and a lot of survival depends on that. So we work with these institutions jointly to develop custom solutions to ensure that all this research, one, is powered and, second, works seamlessly, and that all their researchers have access to this infrastructure twenty-four seven.
>> So Danny, tell us a little bit about SciNet, too. Tell us what you do, and then I want to hear how you work together.
>> And no relation to Skynet, I've been assured? Right?
>> No, not at all. There's also no relationship with another network that's called the same, but it doesn't matter. SciNet is an organization that's basically the University of Toronto and the associated research hospitals, and we happen to run Canada's largest supercomputer. We're one of a number of compute sites around Canada that are tasked with providing resources and support (support is the most important) to academia in Canada. So all academics, from all the different universities in the country, come and use our systems. From the University of Toronto, they can also go and use the other systems; it doesn't matter. Our mission, as I said, is to provide a system, or a number of systems, and to run them, but we really are about helping the researchers do their research. We're all scientists. All the guys that work with me were scientists first. We turned to computers because that was the way to do the research. You cannot do astrophysics other than observationally and computationally, nothing else. Climate science is the same story: you have so much data and so much modeling to do that you need a very large computer and, of course, very good algorithms and very careful physics modeling for an extremely complex system, but ultimately it takes a lot of horsepower to be able to do even a single simulation.
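
As an aside on why climate and ocean modeling "needs a lot of horsepower": in a 3-D grid-based model, compute cost grows steeply with resolution. The sketch below is standard numerical-modeling arithmetic, not a figure from the interview; it assumes an explicit scheme whose timestep must shrink in proportion to the grid spacing (a CFL-type stability limit):

```python
# Rough cost scaling for a 3-D explicit grid model (illustrative only).
# Assumption: the timestep must shrink linearly with grid spacing.
def relative_cost(refinement: int) -> int:
    """Cost multiplier after halving the grid spacing `refinement` times."""
    cells = 8 ** refinement   # 2x finer in each of x, y, z
    steps = 2 ** refinement   # proportionally smaller timestep
    return cells * steps      # total work per simulated interval

for r in range(4):
    print(f"{r} halvings of grid spacing -> ~{relative_cost(r):,}x the compute")
# 0 -> 1x, 1 -> 16x, 2 -> 256x, 3 -> 4,096x
```
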
So, what I was showing with Madhu at that booth earlier were the results of a simulation that was done just prior to us going into production with our Lenovo system, where people were doing ocean circulation calculations. The ocean is obviously part of the big Earth system, which is part of the climate system as well. But they took a small patch of the ocean, a few kilometers in size in each direction, and did it at very, very high resolution, even vertically, going down to the bottom of the ocean so that the topography of the ocean floor can be taken into account. That allows you to see, at a much smaller scale, the onset of tides, the onset of micro-tides that allow water to mix, the cold water from the bottom and the hot water from the top, the mixing of nutrients, how life goes on, the whole cycle. It's super important. Now that, of course, gets coupled with the atmosphere and with the ice and with the radiation from the sun and all that stuff. That calculation was run by a group whose main guy was from JPL in California, and he was running on 48,000 cores. Single runs at 48,000 cores for about two to three weeks, and it produced a petabyte of data, which is still being analyzed. That's the kind of resolution that's been enabled...
>> Scale.
>> It gives a sense of just exactly...
>> That's the scale.
>> By a system the size of the one we have. It was not possible to do that in Canada before this system.
>> I tell you, both when I lived on the vendor side and as an analyst, talking to labs and universities, you love geeking out. Because first of all, you always have a need for newer, faster things; the example you just gave is like, "Oh wait, if I can get the next-generation chipset, if the networking can be improved," you know you can take that petabyte of data and process it so much faster.
>> If I could only get more money to buy a bigger one.
>> We've talked to the people at CERN and JPL and things like that.
>> Yeah.
>> And it's like, this is where most companies are: yeah, it's a little bit better, and it might make things a little better and make things nice. But no, this is critical to move along the research. So talk a little bit more about the infrastructure, what you look for, how that connects to the research, and how you help close that gap over time.
>> Before you go, I just want to also highlight a point that Danny made on solving humanity's greatest challenges, which is our motto. He talked about the data analysis that he just did, where they are looking at the surface of the ocean, as well as going down, what is it, 264 nautical layers underneath the ocean? To analyze that much data, to start looking at marine life and protecting marine life. As you start to understand that level of nautical depth, they can start to figure out the nutrient values and other contents that are in that water, to be able to start protecting the marine life. There again, another of humanity's greatest challenges, right there, that he's giving you...
>> Nothing happens in isolation; it's all interconnected.
>> Yeah.
>> When you finally get a grant and you're able to buy a computer, how do you buy the computer that's going to give you the most bang for your buck, the best computer to do the science that we're all tasked with doing? It's tough, right? We don't fancy ourselves as computer architects; we engage the computer companies, who really know about architecture, to help us do it.
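
A quick back-of-envelope on the scale of that run, using only the figures quoted above (48,000 cores, two to three weeks, one petabyte of output); the 2.5-week midpoint is our assumption:

```python
# Back-of-envelope scale of the ocean run described above.
# Inputs are the figures quoted in the interview; 2.5 weeks is our
# assumed midpoint for "two to three weeks".
cores = 48_000
weeks = 2.5
hours = weeks * 7 * 24                        # ~420 wall-clock hours
core_hours = cores * hours                    # ~20 million core-hours per run

petabyte_gb = 1_000_000                       # 1 PB in GB (decimal units)
avg_rate_gb_s = petabyte_gb / (hours * 3600)  # average output rate

print(f"~{core_hours/1e6:.0f} million core-hours per run")
print(f"~{avg_rate_gb_s:.1f} GB/s sustained output, averaged over the run")
```
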
The way we did our procurement was: "Okay, vendors, we have a set pot of money, we're willing to spend every last penny of it, you give us the biggest and the baddest for our money." Now, it has to meet a certain set of criteria. You have to be able to run a number of benchmarks, some sample calculations that we provided; the ones that give the best performance, that's a bonus. It also has to do it with the least amount of power, so we don't have to heat up the world and pay through the nose for power. Those are objective criteria that anybody can understand. But then there are also the other criteria: how well will it run? How is it architected? How balanced is it? Did we get the I/O subsystem for all the storage, the one that actually meets the criteria? What other extras do we have that will help us make the system run in a much smoother way, and for a wide variety of disciplines? Because we run the biologists together with the physicists and the engineers and the humanities people. Everybody uses the system. To make a long story short, the proposal that we got from Lenovo won the bid, both in terms of the hardware we got and the way it was put together, which was quite innovative.
>> Yeah.
>> I want to hear about... you said, give us the biggest, the baddest, we're willing to empty our coffers for this. So then where do you go from there? How closely do you work with SciNet? How does the relationship evolve, and do you work together to innovate and keep going?
>> Yeah. I see it not as a segment or a division; I see High Performance Computing as a practice, and with any practice, it's many pieces that come together: you have a conductor, you have the orchestra, but at the end of the day, the delivery of all those systems is the concert. That's the way to look at it. To deliver this, our practice starts with multiple teams. One is a benchmarking team that understands the applications that Dr. Gruner and SciNet will be running, because they need to tune the performance of the cluster to the application. The second team is a set of solution architects who are deep engineers and understand our portfolio. Those two work together to say, against this application, "Let's build," like he said, "the biggest, baddest, best-performing solution for that particular application." So those two teams work together. Then we have a third team that kicks in once we win the business, which comes on site to deploy, manage, and install. When Dr. Gruner talks about the infrastructure, it's a combination of hardware and software that all comes together, and the software is open-source based, built by us, because we just felt there weren't the right tools in the industry to manage this level of infrastructure at that scale. All this comes together to essentially rack and roll onto their site.
>> Let me just add to that. It's not like we went for it in a vacuum. We had already talked to the vendors; we always do. They come to you, asking 'when's your next money coming,' and it's a dog and pony show. They tell you what they have. With Lenovo, at least the team as we know it now used to be the IBM team, the System x team, who built our previous system. A lot of these guys were already known to us, and we've always interacted very well with them. They were already aware of our thinking, where we were going, and that we're also open to suggestions for things that are non-conventional.
Now, this can backfire; some data centers are very square, they will only prescribe what they want. We're not prescriptive at all. We said, "Give us ideas about what can make this work better." These are the intangibles in a procurement process. You also have to believe in the team. If you don't know the team, or if you don't know their track record, then that's a no-no, right? Or it takes points away.
>> We brought innovations like Dragonfly, which Dr. Dan will talk about, and we brought in, for the first time, Excelero, which is a software-defined storage vendor, and it was a smart part of the bid. We were able to flex our muscles and be more creative versus just the standard.
>> My understanding is you've been using water cooling for about a decade now, maybe?
>> Yes.
>> Maybe you could give us a little bit about your experiences, how it's matured over time, and then Madhu will talk and bring us up to speed on Project Neptune.
>> Okay. Our first procurement was about 10 years ago; again, that was the model we came up with. After years of racking our brains, we could not decide how to build a data center and what computers to buy; it was like a chicken-and-egg process. We ended up saying, "Okay, this is what we're going to do. Here's the money, here is our total cost of operation that we can support." That included the power bill, the water, the maintenance, the whole works. So much can be used for infrastructure, and the rest is for the operational part. We said to the vendors, "You guys do the work. We want, again, the biggest and the baddest that we can operate within this budget." So obviously it has to be energy efficient, among other things. We couldn't design a data center and then put in systems that we didn't know existed, or vice-versa. That's how it started. The initial design was built by IBM, and they designed the data center for us to use water cooling for everything. They put rear-door heat exchangers on the racks as a means of avoiding blowing air around and trying to contain it, which is less efficient and also much more difficult. You can flow water very efficiently. You open the door of one of these racks...
>> It's amazing.
>> And it's hot air coming out, but you take the heat right there, in situ, and remove it through a radiator. It's just like your car radiator.
>> Car radiator.
>> It works very well. Now, it would be nice if we could do even better with hot-water cooling and all that, but we're not in a university environment, we're in a strip mall out in the boonies, so we couldn't reuse the heat. Places like LRZ are reusing the heat produced by the computers to heat their buildings.
>> Wow.
>> Or if we were by a hospital, which always needs hot water, then we could have done it. But it's really interesting how, with that design, we ended up with the most efficient data center, certainly in Canada, and one of the most efficient in North America, 10 years ago. Our PUE was 1.16; that was the design point, and this is not with direct water cooling through the chip.
>> Right. Right.
>> All right, bring us up to speed. Project Neptune, in general?
>> Yes, so Neptune, as the name suggests, is the name of the god of the sea, and we chose that to brand our entire suite of liquid cooling products. It's end to end, in the sense that it's not just hardware, but also software.
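
A brief aside on the PUE figure Dr. Gruner quotes above, before the Neptune discussion continues. PUE (power usage effectiveness) is the standard data-center efficiency metric, total facility power divided by IT power; the arithmetic below simply unpacks what a design point of 1.16 implies, using a hypothetical 1 MW IT load rather than a SciNet figure:

```python
# What a PUE of 1.16 means in practice (illustrative numbers only).
# PUE = total facility power / IT equipment power.
pue = 1.16
it_load_kw = 1_000                   # hypothetical 1 MW of compute load

total_kw = it_load_kw * pue          # facility draw including cooling, etc.
overhead_kw = total_kw - it_load_kw  # power spent on everything but compute

print(f"Facility draw: {total_kw:.0f} kW")                 # 1160 kW
print(f"Cooling/overhead: {overhead_kw:.0f} kW "
      f"({overhead_kw / it_load_kw:.0%} of IT load)")      # 160 kW, 16%
```
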
The other key part of Neptune is that a lot of these, in fact most of these, products were built not in a vacuum, but designed and built in conjunction with key partners like the Barcelona Supercomputing Center and LRZ in Munich, Germany. These were real-life customers working with us jointly to design these products. Neptune, very simplistically put, is an entire suite of hardware and software that allows you to run very high-performance processors at a level of power and cooling utilization you'd expect from a much lower-power processor, for all the heat it dissipates. The other key part is, you know, the normal way of cooling anything is to run chilled water; we don't use chilled water, so you save the money of chillers. We use ambient-temperature water, up to 50 degrees, at 90% efficiency: 50-degree water goes in, 60-degree water comes out. It's really amazing, the entire suite.
>> It's 50 Celsius, not Fahrenheit.
>> It's Celsius, correct.
>> Oh.
>> Dr. Gruner talked about SciNet with the rear-door heat exchanger. You actually have to stand in front of it to feel the magic of this, right? As geeky as that is. You open the door and it's this hot 60-, 65-degree C air; you close the door and it's this cool 20-degree air that's coming out. So the costs of running a data center drop dramatically with either the rear-door heat exchanger, our direct-to-node product, the SD650, which we just released, or something called the Thermal Transfer Module, which replaces a normal heat sink and brings water-cooling goodness to an air-cooled product.
>> Danny, I wonder if you can give us the final word, just on climate science in general. How's the community doing? Are there any technological things holding us back right now, or anything that excites you about the research right now?
>> Technology holds you back by the sheer size of the calculations that you need to do, but it's also physics that holds you back.
>> Yes.
>> Because doing the actual modeling is very difficult, and you have to be able to believe that the physics models actually work. This is one of the interesting things about Dick Peltier, who happens to be our scientific director and is also one of the top climate scientists in the world: he's proven through some of his calculations that the models are actually pretty good. The models were designed for current conditions, with current data, so that they would reproduce the evolution of the climate that we can measure today. Now, what about climate that started happening 10,000 years ago, right? The climate has been going on forever and ever. There have been glaciations; there have been all these events. It turns out it has been recorded in history that there are oscillations in temperature and other quantities that happen about every 1,000 years, and nobody had been able to prove why they would happen. It turns out that the same models we use for climate calculations today, if you take them back and do what's called paleoclimate (you start by approximating the conditions that happened 10,000 years ago and then move forward), reproduce those oscillations exactly. It's very encouraging that the climate models actually make sense. We're not talking in a vacuum. We're not predicting the end of the world just because. These calculations are right. They're correct. They're predicting that the temperature of the earth is climbing, and it's true, we're seeing it, but it will continue unless we do something. Right? It's extremely interesting.
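
To put those Neptune inlet/outlet temperatures in perspective, a minimal calorimetry sketch. The 50-degrees-in, 60-degrees-out figures come from the interview; the per-node flow rate is a hypothetical illustrative value, not a Lenovo specification:

```python
# Heat carried away by water warming from 50 C to 60 C.
# q = m_dot * c_p * dT  (basic calorimetry)
c_p = 4.18            # specific heat of water, kJ/(kg*K)
d_t = 60.0 - 50.0     # temperature rise across the system, K

flow_kg_s = 0.05      # hypothetical flow per node, kg/s (~3 L/min)
q_kw = flow_kg_s * c_p * d_t

print(f"{q_kw:.1f} kW removed per node at {flow_kg_s*60:.0f} kg/min of flow")
# ~2.1 kW per node; ten such nodes need only ~0.5 kg/s of water in total.
```
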
Now he's beginning to apply those paleoclimate results to studies with anthropologists and archeologists. We're trying to understand the events that happened in the Levant, in the Middle East, thousands of years ago, and correlate them with climate events. Now, is that cool or what?
>> That's very cool.
>> So, I think humanity's greatest challenges, again...
>> I know!
>> He just added global warming to it.
>> You have a fun job. You have a fun job.
>> It's all the interdisciplinarity that has now been made possible. Before, we couldn't do this. Ten years ago we couldn't run those calculations; now we can. So it's really cool.
>> Amazing. Great. Well, Madhu, Danny, thank you so much for coming on the show.
>> Thank you for having us.
>> It was really fun talking to you.
>> Thanks.
>> I'm Rebecca Knight, for Stu Miniman. We will have more from Lenovo Transform just after this. (tech music)

Published Date : Sep 13 2018


Day 2 Intro with Stephen Foskett, TechFieldDay - DockerCon 2017 - #theCUBE - #DockerCon


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering DockerCon 2017, brought to you by Docker and support from its ecosystem partners. (techno music)
>> Hi, I'm Stu Miniman, and this is SiliconANGLE Media's production of theCUBE, the worldwide leader in enterprise tech coverage, and this is DockerCon 2017. We're here at the Austin Convention Center; we just had the day-two kick-off keynote. Really, yesterday was the developer day; today is the enterprise day. And to help me break down the latest news and what's happening in the ecosystem, I grabbed just some guy. (laughter) And of course, that's actually in his Twitter bio, which is why I say it. I happen to have a good friend of mine, a good friend of the community, Stephen Foskett, who is the organizer of Tech Field Day. Stephen, always great to see ya, and thanks for taking time out to get a little casual and dig into some open-source developer stuff.
>> Yeah, you know, these are the developers. I'm used to wearing my fancy clothes, but I figured I would try to blend in a little bit here with the DevOps crowd at DockerCon.
>> Yeah, I saw one of the demo guys had like a flashy jacket. I figured you'd come in in tails and...
>> Yeah, I do usually have flashy shirts and stuff on, but yesterday I felt a little out of place. I mean, these guys are, well, a lot of t-shirts here.
>> Yeah, so today, not as many announcements, but it's always interesting. Shows like Amazon's, shows like this, it's like, okay, one day let's talk to the developers and one day let's talk to the enterprise. What's your take on that? How is Docker doing with their maturation, and what do you see in the marketplace?
>> Yeah, I think that's really the key to what they're planning. So yesterday, I don't want to say developer day, because it was developer and ops, but it was basically traditional Docker day. And today is all about the enterprise. And I think that Docker had a very clear goal for today, and that was to really plant their flag and say, not just "Docker in the data center" like last year, but that Docker is not only ready to be in the enterprise, and not only has the tools to be in the enterprise, but is already there with some major customers.
>> Yeah, and great customers: they had Visa and MetLife up onstage, and no better way to say we're ready for enterprise applications than to say, hey, Oracle is in the store there. What's your take, anything on the customer case studies, Oracle?
>> Well, let's take the customer case studies first. So clearly the takeaway from the Visa presentation and the MetLife presentation was nothing more than: Visa is using Docker in anger, MetLife is using Docker in anger. I mean, basically these are massive, traditional companies with absolutely critical workloads, huge security requirements, and they're using Docker in production. I think that, if we had all listened, Ben could have stood up there and said, "Hey everybody: Docker, MetLife, enterprise, production," and that would have been a substitute for 45 minutes of discussion. Because it's not like Visa's really going to tell us the secret ins and outs of their infrastructure, but they told us the most important thing, which is that a lot of those transactions are running through Docker containers. And that's what Docker wanted us to hear.
>> It's interesting. Ben kind of blew up the myth of bimodal IT. And one of the things we'd kind of been looking at, and want to get your opinion on, is taking my older applications and just kind of wrapping and moving them.
Without changing a line of code, I can bring this into this environment, what many of us called for years 'lift and shift'. What do you think about building new, modern applications versus the old applications? And of course, customers don't have two IT environments; they usually need to move things together and have kind of a whole strategy.
>> Yeah, well, I'm ambivalent about this whole concept of bimodal IT, but I'm not ready to reject it. I think it still matters from an app perspective, from an app-to-app perspective, and I think it's absolutely true that there are multiple kinds of apps. In fact, I think there are probably more than two kinds. I think that's maybe the real problem. You've got the real traditional applications; you know, Southwest just announced that they're moving their reservation system forward from some old mainframe to some new mainframe, and that's causing all sorts of disruption in travel. Those kinds of applications, and then there are the more open-systems packaged applications from the '90s and the 2000s, and those things can be moved forward. And then there are sort of the applications that can be really modernized with containers, and then there are the applications that you can 'microservice-ize', and then there are real cloud applications. So it's not just bimodal IT, it's really octomodal IT.
>> And I liked that Ben put it up there; it was a journey that they talked about. It's: let's get everything onto kind of a shared platform and have a way that we can do it the old way, start breaking it apart into more pieces, or totally rewrite. Because we know the migration cost of having to rewrite an application; it's really tough.
>> Stephen: It's huge.
>> But it's something that, for too long, people were like, 'oh well, I'll just run on that really old application that kind of sucked, for way too long.' So I know sometimes I get on my soapbox and say, please, your users hate that application and they'd like to be a little bit more modern. But it's not an easy thing, and there are multiple paths to get there. There was an announcement; they called it 'modernize traditional applications'. Any take on that, and how does it fit into the discussion we were just having?
>> Well, they talked about that a little bit today. Not to put in too much of a plug, but we actually had a 45-minute discussion of that with Tech Field Day on Monday, and it was embargoed. But the video is actually uploaded now, so if you just Google 'Tech Field Day Docker modernize traditional applications', there's a much deeper dive into that and really what it means. Essentially, it's a take on the old P2V strategy that we saw in virtualization: it is possible to literally just scoop up a traditional application and put it in a container. But it's doing more than that, and there are all sorts of things going on here: they're identifying which components are part of the application, and they're helping you set up the network so that the application will still connect the right way. And I think, by choice, Docker didn't really want to emphasize all the real nuts and bolts. I mean, they showed a great, well, an amusing demo of this in action with Ben playing the straight man at the keynote, and that's worth watching as well, but it remains to be seen to what extent they're going to be able to modernize traditional applications and containerize traditional applications.
>> Okay, so Stephen, one of the things that is probably the least mature in the Docker ecosystem is storage.
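
Before the conversation turns to storage, a minimal sketch of the P2V-style "scoop it up and containerize it" idea Foskett describes above. This hand-written Dockerfile is only illustrative: the base image, file names, and settings are hypothetical placeholders, and Docker's MTA tooling automates this kind of discovery rather than having you write it by hand.

```dockerfile
# Hypothetical sketch: wrapping a legacy WAR-based Java app, unchanged.
# Assumption: the app already runs on Tomcat 8.5; all names are placeholders.
FROM tomcat:8.5

# Copy the existing application artifact as-is -- no code changes.
COPY legacy-claims-app.war /usr/local/tomcat/webapps/ROOT.war

# Preserve the JVM settings the app had on its old VM.
ENV JAVA_OPTS="-Xmx2g"

EXPOSE 8080
# Tomcat's stock entrypoint starts the app just as it ran before.
```
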
I know it's something you've spent some time digging into. What's your take on where we are with storage and containers, where it needs to go? What's the truth and reality?
>> Yeah, well, as you say, my background is storage. And I love storage, I really do. But absolutely, when I first started experimenting with Docker, I was really blown away by the sort of amateur-hour storage approach that they took. I mean, it was essentially, here's a company that knows nothing about storage or networking, building a storage and a networking system. You know, what's wrong with these people? But over time, my view has become a little more nuanced. Because I see that Docker wasn't trying to build an enterprise-grade storage infrastructure; they were trying to build a storage layer that would let you efficiently deploy containers. The whole idea always was that storage would be external to the container. And if you're using internal container storage, if you're using the layered file systems, you're doing it wrong if you're doing any kind of real I/O. And so, you know, we saw a proliferation of plug-ins to allow you to use real storage systems, enterprise storage systems; Ben mentioned Nimble and NetApp and companies like that. And in addition, we're starting now to see a whole raft of really interesting, basically container storage arrays. So you've got companies like StorageOS and Portworx developing real enterprise-concept storage specifically targeted at containers. And I think that that's really what's going to happen: we're going to have containers using the layered Docker storage, but real heavy-I/O and enterprise applications are either going to use plugged-in enterprise storage or Dockerized enterprise storage.
>> Reminds us a lot of what we saw with virtualization...
>> Stephen: Absolutely.
>> We spent a decade fixing that. I actually remember, at Intel Developer Forum, gosh, was it like two years ago, Nick Weaver, a good friend of ours, works over at Intel, used to work at EMC, gave this presentation. I get up at the end and I'm like, 'hey Nick, how are we going to solve all these issues like we did for VMware?' And he was like, 'oh my gosh.'
>> And it's pretty much the same story, isn't it?
>> It is that same story.
>> You know, we're seeing basically the same thing, like virtual storage appliances equals container storage appliances.
>> The oversimplified thing of it for me is, I felt like in the virtualization layer we moved along faster with storage, and networking took a long time, and here it's flipped. Networking seems to move along a little bit faster, and storage is there, but it's a little nuanced as to what that storage solution looks like. It's not just, 'oh, we put it all in the hypervisor, and eventually it works and we do everything in the VM layer.' It's like, well, containers are a little bit different.
>> Yeah, and some of these container storage solutions are really clever. They've taken the lessons from virtualization, from cloud storage; they're building distributed storage; it's really cool. But I think there's another thing to think about there too, and that's that Docker invested pretty heavily in creating, I don't want to say a real enterprise networking layer, but a better networking layer for Swarm. And I think that that may be a road sign of what they may do for storage as well.
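
For context on the plug-in model described above, here is the generic shape of the external volume-driver workflow in the Docker CLI. The driver name and option keys are placeholders, not any particular vendor's syntax; each plugin documents its own driver name and options.

```bash
# Hypothetical example: the generic volume-plugin workflow.
# "somevendor/driver" and the --opt keys are placeholders.

# 1. Create a volume backed by an external storage system via its plugin.
docker volume create --driver somevendor/driver \
    --opt size=20GB \
    pgdata

# 2. Run a container whose real I/O lands on that external volume,
#    not on the copy-on-write layered filesystem.
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.6
```
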
I think we may see Docker developing a more advanced storage layer, maybe not an enterprise storage layer, but at least something scalable, something distributed, for Swarm customers.
>> Yeah, I want to get just a little broader from you, just your take on storage these days. I look at the adoption of Amazon; VMware's going to go on Amazon. Look, Azure Stack's coming out this summer, and you know, we're going to have S2D (Storage Spaces Direct) as the storage layer for what that's built on. What does the storage market look like from the Foskett viewpoint?
>> Well, storage is really conservative, and when you talk about the market and you talk about the technology, those are two very different things. The technology is rapidly advancing; the world is right now being blown away by the current wave, which is distributed, NVMe, ultra-high-performance flash storage, exemplified by a company like Excelero, for example. That's absolutely the coolest stuff out there right now. But then the market is still adopting SAN. You know what I mean? The market is still, 'hey, should we implement iSCSI? Hey, should we look at NFSv4?' Things like that, and it's a real kind of facepalm thing, because you look at the reality of storage, and it doesn't keep up with the promise of enterprise storage. And then there's the whole aspect of cloud storage, off-premises storage, and that is also a potential game-changer for the market. But overall, I would say you'd be a fool to bet on a radical transformation of storage. It's just not going to happen. You know, that's why HP's going to get tremendous value out of buying Nimble. That's why NetApp and Dell EMC are going to be selling a lot of product for a long time. Because although they're innovating and advancing and keeping up with some of these new waves of storage, the truth is most buyers are still buying very calm, boring stuff.
>> Alright, Stephen, unfortunately we're running low on time, so why don't you have the final word. Let's talk about the community aspect. I love it, you come to a lot of these open-source shows, and it's just got a great vibe: enthusiastic, really people that want to learn. And I know that always excites me; it's the kind of thing that you love, hanging out with those people too. What's your take on the Docker ecosystem and community?
>> It's wonderful. I mean, it reminds me of how VMware was back, well, in the last decade. It's a warm, inviting, exciting community. And one of the things that I really want to highlight here at DockerCon is that it's a lot more of a diverse community than I've seen traditionally in IT. I'm more in enterprise IT, and so there are a lot of people walking around that look like me. And looking here, there are a lot of people that don't. And that is fantastic. Docker has done a great job of emphasizing diversity; they've got onsite child care, and, I mean, Solomon tweeted that 20% of the attendees at DockerCon are women. To me, yeah, the vibe is great, but wow! Talk about broadening IT, and talk about modernizing IT. That's modernizing IT.
>> Alright, well, Stephen Foskett, always great to catch up with you. I'm sure I will see you at many conferences throughout our travels this year, and we've got a full day of coverage here from DockerCon 2017. Solomon Hykes is coming on; we do have Visa, who did the case study, many other partners, and Oracle, who made an announcement today.
I've also got a couple of service providers who participated in Stephen's TFDx event here before the show. So stay tuned for all our coverage, and thank you for watching theCUBE. (techno music)

Published Date : Apr 19 2017
