The Impact of Exascale on Business | Exascale Day
>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. Welcome, everyone, to theCUBE's celebration of Exascale Day. Shaheen Khan is here. He's a founding partner and analyst at OrionX, and, among other things, he is the co-host of Radio Free HPC. Shaheen, welcome. Thanks for coming on.
>> Thanks, Dave. Great to be here. How are you doing?
>> Well, thanks. Crazy, doing these things during COVID with remote interviews. I wish we were face to face at a supercomputing show, but hey, this thing is working; we can still have great conversations. And I love talking to analysts like you, because you bring an independent perspective and a very wide observation space. So, like many analysts, you probably have a mental model or a market model that you look at. Maybe talk about your work, how you look at the market, and we can get into some of the megatrends that you see.
>> Very well. Let me just quickly set the scene. We fundamentally track the megatrends of the information age, and of course, because we're in the information age, digital transformation falls out of that. The megatrends that drive it, in our mind, are IoT, because that's the fountain of data; 5G, because that's how it's going to get communicated; AI and HPC, because that's how we're going to make sense of it; blockchain and cryptocurrencies, because that's how it's going to get transacted, how value is going to get transferred from place to place; and finally, quantum computing, because that exemplifies how things are going to get accelerated.
>> So let me ask you: I spent a lot of time at IDC, and I had the pleasure of having the high-performance computing group report into me. I wasn't an HPC analyst, but over time you listen to those guys and you learn.
And as I recall, HPC was everywhere, and it sounds like we're still seeing that trend, whether it was the internet itself, or certainly big data coming into play, or defense, obviously. But is your background more HPC, or these other technologies you're talking about? It sounds like you're a high-performance computing expert and market watcher, and then you see it permeating into all these trends. Is that a fair statement?
>> That's a fair statement. I did grow up in HPC. My first job out of school was working for an IBM fellow doing payroll processing in the old days, and it went from there. I worked for Cray Research, I worked for Floating Point Systems, so I grew up in HPC. But then, over time, we had experiences outside of HPC. For a number of years I went and did commercial enterprise computing and learned about transaction processing, business intelligence, data warehousing, and things like that, and then e-commerce, and then web technology. So over time it expanded. But HPC is like a bug: you get it and you can't get rid of it, because it's just so inspiring. So supercomputing has always been my home, so to say.
>> Well, and the reason I ask is I wanted to touch on a little history of the industry. There was kind of a renaissance many, many years ago, and you had all these startups: Kendall Square Research, Danny Hillis' Thinking Machines, Convex trying to make mini-supercomputers. There was tons of money flowing in, and then things consolidated a bit and got very, very specialized. And then with the big data craze, we've seen HPC really at the heart of all that. So what's your take on the ebb and flow of the HPC business and how it's evolved?
>> Well, HPC was always trying to make sense of the world, trying to make sense of nature.
And of course, as much as we do know about nature, there's a lot we don't know, and you can classify problems in nature into basically linear and nonlinear problems. The linear ones are easy; they've already been solved. Of the nonlinear ones, some are easy, many are hard, and the nonlinear, hard, chaotic problems are the ones you really need to solve as you get closer to reality. So HPC was basically marching along, trying to solve these things. It had a whole process, the scientific method, going way back to Galileo: experimentation was part of it, and then between theory and experiment you looked at the data, theorized, and experimented to prove the theories. Then simulation, using computers to validate things, eventually became a third pillar of science, so you had theory, experiment, and simulation. All of that was going on until the rest of the world, thanks to digitization, started needing some of those same techniques. Why? Because you've got too much data. Simply put, there's too much data to ship to the cloud, and too much data to make sense of without math and science. So now enterprise computing problems are starting to look like scientific problems, enterprise data centers are starting to look like national-lab data centers, and there is a convergence that has been taking place gradually over the past three or four decades. And it's getting really real now.
>> Interesting. I want to ask you about something I like to talk to analysts about: competition. The competitive landscape. Is the competition in HPC between vendors or countries?
>> Well, this is a very interesting question, because our other thesis is that we are moving a little bit beyond geopolitics to techno-politics.
There are now imperatives at the political level that are driving some of these decisions. Obviously, 5G is very visible as a piece of technology that is now in the middle of political discussions. COVID-19, as you mentioned, is a global challenge that needs to be solved at that level. AI: who has access to how much data, and with what sort of algorithms? And it turns out, as we all know, that for AI you need a lot more data than you thought you did. So suddenly data superiority is more important than ever, perhaps, because it can lead to information superiority. So yeah, that's really all happening. But the actors, of course, continue to be the vendors, which are the embodiment of the algorithms and the data and the systems and infrastructure that feed the applications, so to say.
>> So let's get into some of these megatrends, and maybe I'll ask you some Columbo questions and we can geek out a little bit. Let's start with AI. Again, when I started in the industry, AI and expert systems were all the rage, and then we had this long AI winter, even though the technology never went away. But there were at least two things that happened: you had all this data, and the cost of computing came down so rapidly over the years. So now AI is back, and we're seeing all kinds of applications getting infused into virtually every part of our lives, people trying to advertise to us, etcetera. So talk about the intersection of AI and HPC. What are you seeing there?
>> Yeah, definitely. Like you said, AI has a long history. It came out of the MIT Media Lab and the AI Lab that they had back then, and it was really, as you mentioned, all focused on expert systems. It was about logical processing, a lot of if-then-else. And then it morphed into search: how do I search for the right answer, the needle in the haystack?
But then, at some point, it became computational. Neural nets are not a new idea. We had a researcher in our lab who was doing neural networks years ago, and he kept saying how he was running out of computational power, and we wondered what was taking all this time. It turns out that it is computational. So when deep neural nets showed up, about a decade or so ago, it finally started working, and it was a confluence of a few things: the algorithms were there, the data sets were there, and the technology was there, in the form of GPUs and accelerators, that finally made it tractable. So you really could say, as I do say, that AI was kind of languishing for decades before HPC technologies reignited it. And when you look at deep learning, which is really the only part of AI that has been prominent and has made all this stuff work, it's all HPC: it's all matrix algebra, it's all signal processing, the algorithms are computational, the infrastructure is similar to HPC, and the skill set you need is the skill set of HPC. I see a lot of interest in HPC talent right now, in part motivated by AI.
>> Awesome, thank you. Then I want to talk about blockchain, and I can't talk about blockchain without talking about crypto; you've written about that. Obviously supercomputers play a role. I think you had written that 50 of the top crypto supercomputers actually reside in China. A lot of the time the vendor community doesn't like to talk about crypto because of the fraud and everything else, but it's one of the more interesting use cases, actually the primary use case for blockchain, even though blockchain has so much other potential. So what do you see in blockchain, and the potential of that technology? And maybe we can work in a little crypto talk as well.
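Khan's point that deep learning is "all matrix algebra" is easy to make concrete: a dense neural-network layer is just a matrix multiply plus a simple nonlinearity. A minimal illustrative sketch in NumPy (an editorial example, not from the interview; the shapes and values are arbitrary):

```python
import numpy as np

# A dense layer is, at its core, a matrix multiply plus a nonlinearity,
# exactly the kind of workload HPC hardware was built to accelerate.
rng = np.random.default_rng(0)

x = rng.standard_normal((32, 128))   # batch of 32 inputs, 128 features each
W = rng.standard_normal((128, 64))   # weight matrix for a 64-unit layer
b = np.zeros(64)                     # bias vector

h = np.maximum(x @ W + b, 0.0)       # dense layer with ReLU activation

print(h.shape)  # (32, 64)
```

Scaling that same multiply across many layers and billions of parameters is what makes GPUs, accelerators, and HPC interconnects central to modern AI.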
>> Yeah, I think one simple way to think of blockchain is in terms of so-called permissioned and permissionless. With permissioned blockchains, everybody kind of knows everybody, and you don't really get to participate without people knowing who you are and, as a result, having some basis to trust your behavior and your transactions. So things are a lot calmer, it's a lot easier, and you don't really need all the supercomputing activity. Whereas for AI the assertion was that intelligence is computable, and with some of these exascale technologies we're getting to that point, for permissionless blockchain the assertion is that trust is computable. And it turns out that for trust to be computable, it's really computationally intensive, because you want to provide an incentive basis such that good actors are rewarded, bad actors are punished, and it is worth everyone's while to actually put their effort toward good behavior. That's really what you see embodied in a system like Bitcoin, where the chain has been safe over many years: no attacks, no breaches. Now, people have lost money because they forgot a password, or because custody of the accounts was not trustable, but the chain itself has managed to deliver that. So that's an example of computational intensity yielding trust. Suddenly that becomes really interesting. Intelligence, trust: what else is computable, that we could do if we had enough power?
>> Well, that's really interesting, the way you described it: essentially the confluence of cryptography, software engineering, and game theory, really, where the bad actors are incentivized to mine Bitcoin rather than rip people off, because their lives are better that way. So, okay, make the connection. I mean, you sort of did, but I want to better understand the connection between supercomputing and HPC and blockchain.
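The "trust is computable" idea Khan describes is embodied in proof of work: miners spend compute searching for a nonce whose hash meets a difficulty target, and anyone can verify the result instantly. A toy sketch, as an editorial illustration only (real Bitcoin hashes a binary block header with double SHA-256 at a vastly higher difficulty):

```python
import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 digest starts with a
# given number of zero hex digits. Finding it is expensive; checking
# it is instant, which is what makes good behavior verifiable.
def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block-42", difficulty=4)
print(verify("block-42", nonce, difficulty=4))  # True
```

Raising `difficulty` by one multiplies the expected work by 16 while leaving verification just as cheap, which is the asymmetry that makes cheating uneconomical.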
We know we get crypto for sure, like mining Bitcoin, which gets harder and harder. And you mentioned there are other things where we can potentially compute trust. Like what? What else are you thinking of there?
>> Well, I think the next big thing we are really seeing is in communication. It turns out, as I was saying earlier, that these highly computationally intensive algorithms and models show up in all sorts of places. In 5G communication there's something called MIMO, multiple-in, multiple-out, and optimally managing that traffic, so that you know exactly what beam a signal is going to and which antenna it's coming from, turns out to be a nontrivial partial differential equation. So next thing you know, you've got HPC in there, where you didn't expect it. And because there's so much data to be sent, you really have to do some data reduction and data processing almost at the point of inception, if not at the point of aggregation. That has led to edge computing and edge data centers, and there, too, people want some level of computational capability. If you're building a microcontroller, which traditionally would just be a small, low-power, low-cost thing, people now want vector instructions there, people want matrix algebra there, because it makes sense to process the data before you have to ship it. So HPC is cropping up really everywhere. And then, finally, when you're trying to accelerate things: GPUs have obviously been a great example of that, mixed-signal technologies are coming to do analog and digital at the same time, and quantum technologies are coming. So you can do the usual analyst's two-by-two, where you have analog and digital, classical and quantum, and then see what lies where. All of that is coming, and all of that is essentially resting on HPC.
>> That's interesting.
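The "process the data before you have to ship it" pattern can be sketched with a toy example: an edge node summarizes a window of sensor readings locally and forwards only the mean plus any sharp outliers. This is an editorial illustration with hypothetical numbers and an arbitrary threshold, not anything from the interview:

```python
# Toy edge-side data reduction: ship a summary instead of raw readings.
def reduce_at_edge(readings, threshold=2.0):
    # Compute the local mean and standard deviation of the window.
    mean = sum(readings) / len(readings)
    std = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
    # Forward only readings that deviate sharply from the mean.
    anomalies = [r for r in readings if std and abs(r - mean) > threshold * std]
    return mean, anomalies

readings = [20.1, 19.8, 20.3, 20.0, 55.0, 19.9]   # one spike in six readings
mean, anomalies = reduce_at_edge(readings)
print(anomalies)  # [55.0]
```

Instead of six raw values, the node ships two numbers, the mean and the one anomaly; at camera frame rates, the same idea cuts bandwidth by orders of magnitude, as the interview goes on to discuss.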
I didn't realize that HPC had that position in 5G, with MIMO; that's a great example. And then IoT: I want to ask you about that, because there's a lot of discussion about real-time inferencing, AI inferencing at the edge, and you're seeing new computing architectures potentially emerging. Nvidia's acquisition of Arm, perhaps: a more efficient, maybe lower-cost way of doing specialized computing at the edge. But it sounds like you're envisioning, actually, supercomputing at the edge. Of course, we've talked to Dr. Mark Fernandez about spaceborne computers; that's like the ultimate edge, with supercomputers hanging on the ceiling of the International Space Station. But how far away are we from this sort of edge? Maybe space is an extreme example, but do you see factories and windmills and all kinds of edge examples where supercomputing is playing a local role?
>> Well, I think initially you're going to see it on base stations and antenna towers, where you're aggregating data from a large number of endpoints and sensors that are gathering the data, maybe doing some level of local processing, and then shipping it to the local antenna, because it's no more than 100 meters away, sort of a thing. But there is enough there that that node can now do the processing, do some level of learning, and decide what data to ship back to the cloud, what data to get rid of, and what data to just hold. Those edge data centers sitting on top of an antenna could have half a dozen GPUs in them; they're pretty powerful things. They could have one, they could have two, depending on what you do. A good case study there is surveillance cameras. You don't really need to ship every image back to the cloud, and if you ever do need it, the person who needs it is going to be on the scene, not back at the cloud. So there is really no sense in sending it all, certainly not every frame.
So maybe you do some processing and send an image every five seconds or every ten seconds; that way you have a record of it, but you've reduced your bandwidth by orders of magnitude. Things like that are happening, and making sense of all of that means recognizing when things change: did somebody come into the scene, or did it just become night? That sort of decision can now be automated, and what is fundamentally making it happen may not be supercomputing of exascale class, but it's definitely HPC, definitely numerically oriented technology.
>> Shaheen, what do you see happening in chip architectures? Because you see the classical Intel approach, trying to put as much function on the real estate as possible, and we've seen the emergence of alternative processors, particularly GPUs, but also FPGAs, and I mentioned the Arm acquisition. These alternative processors are really gaining momentum, data processing units are emerging, and there are kind of interesting trends going on there. What do you see, and what's the relationship to HPC?
>> Well, I think a few things are going on there. One, of course, is essentially the end of Moore's law: you cannot make the cycle time any faster, so you have to make architectural adjustments. And then, if you have a killer app that lends itself to large volume, you can build silicon that is especially good for it. Graphics and gaming was an example of that, and people said, oh my God, I've got all these cores in there, why can't I use them for computation? So everybody got busy making them 64-bit capable, with some graphics capability. Then people said, oh, I can use that for AI. And as you move to AI, you say, well, I don't really need 64 bits; maybe I can do it in 32, or 16. So now you do it for that, and then tensor cores come about. So there's that progression of architecture trumping, basically, cycle time.
That's one thing. The second thing is scale-out, decentralization, and distributed computing. That means the inter-communication and intra-communication among all these nodes now becomes an issue, a big enough issue that maybe it makes sense to go to a DPU, and maybe it makes sense to do some level of edge data centers, like we were talking about. And then the third thing, really, is that in many of these cases you have streaming data. What is really coming from IoT, especially at the edge, is streaming data, and when data is streaming, suddenly new architectures like FPGAs become really interesting and hold promise. So I do see FPGAs becoming more prominent just for that reason. But then, finally, you've got to program all of these things, and that's really a difficulty, because you now need to get three different ecosystems together: mobile programming, embedded programming, and cloud programming. Those are really three different developer types. You can't hire somebody who's good at all three; I mean, maybe you can, but not many. So all of that presents challenges that are driving this industry.
>> You kind of referred to this distributed network, and a lot of people refer to the next generation of cloud as this hyper-distributed system, when you include the edge and multiple clouds, etcetera. Maybe that's too extreme, but to your point, at least as I inferred it, there's an issue of latency; there's the speed of light. So what is the implication for HPC? Does that mean I have to have all the data in one place? Can I move the compute to the data? Architecturally, what are you seeing there?
>> Well, you fundamentally want to optimize when to move data and when to move compute. Is it better to move data to compute, or to bring compute to data, and under what conditions? The answer is going to be different for different use cases.
It's like, really, is it worth my while to make the trip, get my processing done, and then come back? Or should I just develop processing capability right here? Moving data is really expensive, and relatively speaking, it has become even more expensive: while the price of everything has dropped, its price has dropped less than that of processing. So it is now starting to make sense to do a lot of local processing, because processing is cheap and moving data is expensive. DPUs are an example of that. We call this in-situ processing: let's not move data if we don't have to. Except that we live in the age of big data, so data is huge and wants to be moved, and that optimization, I think, is part of what you're referring to.
>> Yeah. So a couple of examples might be autonomous vehicles, where you have to make decisions in real time and can't send data back to the cloud. The flip side of that is what we talked about with spaceborne computers: you're collecting all this data, and at some point, maybe a year or two after it's lived out its purpose, you ship that data back in a bunch of disk drives or flash drives, then load it up into some kind of HPC system and have at it, and then you do more modeling and learn from that data corpus, right? I mean, those are,
All of that can bring all of what we talked about. 21 place. >>Well, that's interesting. Okay, so yeah, the fleets talking to each other. So kind of a Byzantine fault. Tolerance. That problem that you talk about that z kind of cool. I wanna I wanna sort of clothes on quantum. It's hard to get your head around. Sometimes You see the demonstrations of quantum. It's not a one or zero. It could be both. And you go, What? How did come that being so? And And of course, there it's not stable. Uh, looks like it's quite a ways off, but the potential is enormous. It's of course, it's scary because we think all of our, you know, passwords are already, you know, not secure. And every password we know it's gonna get broken. But give us the give us the quantum 101 And let's talk about what the implications. >>All right, very well. So first off, we don't need to worry about our passwords quite yet. That that that's that's still ways off. It is true that analgesic DM came up that showed how quantum computers can fact arise numbers relatively fast and prime factory ization is at the core of a lot of cryptology algorithms. So if you can fact arise, you know, if you get you know, number 21 you say, Well, that's three times seven, and those three, you know, three and seven or prime numbers. Uh, that's an example of a problem that has been solved with quantum computing, but if you have an actual number, would like, you know, 2000 digits in it. That's really harder to do. It's impossible to do for existing computers and even for quantum computers. Ways off, however. So as you mentioned, cubits can be somewhere between zero and one, and you're trying to create cubits Now there are many different ways of building cubits. You can do trapped ions, trapped ion trapped atoms, photons, uh, sometimes with super cool, sometimes not super cool. But fundamentally, you're trying to get these quantum level elements or particles into a superimposed entanglement state. 
And there are different ways of doing that, which is why the quantum computer makers out there are pursuing a lot of different approaches. Somebody said it's really nice that quantum computing is simultaneously overhyped and underestimated, and that is true, because a lot of the effort is still a ways off. On the other hand, it is so exciting that you don't want to miss out if it's going to get somewhere. So it is rapidly progressing, and it has now morphed into three different segments: quantum computing, quantum communication, and quantum sensing. Quantum sensing is when you can measure really minute things precisely, because when you perturb them, the quantum effects allow you to measure them. Quantum communication is working its way in, especially in financial services, initially with quantum key distribution, where the key to your cryptography is sent in a quantum way and the data is sent the traditional way. There are also efforts to build a quantum internet, where you actually have quantum photons going down the fiber-optic lines, and Brookhaven National Laboratory demonstrated, just a couple of weeks ago, a link going pretty much across Long Island, something like 87 miles. So it's really coming, and fundamentally, it's going to take brand-new algorithms.
>> So these examples that you're giving, these are all in the lab, right? They're lab projects?
>> Some of them are lab projects; some of them are out there. Of course, even traditional WiFi has benefited from quantum analysis and algorithms. But some of them, like quantum key distribution, are real today. If you're a bank in New York City, you very well could go to a company and buy quantum key distribution services and ship keys across the water to New Jersey; that is happening right now. Researchers in China and Austria showed a quantum connection from somewhere in China to Vienna, even as far away as that.
When you then add satellites and nanosatellites, and the bent-pipe networks that are being talked about out there, that brings another flavor to it. So yes, some of it is real; some of it is still in the lab.
>> I said I would end on quantum, but I just want to ask: you mentioned earlier the geopolitical battles that are going on. Who are the ones to watch, the horses on the track? Obviously the United States, China; Japan is still pretty prominent. How is that shaping up in your view?
>> Well, without a doubt, it's the United States' to lose, because it's got the density and the breadth and depth of all the technologies across the board. On the other hand, the information age is a new age; the information revolution is not trivial, and when revolutions happen, unpredictable things happen, so you've got to get it right. And one of the things these revolutions enforce is not just technological, social, and governance change, but also culture. The example I give is that if you're a farmer, it takes you maybe a couple of seasons to realize that you'd better get up at the crack of dawn and plant in this particular season, or you're going to starve six months later. You do that two or three years in a row, and a culture has been enforced on you, because that's what it takes. Then, when you go to industrialization, you realize that, gosh, I need these factories, and then I need workers, and the next thing you know, you've got nine-to-five jobs, which you didn't have before; you didn't have a command-and-control system, except in the military, not in business. Some of those cultural shifts take place and change things. So I think the winner is going to be whoever shows the most agility in terms of cultural norms and governance, and in the pursuit of actual knowledge, not being distracted by what you think,
but by what actually happens. And gosh, I think these exascale technologies can make the difference.
>> Shaheen Khan, great conversation. Thank you so much for joining us to celebrate Exascale Day, which is on 10/18. Really appreciate your insights.
>> Likewise. Thank you so much.
>> All right. Thank you for watching. Keep it right there; we'll be back with our next guest right here on theCUBE. We're celebrating Exascale Day. Right back.
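A footnote on the quantum segment above: the factoring example Khan cites (21 = 3 × 7) is trivial classically, and the quantum algorithm he alludes to is Shor's. The interesting case is numbers with thousands of digits, where classical trial division like the sketch below becomes hopeless. An illustrative example, not from the interview:

```python
# Classical trial division: instant for 21, hopeless for 2,000-digit numbers,
# which is why RSA-style cryptography rests on factoring being hard.
def factor(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factor(21))  # [3, 7]
```

Trial division takes on the order of sqrt(n) steps, so its cost grows exponentially in the number of digits; Shor's algorithm, on a sufficiently large quantum computer, would factor in time polynomial in the digit count.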
Maurizio Davini, University of Pisa and Thierry Pellegrino, Dell Technologies | VMworld 2020
>> From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners.

>> I'm Stu Miniman, and welcome back to theCUBE's coverage of VMworld 2020, our 11th year doing this show, of course, the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course, so really happy to welcome to the program, from the University of Pisa, the Chief Technology Officer, Maurizio Davini, and joining him is Thierry Pellegrino, one of our theCUBE alumni. He's the vice president of worldwide, I'm sorry, Workload Solutions and HPC with Dell Technologies. Thierry, thank you so much for joining us.

>> Thanks, Stu.

>> Thanks to you.

>> Alright, so let's start. The University of Pisa, obviously, you know, everyone knows Pisa, one of the, you know, famous, iconic cities out there. I know, you know, we all know histories in Europe are a little bit longer when you talk about, you know, some of the venerable institutions; here in the United States, yeah, it's, you know, a couple of hundred years. You know, how they're using technology and everything, I have to imagine the University of Pisa has a long, storied history. So just, if you could start before we dig into all the tech, give us, our audience, a little bit, you know, if they were looking it up on Wikipedia, what's the history of the university?

>> So the University of Pisa is one of the oldest in the world, because it was founded in 1343 by a pope. We were authorized to do university teaching by a pope during the late Middle Ages. So it's really, it's not the oldest of course, but one of the oldest in the world. It has a long history, but has never stopped innovating. So anything in Pisa has always been good for innovation.
So either for the teaching, or now for the technology applied to remote teaching, or calculation, or scientific computing, we never stop innovating, never stop trying to leverage new technologies and new kinds of approaches to science and teaching.

>> You know, one of your historical teachers, Galileo, you know, taught at the university. So, you know, phenomenal history. Help us understand, you know, you're the CTO there, what does that encompass? How many students are there, you know, are there certain areas of research that are done today, before we kind of get into the, you know, the specific use case today?

>> So consider that the University of Pisa is a campus in the sense that the university faculties are spread all over the town. A medieval town like Pisa poses a lot of problems from the infrastructural point of view. So, we have done a lot in the past to try to adapt the medieval town to the latest technology advancements. Now, we have 50,000 students, and consider that Pisa is a generalist university. So, we cover science like we cover letters, engineering, medicine, and so on. So, during the last 20 years, the university has done a lot of work to build an infrastructure that was able to develop and deploy the latest technologies for the students. So for example, we have a private fiber network covering all the town, 65 kilometers of dark fiber that belongs to the university, four data centers, one big and three little ones connected today at 200 gigabit Ethernet. We have a big data center, big for an Italian university, of course, not compared to Polish and U.S. universities, where we host the whole infrastructure for the enterprise services and the scientific computing.

>> Yep, Maurizio, it's great that you've had that technology foundation. I have to imagine the global pandemic, COVID-19, had an impact. What's it been? You know, how's the university dealing with things like work from home? And then, you know, Thierry would love your commentary too.
>> You know, we, of course, we were not ready. So we were hit by the pandemic and we had to adapt our service offering to transform from in-person to remote services. So we did a lot of work, but we were able, thanks to the technology that we had chosen, to serve almost 100% of our curriculum of study programs. We did a lot of work in the past to move to virtualization, to enable our users to work remotely, either through workstations or VDI or remote laboratories or remote calculation. So virtualization has shaped our services in the past. And of course, when we were hit by the pandemic, we were almost ready to transform our services from in-person to remote.

>> Yeah, I think it's true, like Maurizio said, nobody really was preparing for this pandemic. And even for Dell Technologies, it was an interesting transition. And as you can probably realize, a lot of the way that we connect with customers is in person. And we've had to transition over to modes of digitally connecting with customers. We've also spent a lot of our energy trying to help the HPC and AI community fight the COVID pandemic. We've made some of our own clusters that we use in our HPC and AI innovation center here in Austin available to genomic research or other companies that are fighting the virus. And it's been an interesting transition. I can't believe that it's already been over six months now, but we've found a new normal.

>> Let's get in specifically to how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell and their ecosystem to help the university with?

>> So we have a long history in HPC. Of course, as you can imagine, not at the scale of the biggest HPC that is done in the U.S. or in the biggest supercomputer centers in Europe. We have several systems for doing HPC, traditional HPC, that are based on the Dell Technologies offering.
We typically host all kinds of the best technology that is now available, of course not at a big scale but at a small-to-medium scale, which we are offering to our researchers and students. We have a strong relationship with Dell Technologies, developing together solutions to bring the latest technologies to scientific computing, and this has helped a lot in the research that has been done during this pandemic.

>> Yeah, and it's true. I mean, Maurizio is humble, but every time we have new technologies that are to be evaluated, of course we spend time evaluating in our labs, but we make it a point to share that technology with Maurizio and the team at the University of Pisa. That's how we find some of the better usage models for customers, help tune some configurations, whether it's on the processor side, the GPU side, the storage and the interconnect. And then the topic of today, of course, with our partners at VMware, we've had some really great advancements. Maurizio and the team are what we call a center of excellence. We have a few of them across the world where we have a unique relationship, sharing technology and collaborating on advancements. And recently Maurizio and the team have even become one of the VMware certified centers. So it's a great marriage for this new world where virtual is becoming the norm.

>> Well, Thierry, you and I had a conversation earlier in the year when VMware was really gearing up their full kind of GPU suite, and, you know, big topic in the keynote, you know, Jensen, the CEO of Nvidia, was up on stage. VMware was talking a lot about AI solutions and how this is going to help. So help us, bring us in; you work with a lot of the customers, Thierry. What is it that this enables for them, and how do, you know, Dell and VMware bring those solutions to bear?

>> Yes, absolutely. There's one statistic I'll start with. Can you believe that, on average, only 15 to 20% of GPUs are fully utilized?
So, when you think about the amount of technology that's at our fingertips, especially in a world today where we need that technology to advance research and scientific discoveries, wouldn't it be fantastic to utilize those GPUs to the best of our ability? And it's not just GPUs; the industry has, in the IT world, leveraged virtualization to get the maximum use out of CPUs and storage and networking. Now you're bringing the GPU into the fold, and you have better utilization and also flexibility across all those resources. So what we've seen is a convergence between the IT world that was highly virtualized and this highly optimized world of HPC and AI, because of the resources out there. Researchers, but also data scientists and companies, want to be able to run their day-to-day activities on that infrastructure, but then, when they have a big surge need for research or data science, use that same environment and seamlessly move things around workload-wise.

>> Yeah, okay, I do believe your stat. You know, the joke we always have is, you know, for anybody from a networking background, there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career of, let's try to optimize one thing, and then, oh, there's something else that we're not doing. So, you know, so important. Maurizio, I want to hear from your standpoint, you know, virtualization and HPC, you know, AI types of uses there. What value does this bring to you, and, you know, what key learnings have you had in your organization?

>> So, we as a university are big users of the VMware technologies, starting from the traditional enterprise workloads and VDI. We started from there, in the sense that we have an installation that is quite significant, but also almost all the services that the university gives to our internal users, either personnel, staff, or students.
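As an aside, Thierry's utilization statistic can be turned into a quick back-of-the-envelope sketch. The fleet size and the 80% pooling target below are illustrative assumptions, not figures from the interview:

```python
import math

def pooled_gpu_estimate(utilizations, target_utilization=0.8):
    """Back-of-the-envelope: how many pooled GPUs could serve the same
    aggregate demand as a fleet of dedicated, mostly-idle GPUs.

    `utilizations` holds one average utilization figure (0.0 to 1.0) per
    dedicated GPU; `target_utilization` is how hot we are willing to run
    a shared, virtualized pool."""
    aggregate_demand = sum(utilizations)  # demand in "GPU-equivalents"
    pooled = max(1, math.ceil(aggregate_demand / target_utilization))
    return {
        "dedicated_gpus": len(utilizations),
        "average_utilization": aggregate_demand / len(utilizations),
        "pooled_gpus_needed": pooled,
    }

# Ten dedicated GPUs, each about 15% busy, the low end of the range cited.
estimate = pooled_gpu_estimate([0.15] * 10)
print(estimate)
```

On those made-up inputs the same aggregate work fits on two well-utilized pooled GPUs instead of ten dedicated ones, which is the whole argument for virtualizing the accelerators alongside CPU, storage, and networking.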
At a certain point we decided to try to understand if VMware virtualization would be good also for scientific computing. Why? Because at the end of the day, the request that we have from our internal users is flexibility. Flexibility in the sense of being fast in deploying, fast in reconfiguring, trying to have the latest bits on the software side, especially for the AI research. At the end of the day, we designed the VMware solution as, I can say, a whiteboard. We have a whiteboard, and we are able to design a new solution on this whiteboard and to deploy it as fast as possible. Okay, what we face as IT is not a request for maximum performance. Our researchers ask us for flexibility; they want to be able to have the maximum possible flexibility in configuring the systems. How can I say, we can deploy a Slurm test cluster on the virtual infrastructure in minutes, or we can use GPUs inside the infrastructure to test new algorithms for deep learning. And we can use faster storage inside the virtualization to see how certain algorithms behave, so our internal developers can leverage the latest bits in storage, like NVMe, NVMe over Fabrics, and so on. And this is why, at a certain point, we decided to try virtualization as a base for HPC and scientific computing, and we are happy.

>> Yeah, I think Maurizio described it: it's flexibility. And of course, if you think optimal performance, you're looking at bare metal, but in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available, that flexibility at times trumps the raw performance. So, when you have two different research departments, two different portions, two different parts of the company looking for an environment, no two environments are going to be exactly the same. So you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today, it's actually fantastic.
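Maurizio's "whiteboard" idea, describing a cluster once and letting the virtualization layer deploy it, can be sketched roughly as follows. The function name, fields, and tier labels are hypothetical illustrations, not a real VMware or Dell API:

```python
def design_virtual_cluster(name, nodes, cores_per_node=8,
                           gpus_per_node=0, storage_tier="standard"):
    """Render a cluster request into a list of VM definitions, the way a
    template-driven provisioning layer might consume it."""
    vm_defs = []
    for i in range(nodes):
        vm_defs.append({
            "vm_name": f"{name}-node{i:02d}",
            "cores": cores_per_node,
            "gpus": gpus_per_node,
            "storage_tier": storage_tier,  # e.g. "nvme" for fast scratch
        })
    return vm_defs

# A small deep-learning test cluster: 4 nodes, 1 GPU each, NVMe scratch.
cluster = design_virtual_cluster("dl-test", nodes=4, gpus_per_node=1,
                                 storage_tier="nvme")
print(len(cluster), cluster[0]["vm_name"])
```

The point of the sketch is the workflow, not the schema: the request is declarative, so reconfiguring means editing the request and redeploying, which is why minutes-scale turnaround is plausible on a virtualized pool.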
Maurizio was sharing with me earlier this year that at some point, as we all know, there was a lockdown. You couldn't really get into a data center to move different cables around or reconfigure servers to have the right ratio of memory to CPU, to storage, to accelerators, and having been at the forefront of this enablement has really benefited the University of Pisa and given them that flexibility that they really need.

>> Wonderful. Well, Maurizio, my understanding is, I believe, you're giving a presentation as part of the activities this week. Give us a final glimpse into, you know, what you want your peers to be taking away from what you've done.

>> What we have done is something that is very simple, in the sense that we adapted some open source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solutions quickly and in an easy way on our VMware infrastructure. We started doing a sort of POC. We designed the test infrastructure early this year and then went quickly to production because we had good results. And so this is what we present, in the sense that you can have a lot of ways to deploy virtual HPC, but we went for a simple and open source solution. Also, thanks to our friends at Dell Technologies for some parts that enabled us to do the work and now to go into production. And as Thierry told before, this helped a lot during the pandemic, due to the fact that we had to stay at home.

>> Wonderful. Thierry, I'll let you have the final word. What things are you drawing customers to, to really dig in? Obviously there's a cost savings, or are there any other things that this unlocks for them?

>> Yeah, I mean, cost savings. We talked about flexibility. We talked about utilization. You don't want to have a lot of infrastructure sitting there, just waiting for a job to come in once every two months.
And then there's also the world we live in, where we all live our lives here through a video conference, or at times through the interface of our phone, and being able to have this web-based interaction with a lot of infrastructure, at times the best infrastructure in the world, makes things simpler and easier, and hopefully brings science to the fingertips of data scientists without their having to worry about knowing every single detail of how to build up that infrastructure. And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating, and everything that's been accomplished, you know, at Pisa can be accomplished by our customers and our partners around the world.

>> Thierry, Maurizio, thank you so much for sharing, and congratulations on all I know you've done building up that COE.

>> Thanks to you.

>> Thank you.

>> Stay with us, lots more coverage from VMworld 2020. I'm Stu Miniman as always. Thank you for watching theCUBE. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Maurizio | PERSON | 0.99+ |
Thierry | PERSON | 0.99+ |
Thierry Pellegrini | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
15 | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Austin | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
University of Pisa | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Jensen | PERSON | 0.99+ |
Maurizio Davini | PERSON | 0.99+ |
1343 | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
United States | LOCATION | 0.99+ |
65 kilometers | QUANTITY | 0.99+ |
50,000 students | QUANTITY | 0.99+ |
U.S. | LOCATION | 0.99+ |
200 gigabit | QUANTITY | 0.99+ |
Pisa | LOCATION | 0.99+ |
three little center | QUANTITY | 0.99+ |
Galileo | PERSON | 0.99+ |
today | DATE | 0.99+ |
11th year | QUANTITY | 0.99+ |
VMworld 2020 | EVENT | 0.99+ |
over six months | QUANTITY | 0.99+ |
20% | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
two different parts | QUANTITY | 0.97+ |
Thierry Pellegrino | PERSON | 0.97+ |
pandemic | EVENT | 0.97+ |
four data centers | QUANTITY | 0.96+ |
one big | QUANTITY | 0.96+ |
earlier this year | DATE | 0.96+ |
this week | DATE | 0.96+ |
Middle Ages | DATE | 0.96+ |
COVID pandemic | EVENT | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
VMworld | ORGANIZATION | 0.95+ |
100% | QUANTITY | 0.95+ |
early this year | DATE | 0.95+ |
20 years | QUANTITY | 0.91+ |
HPC | ORGANIZATION | 0.9+ |
two different research departments | QUANTITY | 0.9+ |
two different portions | QUANTITY | 0.89+ |
Poland | LOCATION | 0.88+ |
one thing | QUANTITY | 0.87+ |
Wikipedia | ORGANIZATION | 0.86+ |
Bill Philbin, HPE - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's theCUBE. Covering HPE Discover 2017. Brought to you by Hewlett-Packard Enterprise.

>> Okay, welcome back everyone. We're here live in Las Vegas for HPE, Hewlett-Packard Enterprise, Discover 2017. I'm John Furrier, co-host of theCUBE with Dave Vellante, and our next guest is Bill Philbin, who's the general manager of storage and big data for Hewlett-Packard Enterprise. Bill, welcome to theCUBE. Again, good to see you. I think you've been on since 2012, '13, '15.

>> Is that right? What, are we carbon dating ourselves now or something?

>> We've been tracking our CUBE alumni, but you're heading up the storage business--

>> Do I get a pen?

>> We're working on that, Jerry Chen--

>> Seven of them.

>> Jerry Chen at Greylock wants to have, now, badge values. So, welcome back.

>> Thank you, thank you for having me.

>> You were just on theCUBE at VeeamON, which is an event Dave was hosting; I missed it in New Orleans. But a lot of stuff going on around storage, certainly. Virtualization has been around for a while, but now with Cloud: whole new ballgame. Programmable infrastructure, hybrid IT; Wikibon's true private Cloud report came out showing that private Cloud on-prem is a $250 billion market. So nothing's really changing radically in the enterprise, per se, certainly maybe servers and storage, but people have got to store their data.

>> That's right.

>> What's the update from your perspective? What's the story here at HPE Discover?

>> So I think there's really three things we're talking about amongst a number of announcements. One is sort of the extension of our All Flash environment for customers who, as I was saying at Veeam, have the always-on expectation. The new world order is that we expect everything to be available at a moment's notice, so I was in the middle of the Indian Ocean, using Google Voice over satellite IP on the boat, talking to San Jose, and it worked.
That's the always-on environment, and the best way to get that is, you know, with an All Flash array, so that's number one. Number two, going back to the story about programmable infrastructures: storage also needs to be programmable, and so, if you've had Rick Lewis on, or if Rick Lewis is coming, he'll talk about composable infrastructure with Synergy, but the flip side of that is our belief that storage really needs to be invisible. And the acquisition of Nimble gets us a lot closer to doing that, in the same way that the safe self-driving car is all the rage. All that rich telemetry comes back; it's analyzed, fingerprinted, and sent out to customers, to a point where, I call it the Rule of 85: for 85% of the customers, the cases are raised by InfoSight and closed by InfoSight, and they have an 85 net promoter score. We're getting to a point where storage can be invisible, 'cause that's the experience you get on Amazon: you swipe your credit card, say I want ten terabytes of storage, and that's the last time you have to think about it. We need to have the economics of the web, we need to have the programmability of the web; that's number two. And number three of what we talked about, and this is a big issue, a big thing we talked about at VeeamON, was data protection. The rules of data protection are also changing. Conventional backup does not protect data. I was with a customer a couple weeks ago in London, with 120 petabytes; this is a financial services customer now. 120 petabytes of storage: not unusual. 40 of it was Hadoop, and they were surprised because it's unprotected, it's on servers; it's sort of the age of client-server, and the age of Excel spreadsheets, all over again. We realized that most businesses were running on Excel. So: All Flash, a different way of supporting our customer support experience, and number three, it's all around how you protect your data differently.
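The "Rule of 85" Bill describes, cases raised and closed automatically from telemetry, can be illustrated with a toy calculation. The data model below is invented for illustration and is not InfoSight's actual schema:

```python
def automated_case_share(cases):
    """Share of support cases both raised and closed automatically.

    `cases` is a list of dicts with boolean 'auto_raised' and
    'auto_closed' flags; returns the fraction handled end-to-end
    without a human opening or resolving the ticket."""
    if not cases:
        return 0.0
    automated = sum(1 for c in cases if c["auto_raised"] and c["auto_closed"])
    return automated / len(cases)

# A made-up sample shaped to match the figure in the interview:
cases = (
    [{"auto_raised": True, "auto_closed": True}] * 85
    + [{"auto_raised": False, "auto_closed": False}] * 15
)
print(automated_case_share(cases))  # 0.85 on this synthetic sample
```

The interesting operational metric is exactly this end-to-end fraction: a case that is auto-raised but human-closed still costs support time, so only the fully automated share makes the storage feel "invisible."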
>> What's the big trend from your standpoint? Because a lot of that self-driving storage concept, or self-driving car analogy, speaks to simplicity and automation.

>> That's right.

>> The other thing that's going on is data is becoming more and more relevant, certainly in the Cloud. Whether that's a data protection impact, or having data availability for Cloud-native apps, or in-memory, there's all kinds of cool stuff going on. So you've got a lot of stuff happening, so to be invisible, and be programmable, customers' architectures are changing. What's the big trend that you're seeing from a customer standpoint? Are there new ways to lay out storage so that it can be invisible? Certainly a lot of people are looking at simplification in IT operationally, and then have to prepare for the Cloud, whether that's multicloud or hybrid or true private Cloud. What architectures are you seeing changing, what are people doubling down on, what are the big trends in storage, kind of laying out storage as a strategy?

>> So I think the thing about storage in the large, one of the trends obviously that we're seeing, is storage co-located with the server. When I started at HP now seven years ago, gen six to gen ten, which we've announced here at this show, the amount of locally attached storage in the box itself is massive. And then the applications are now becoming more and more responsible for data placement and data replication. And so, even while capacities are growing, I think six or seven percent is what I saw from the latest IDC survey, the actual storage landscape, from a shared storage perspective, is actually going down. And the reason is, application provisioning, application-aware storage, is really the trend; that's sort of number one. Number two, you see customers looking at deploying the right storage for the right applications.
Hyperconverged with SimpliVity is a really good example of that, which is they're trying to find the right sort of storage to serve up the right application. And that's where, if you're a single-point-product company now in storage, and you don't have a software-only, a hyperconverged, an All Flash in a couple different flavors, including XP at the top, you're going to find it very, very difficult to continue to compete in this market, and frankly, we're driving a lot of that consolidation; we've put some bookends around what we're prepared to pay for. But if you're a point-product storage company now? Life is a lot harder for you than it was a couple years ago. When we started with All Flash, I think there were like 94 All Flash companies. There are not 94 All Flash companies today. And so, I think that's sort of what we see.

>> Well, to your point about point-product companies, they're going to have a hard time remaining independent, and that's why a lot of 'em are in business to basically sell to a company like yours, 'cause they fill a need. So my question relates to R&D strategy. As the GM, relatively new GM, you know well that a large company like HPE has to participate in multiple markets, and in order to expand your TAM, you have to have the right product at the right time. One size does not fit all. So the Nimble acquisition brings in a capability at the lower end of the market, lower price bands, but it also has some unique attributes with regard to the way it uses data and analytics. You've got 3PAR, legendary at the high end. What's the strategy in terms of, and is there one, to bring the best of both of those worlds together, or is it sort of 'let 20 flowers bloom'?

>> So, I don't know if it's going to be 'let 20 flowers bloom', but I would probably answer a couple different ways. One is that InfoSight, you're right, is a unique value proposition that is part of Nimble.
I would bet, if I come see you in Madrid, if you have me back for the, whatever, 13th time, [Laughing] that we'll be talking about how InfoSight and 3PAR can come together. So that's sort of the answer to number one. The answer to number two is, even though within the Nimble acquisition one party acquired the other party, what we're really looking at is the best of breed of both organizations. Whether that's a process, a person, a technology, we don't feel wedded to, "Just because we do it a certain way at HP, that means the Nimble team must conform." It's really, "Bring us the best and brightest." That's what we got. At the end of the day, we got a company, we got revenue, but we got the people, and in this storage business, these are serial entrepreneurs who have actually developed a product. We want to keep those people, and the way you do that is you bring 'em in and you use the best and greatest of all the technologies. There are probably other optimizations we'll look at, but looking at InfoSight across the entire portfolio, and one day maybe across the server portfolio, is the right thing to do.

>> And just to follow up on that, Bill, if I may, so that's a hard core of sort of embedded technology, and then you've got a capability; we talk about the API economy all the time. How are you, and are you able to, leverage other HPE activities to create infrastructure as code, specifically within the storage group?

>> So if you look at us, at our converged systems appliances, like our SAP HANA appliance, for databases greater than six terabytes we have 85% market share at Hewlett-Packard. And the way we do that, and that's all on 3PAR by the way, is we've got a fixed system that is designed solely to deliver HANA. On the flip side of that, you have Synergy, which is a composable, programmable infrastructure from the start, where it's all template-based and based on application provisioning.
You provision storage, you provision the fabric, you provision compute. That programmable infrastructure also is supported by HP storage. And so you can roll it the way you want to, and to some degree I think it's all about choice. If you want to go along and build your own programmable infrastructure on OpenStack or vCloud Director, whatever it is, we have one of those. If you think simplicity is key, and app and server integration is an important part of how you want to roll it out, we have one of those; that's called SimpliVity. If you want a traditional shared storage environment, we have one of those in 3PAR and Nimble, and if you want composable, we have that. Now, choice means more than one. I don't know what it means in Latin or Italian, but I'm pretty sure choice means more than one. What we don't want to do is introduce, however, the complexity of what owning more than one is. And that's where things like Synergy make sense, or federation between StoreVirtual and 3PAR, and soon we'll have federation between Nimble and 3PAR, to help customers with that operational complexity problem. But we actually believe that choice is the most important thing we can provide our customers.

>> I've always been a big fan of that composable thing, going back a couple years when you guys came out and brought it to the market. You were first, by the way, props to HP; also first on converged infrastructure way back in the day. I got to ask you, one of the things I love doing with theCUBE interviews is that we get to kind of get inspiration around some of the things that you're working on in your business unit. Back in 2010, Dave and I really kind of saw storage move from being boring storage, provisioning storage, to really the center of the action, and really since 2010 you've seen storage at the center of all these converging trends.
Virtualization, and hyperconvergence, all this great stuff, now Cloud; so storage is kind of like the center point of all the action. So I got to ask you the question on virtualization: it certainly changed the game with storage. Containerization is also changing the game, so I was telling some HP Labs guys last night that I've been looking at provisioning containers in microseconds. Where virtualization is extending and continuing to have a nice run, on the heels of that we've got containerization, where apps are going to start working with storage. What's your vision, and how do you guys look at that trend? How are you riding that next wave?

>> It all comes down to an application-driven approach. As we were saying a little earlier, our view is that storage will be silent. You're going to provision an application. That's really the-- see, look at the difference between us and, let's say, Nutanix, with SimpliVity. It's all about the application being provisioned into the hyperconverged environment. And if you look at the virtualization business alone, VMware's going to have a tough go, because Hyper-V has actually gotten good enough, and it's cheaper, and people are really giving Hyper-V a much better look than we've seen over the course of the last couple years. But guess what? That too will commoditize, and the next commoditization point is going to be containers. From our vantage point, if you look at 3PAR, if you look at Nimble, we've already got it; we've already supported containers within the products, and we've actually invested in companies that are container-rich. I think it's all about, "What's the next--"

>> And we at DockerCon last year said, "We know you're partnering with all the guys." But this is a big wave. You see containers as--

>> I see containers as sort of the place that virtualization never quite got to. If you look at--

>> John: Well, the apps.

>> On the apps, absolutely, positively.
And also it's a much simpler way to deploy an application compared to a conventional VM. I think containers will be important. Is it going to be as important as the technology inflection point around All Flash?

>> John: Flash is certainly very--

>> That I don't know, but as far as limiting costs in your datacenter, making it easier to deploy your applications, et cetera, I think containers is the one.

>> What's the big news here, at HPE Discover 2017, for you guys? What's the story that you're telling, what's going on in the booth? Share some insight into what's happening here on the ground in Las Vegas from your standpoint.

>> So I would say a couple of things. I think if you look out on the show floor, it seems more intimate and smaller this year. And there's a lot of concern, I think, that HP is chopping itself off into various pieces and parts, but I think the story that maybe we're not telling well enough, or that gets missed, is that out of that is actually a brand new company called Hewlett-Packard Enterprise, which is uniquely focused on serving enterprise infrastructure customers. And so I think, if I was going to encourage a news story, it's about the phoenix of that, and not the fact that we've taken out the ES guys, and the software guys, and the PC guys. It's that company, maybe in Madrid we'll do this, and that company, that's really, really, really exciting. And as you said, storage; we're sort of in a Ptolemy versus Galileo approach. We believe everything, first of all, revolves around storage. We don't believe in Galileo. So if you look in here at the booth, we've announced the next generation of MSA platforms with the 2052, we've got the 9450 3PAR, three times as fast, more connectivity for All Flash solutions. We've talked about the secondary Flash array for Nimble; the most effective place to protect your data is on an array, on the same type of array where the data came from, and that is the secondary Flash market.
We're big into Cloud; we've talked about CloudBank here, which is the ability to keep a copy of your StoreOnce data behind any S3-compliant interface, including Scality. I don't know if I'm forgetting-- I'm sure I'm forgetting something. >> John: There's a lot there. >> There's a lot there. >> I mean, you guys-- I love your angle on the phoenix. We've been seeing that; we've been covering HP for seven years now, and it is a phoenix. And the point that I think the news media is not getting on HP, and there's a lot of FUD out there, is that this is not a divestiture strategy. Some things went away, like the outsourcing business, but that was just natural. This is HP-owned; it's not like you're getting out of those businesses, it's just how you're organizing them. >> And with a balance sheet that now is really a competitive weapon, if you will, you're going to see HP both grow organically and inorganically. And as the market continues to consolidate, the thing to remember also is that there are fewer places to consolidate to. So if you're a start-up, there's a handful of companies that you can go to now, and probably the best-equipped, right-sized, great-balance-sheet, great company is Hewlett-Packard Enterprise. >> Well, we had hoped to get Chris Hsu on to debate management style, but I've always been a big believer, as a computer science undergraduate, that a decoupled, highly cohesive strategy is a really viable one. I think that's a great one. >> Yeah, and there's still a good partnership with DXC, and there'll be a great partnership with Micro Focus, both financially as well as from a business perspective. But it's really an opportunity to focus, and if I was at another company, I would wonder whether or not their strategy continues to be appropriate.
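The "any S3-compliant interface" point about CloudBank comes down to the object API using the same bucket/key addressing regardless of which backend sits behind the endpoint. A minimal sketch of that idea, using the common path-style S3 URL convention; the endpoints and bucket names below are hypothetical, and this is not HPE's or Scality's actual API surface:

```python
from urllib.parse import quote

def s3_object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style URL for an object on an S3-compatible store.

    The same bucket/key addressing works whether the endpoint is AWS S3
    or an on-prem S3-compatible store such as a Scality deployment; only
    the endpoint changes. Keys are percent-encoded, keeping '/' intact.
    """
    return f"{endpoint.rstrip('/')}/{bucket}/{quote(key)}"

# Same object reference, two different S3-compatible backends
# (both endpoints are illustrative, not real deployments):
aws_url = s3_object_url("https://s3.amazonaws.com", "cloudbank-copy", "backup/vol1.img")
onprem_url = s3_object_url("https://ring.example.local:8000", "cloudbank-copy", "backup/vol1.img")
```

This endpoint-swap property is what lets a product target "any S3-compliant interface": the client code is identical, and only configuration decides whether the copy lands in a public cloud or an on-prem object store.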
>> Bill Philbin, Senior Vice President and General Manager of Storage and Big Data at Hewlett-Packard Enterprise. This is theCUBE; more live coverage after the short break. From Las Vegas, HPE Discover 2017, I'm John Furrier with Dave Vellante, and we'll be right back after this short break.