Leah Bibbo, AWS | AWS re:Invent 2022
>>Hello everyone. Welcome back to theCUBE's live coverage. I'm John Furrier, host of theCUBE. We've got two sets here, three sets total, another one in the executive center. It's our 10th year covering AWS re:Invent. I remember 2013 like it was yesterday. Now it's massive, people buying out restaurants. It was 35,000 people, now it's 55,000, soon to be back up to 70,000. Great event, continuing to set the standard in the industry. We have an amazing guest here, Leah Bibbo, Vice President of Product Marketing. She's in charge of the messaging and the product, overseeing how these products go to market. Leah, great to see you. Thanks for joining me on theCUBE today.

>>Absolutely. It's great to be here. It's also my 10th re:Invent, so it's been a wild ride.

>>Absolutely. You and I were talking before we came on camera about how much we love products, and this is a product-centric company, has been from day one. Over the years, watching the announcements, the tsunami of announcements, all the innovation that's come out of AWS has been staggering to say the least. Everyone always jokes, oh my God, 5,000 new announcements, over 200 services, and you're managing and marketing them. It's pretty crazy right now. And Adam, as he comes on, as I called him, the "solutions CEO" in the piece I wrote on Friday, we're in an era where the products are enabling more solutions. Unpack the messaging around this, because this is a really big moment for AWS.

>>Absolutely. Well, I'll say first of all that we are a customer-focused company that happens to be really good at innovating incredible products and services for our customers. So today, the energy in the room and what Adam talked about is focused on a few great things for customers that are really important for transformation. We talked a lot about best price performance for workloads, and we talked about extreme workloads, but if you think about the work that we've been doing to innovate on the silicon side, with Graviton we're really talking about all your workloads and getting really great price performance for all of them. We came out with Graviton3, 25% faster than Graviton2 and also 60% more energy efficient. We also talked about something emerging that I think is going to be really big, which is simulation, the ability to model these complex worlds and all the little interactions. In the future, as we have more complex environments, 3D simulation is going to be a bigger part of every business's business.

>>You know, just as an aside, we were saying on the analyst segment that speeds and feeds are back. In the old data center days it was, we don't want to talk about speeds and feeds, we want to talk about solutions and outcomes. When you got to the cloud, it was, okay, get the workloads over there. But people want faster, lower-cost performance; workloads have to be running at high performance, and there's a real discussion around that. Let's unpack security, data, performance. What does that mean for customers? Because again, I get that the workloads run fast. That's great. What else is behind the curtain, so to speak, from a customer standpoint?

>>Absolutely. Well, I think if you're going to move all your workloads to the cloud, security is a really big area that's important. It's important to every one of our enterprise customers.
Actually, it's important to all of our customers, and we've been working since the beginning of AWS to create and build the most secure global infrastructure. As our customers have moved mission-critical workloads, we've built out a lot more capabilities, and now we have a whole portfolio of security services. What we announced today is kind of game changing: a service called Security Lake, which brings together an ecosystem of security data in a format that's open. You can share data between all of these sources, and it's going to give folks the opportunity to really analyze data, find threats faster, and know their security posture. As we talked about today, you don't want to think of the cloud as unfathomable; you really need to know your security. And like a lot of things we discussed, security is a data opportunity. We had a section on data, but if you look at the keynote, across security, across solutions, across the purpose-built things we made, it all comes down to data. It's really the transformational element for our customers.

>>The data being secured is a very integral part, good callout there. And I want to double down on that real quick, because I remember in 2014 I interviewed Steven Schmidt when he was the CISO, and back then the conversation was: the cloud's not secure, you've got to be on premises. Now, in today's keynote, Adam laid out the whole global security footprint. There's a lot going on; Amazon has now become more secure than on-prem, and he actually made that statement. Plus you've got thousands of security partners, third-party partners, and you've got the Open Cybersecurity Schema Framework, which you guys co-founded with all the others. So security is now a team sport. What does that mean for customers? Because this is a big deal.

>>Well, I think for customers it means nothing but goodness, right? All of these thousands of security partners have innovated and created solutions that our customers are using, but they all have different types of data in different silos. To really get a full picture, bringing all that data together is really important, and it's not easy today: log data from different sources, data from detection services. What customers want is an easier way to get it all together, which is why we have the open OCSF, and then to analyze it using the tools of their choice. Whether that's AWS tools for analytics or tools from our partners, customers need to be able to make that choice so that they can feel their applications and workloads are the most secure on AWS.

>>You know, I've been very impressed with GuardDuty, and I've been following Merritt Baer's blogs online. She's on the security team, she's amazing, shout out to her. She's been pushing GuardDuty for a long time, and now there's big news around GuardDuty. You've got EKS Protection. At KubeCon this was the biggest cloud native issue, the runtime of Kubernetes, and detection of threats inside the container and outside the container, as a real software supply chain concern. How are you guys marketing that? This is a huge announcement. EKS Protection I know is very nuanced, but it's a pretty big deal.

>>It is a big deal. It is a big deal.
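Since Security Lake exposes its OCSF-normalized data as tables that standard analytics tools can query, here is a minimal sketch of what that looks like in practice, querying through Athena with boto3. It assumes the Security Lake tables have already been shared to the account via Lake Formation; the database, table, and column names below are placeholders, not the exact names Security Lake generates.

```python
import time
import boto3

# Hypothetical names: Security Lake generates its own Glue database/table names.
DATABASE = "amazon_security_lake_db"
TABLE = "security_findings"
RESULTS_BUCKET = "s3://my-athena-results-bucket/security-lake/"

athena = boto3.client("athena", region_name="us-east-1")

# OCSF normalizes fields such as class_name and severity_id across sources,
# so a single query can span findings that came from many different tools.
query = f"""
    SELECT time, class_name, severity_id
    FROM {TABLE}
    WHERE severity_id >= 4   -- roughly "High" and above in OCSF terms
    ORDER BY time DESC
    LIMIT 50
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": RESULTS_BUCKET},
)
query_id = run["QueryExecutionId"]

# Poll until the query finishes, then print the raw result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```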
>>And GuardDuty has been kind of a quiet service that maybe you don't hear a lot about, but it has been really, really popular with our customers. Adam mentioned that 85% of our top 2,000 customers are using GuardDuty today. And it was a big moment. We launched EKS Protection a little bit earlier, and the customer uptake has been really incredible, because you can protect your Kubernetes clusters, which is really important since for so many customers containers are part of their migration to the cloud. So we're pretty excited that now we can answer that question of what's going on inside the container. You have both: you know that your Kubernetes clusters are good, and you know what's going on inside the container, and it's just more threats that you can detect and protect yourself from.

>>You know, as an aside, I'm sure you're watching this, but we go to a lot of events, and the CI/CD pipeline, as developers code at higher velocity, has moved in because of DevOps on the cloud. So you're seeing the developer take on some of those IT roles in the coding workflow, hence the shift left and container security, which you guys are now driving towards. But the security and data teams are emerging as very key elements inside the organizational structure. When I sat down with Adam, one of the things he was very adamant about was not just digital transformation but business transformation, the structural, organizational moves being made, where it's not a department anymore, it is the company; technology is the company when you transform. So digital is the process, business is the outcome. This is a really huge message. What's your reaction? What can you share? Because he hit it right out of the gate on the front end of the keynote.

>>Absolutely. Companies have been migrating to the cloud for a while, but this period we're going through has really accelerated that migration, and as part of that, digital transformation has become real for a lot of companies. What Adam said is true: there is technology transformation involved, there's data transformation involved, but it is transforming businesses. If you look at some of the things Adam talked about, AWS Supply Chain, Security Lake, AWS Clean Rooms, and AWS Omics, those are all examples of data, and the ability to work with data, transforming different lines of business within a company, transforming horizontal processes like contact centers and supply chains, and also going into vertical-specific solutions. So as technology becomes more pervasive, as data becomes more pervasive, businesses are transforming, and that means a lot more people are going to use the cloud and interact with the cloud, and they might not want to, or be able to, use our building blocks directly. What's really exciting is that we're able to make the cloud more accessible to lines-of-business folks, to analysts, to security folks.

>>Yeah, and that's why I was calling this new trend I see "Amazon Classic," my words, not your words. There was the classic cloud, and then you've got the next-gen cloud, the new next generation.
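As a concrete illustration of the EKS Protection capability discussed above, here is a minimal boto3 sketch of turning on Kubernetes audit-log monitoring for an existing GuardDuty detector. It assumes a detector already exists in the region and uses the DataSources shape that was current around the time of this interview; newer SDK versions expose the same switch through a Features list instead.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Assumes GuardDuty is already enabled; each region has at most one detector.
detector_ids = guardduty.list_detectors()["DetectorIds"]
if not detector_ids:
    raise SystemExit("No GuardDuty detector found in this region; enable GuardDuty first.")

detector_id = detector_ids[0]

# Turn on EKS Protection by enabling the Kubernetes audit log data source.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
)

# Afterwards, Kubernetes-related findings (for example, suspicious API calls
# against a cluster) appear alongside the account's other GuardDuty findings.
findings = guardduty.list_findings(DetectorId=detector_id, MaxResults=10)
print(findings["FindingIds"])
```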
And I was talking with Adrian Cockcroft, formerly of AWS, now retired; he's going to come on later today. He used this analogy: you've got a bag of Legos, a.k.a. primitives, or a toy that's been assembled for you, glued together, ready out of the box, and they're not mutually exclusive. You can build a durable application and foundation with the building blocks; it's more durable, you can manage it and refine it. Or you take the pre-assembled solution that eventually breaks: you don't have as much flexibility, but you can replace it, and that's okay too. So this is now kind of a new portfolio approach to the cloud. It's very interesting, and I think that's what I took away from the keynote, that you can have both.

>>Yes, absolutely. You can do both. We're going to go full throttle on releasing innovations and pushing the envelope on compute and storage and databases and our core services, because they matter, and having the choice to choose from a wide range of options is what customers need. If you're going to run HPC, machine learning, your SAP applications or your Windows applications, you need a choice of specific instance types and compute capabilities to get the price performance. It's definitely not one size fits all. It's 600 instance types fit all, maybe.

>>Exactly. And you've got a lot of instances, and we'll get to that in a second. I love the keynote themes. You had space first, and then the whole data angle, where you can look at it differently, a really good metaphor. And the ocean one I love for security, because he mentioned you can have the confidence to explore, to go deep, snorkeling versus scuba, knowing how much oxygen you have. A really cool metaphor, very provocative. So again, this is kind of why people go to AWS, because you have these abilities to do things differently depending on the context of the products you're working with. Explain why that was the core theme. Was there any rationale behind it, or was it just how you guys saw it? It was pretty clever.

>>Well, we're talking about environments, and in this world there's uncertainty in a lot of places, and we really feel like all of us need to be prepared for different types of environments. So we wanted to explore what that could look like. We're fascinated by space and its vastness, and it is very much like the world of data. I don't know about you, but I actually scuba dive, so I love the depths of the ocean; I loved working on that part. And there are extremes, extreme workloads like HPC, extreme workloads like machine learning with the growing models, and there's imagination, which is also one of my favorite areas to explore.

>>Yeah, and you used the Antarctica one for the whole extreme-conditions environment, which plays to the performance angle, and I love that piece of it. I want to get into some of the speeds and feeds. The big innovation is the silicon, which we've been covering like a blanket: Graviton3, 25% faster than Graviton2, and the C7gn for network-intensive workloads. This is kind of a big deal. It might not get picked up in the major press, but the network use cases are significant. Nitro has been successful.
Share your thoughts on these kinds of innovations, because they look small, but they're not.

>>They're big, they're not small for sure, especially at the scale our customers are running their applications. Every little optimization you can get makes a huge difference. And I think it's exciting. You kind of hit on it: we've been working on silicon for a while now, and we knew that if we were going to keep pushing the envelope in these areas, we had to go down to the silicon. Nitro has really been the breakthrough for us, reinventing that virtualization layer, offloading security and storage and networking to special-purpose chips. And it's not just in the area of network optimization, right? You saw training-optimized instances and inference-optimized instances and HPC-optimized instances. So yes, we are looking at all the extremes of what customers want to do.

>>I know you can't talk about the future, but I can almost connect the dots as you're talking. It's like, hmm, specialized instances, specialized chips, maybe programmability of workloads, smart intelligence, generative AI weaving in there. A lot of cool things I can see around the corner around generative AI and automation: hey, send this workload to that instance, send that one over here. That's kind of what I see coming around the corner.

>>And we have some of that with our instance optimizers, our cost optimizer products, where we want to help customers find the best instance for their workload, get the best utilization they possibly can, cut costs, but still have great performance. So I don't know about your future, John, it sounds great, but we are taking steps in that direction today.

>>We'll keep watching for that. Okay, I want to give you one final question. Well, two questions. One was a comment Adam made, and I'd love to get your reaction: if you want to tighten your belt, come to the cloud. I thought that was a very interesting nuance. There's a lot of economic pressure, and the cloud is an opportunity to get agile, to get time to value faster. Zeus Kerravala, the analyst who was with us earlier, said the more you spend on the cloud, the more you save, which I thought was very smart. Spending more doesn't mean you're going to lose money; it can mean you save money too. So there are a lot of cost optimization discussions. Tighten your belt, come to the cloud. What does he mean by that?

>>Well, in times where there's uncertainty in economic conditions, you sometimes want to pull back, batten down the hatches. But with the cloud, and we saw this with COVID, if you move to the cloud, not only can you cut costs, but you put yourself in a position where you can continue to innovate, you can be agile, and you can be prepared for whatever environment you're in, so that when things come back, or a customer need or an innovation takes off, you can accelerate back up really, really quickly. We talked about Airbnb, that example of how, in that really tough time of COVID when the travel industry wasn't happening, they were able to scale back and save money.
And then, when travel came back for Airbnb, they were in a position to change really, really quickly with customer needs.

>>You know, Leah, it's always great talking with you. You've got a lot of energy, you're so smart, we both love products, and you're leading the product marketing. We have an Instagram challenge here on theCUBE, and I'm going to put you on the spot. We call it the bumper sticker section; we used to ask, what's the bumper sticker for re:Invent, but we've modernized it. If you were going to do an Instagram reel right now, what would be the Instagram reel for the re:Invent keynote, day one? Looking ahead, we've got Werner, who will probably talk about productivity for developers. What's the Instagram reel for re:Invent?

>>Wow. That means I have to keep it short, and I'm not always short with it. Well, this is a really big day one, so there's excitement, and we're glad to be here. We have a lot coming for you, and we're super excited. If you think about it, it's price performance, it's data, it's security, and it's solutions for purpose-built use cases.

>>Great job. Congratulations. I love the message, I love how you guys built the theme, I thought it was great. And it's great to see Amazon continue to innovate on the product side. As we get into transformation, we're starting to see these solutions, and the ecosystem is thriving. I'm looking forward to hearing from the new partner chief, Ruba Borno, tomorrow; apparently she's unveiling a new plan. Everyone's pretty excited. Thanks for coming on.

>>Great. Thanks for having me.

>>All right, Leah Bibbo here on theCUBE, the leader in tech coverage. I'm John Furrier, your host. More live coverage after this short break. Day two of theCUBE, day one of re:Invent, three to four days of wall-to-wall coverage. We'll be right back.
Brad Smith & Simon Ponsford | AWS re:Invent 2022
>>Welcome to theCUBE's continued coverage of AWS re:Invent. My name is Savannah Peterson, and I am very excited to be joined by two brilliant blokes in the space of efficiency and performance, whether you're on prem or in the cloud. Today's discussion is going to be fascinating. Please welcome Brad and Simon to the show. Simon, coming in from the UK, how are you feeling?

>>Well, thank you. Excellent.

>>And Brad, we have you coming in from Seattle. How are you this morning?

>>Doing fine, thank you.

>>Excellent, and feeling bookish given your background, love that. I know that you both really care about efficiency and performance. It's a very hot topic, both at the show and in the industry right now. I'm curious, and I'm going to open it up with you, Simon: what challenges were you facing and wanting to solve when you started YellowDog? I think you've continued to tackle them throughout your career.

>>Really, we were looking at cloud, coming from an on-premise environment, and wanted to make accessing cloud, particularly at volume, simple and straightforward. If you look today at the number of instance types available from the major cloud providers, there are more than seven thousand different instance types, whereas on-prem you go along, you select your processors, you select your systems, and it's fairly easy. When you hit the cloud, you've just got this amazing amount of choice. So really it was all about how you can make intelligent decisions about where to run your workload and how to match it with what you've got on premise. That was really the inspiration for YellowDog.

>>So, staying there for just a second, what does YellowDog provide customers?

>>It's a SaaS system, so you get to it through the YellowDog platform, and it allows people to make intelligent decisions about where to run their workload, whether that's on premise or in the cloud. It has a wealth of information: it understands the costs, the performance, the latency, and the availability of every different instance type across the different clouds. It really lets people make use of that information, provision exactly what they need, and run their workloads. It also includes a provisioner and a scheduler, a cloud-native scheduler, designed to cope with cloud realities such as spot instances and interruptions, and to reschedule and fail over between clouds if there's ever a need to do so.

>>That sounds incredible, and I know this means a lot for partners like AMD. Brad, talk to me about the partnership and what this means for AMD and your customers.

>>Yeah, absolutely. We're excited to be aligned with a company like YellowDog. The importance of compute is becoming more and more prevalent every day. It's always been top of mind, but especially now, when you think about what the economy and the rest of the world are facing over the next year or longer, it's so important that you're able to maximize your dollars and your spend, and to do that with absolute certainty that you've got the right people behind you, ensuring your dollars are being spent wisely. The great thing about YellowDog is that they have tremendous insight into cost optimization and compute optimization across the entire globe. Their index is quite remarkable.
What it does is allow customers to see just how performant and cost-efficient AMD is, so it lets us put our best foot forward and gives customers a chance to understand something they were probably less familiar with: the fact that AMD is a tremendous value in the marketplace.

>>And Simon, can you tell us a little bit more about the YellowDog Index? I'm glad you brought that up, Brad.

>>Yes, the YellowDog Index is live and available for anyone to access. You can just go to index.yam.tech and you'll be able to see pretty much every single instance type that's available from all the major cloud providers and make your selection. Are you looking for GPU-type nodes? Are you looking for AMD processors? Are you looking purely for performance? Essentially, you're able to create a live view of what's available in different data centers around the world, and the price at this moment in time. Also, as Brad mentioned, in terms of cost efficiency and taking green values seriously, as we should, the YellowDog Index can show, at any point in time, where the best place to run a job is based on the lowest carbon impact of running it at that moment. For many organizations that gives an amazing insight: not just finding the most efficient processors, but ensuring the greenest energy possible is powering that processing when you want to run your workload.

>>It's so powerful what you just said, and exactly: it's not just about power, it's about place, when we are looking at global computing at scale. I know there are ESG advantages, and ESG is a very hot topic. When we're talking about AMD on AWS and leveraging tools like YellowDog, what other sorts of advantages, beyond being the least carbon-impactful, can your mutual customers benefit from?

>>Like I say, there are many other features. A very important thing when you're running a high performance computing workload is being able to match the instruction set that you're running on premise and then use that in the cloud as well, and to make intelligent decisions about where something should run. Would it be more efficient on premise? Should we always try to maximize our on-premise resources before going into the cloud? A lot of it is about being able to make those decisions, and YellowDog itself makes thousands of decisions per second to work out the best and most optimized places to run your workload.

>>So Brad, you work with a lot of companies at scale. What type of scale is possible when leveraging technologies like AMD and YellowDog combined?

>>Well, I love the fact that you mentioned HPC. It's one of the areas that is most exciting for me personally and for AMD, with the combination of YellowDog and AWS. AWS launched its very first HPC instance type last year, and we haven't even begun to see the full-scale capability in the cloud when it comes to these very coordinated and very refined workloads running at massive scale. We've also got some products launching in the near future that are incredibly performant.
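The index described above is essentially a live, multi-dimensional catalog filtered by price, performance, and carbon intensity. The sketch below is purely illustrative of that kind of decision logic in Python, with made-up catalog data; it is not YellowDog's actual API or data model.

```python
from dataclasses import dataclass

@dataclass
class InstanceOption:
    provider: str
    instance_type: str
    vcpus: int
    price_per_hour: float      # USD, on-demand
    grams_co2_per_hour: float  # grid carbon intensity x power draw (illustrative)

# Made-up catalog entries standing in for a live index of cloud instance types.
catalog = [
    InstanceOption("aws",   "c7g.4xlarge",    16, 0.58, 11.0),
    InstanceOption("aws",   "c6a.4xlarge",    16, 0.61, 14.5),
    InstanceOption("other", "n2-standard-16", 16, 0.78,  9.0),
]

def pick(options, min_vcpus, carbon_weight=0.5):
    """Rank options that meet the vCPU requirement by a blended cost/carbon score."""
    eligible = [o for o in options if o.vcpus >= min_vcpus]
    if not eligible:
        raise ValueError("no instance type satisfies the requirement")
    max_price = max(o.price_per_hour for o in eligible)
    max_co2 = max(o.grams_co2_per_hour for o in eligible)

    def score(o):
        return ((1 - carbon_weight) * o.price_per_hour / max_price
                + carbon_weight * o.grams_co2_per_hour / max_co2)

    return min(eligible, key=score)

best = pick(catalog, min_vcpus=16, carbon_weight=0.7)
print(f"Run on {best.provider} {best.instance_type}")
```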
And to be honest, I don't think we've come close to seeing the scale we're capable of with these very optimized HPC workloads. So we're excited for the next few years, to see how we can take some of the tremendous success AMD has had on-prem in these massive compute centers and replicate that same success inside AWS with companies like YellowDog. We're excited to see what's going to come forward.

>>Can you give us a preview of anything on the record that gets you really excited about the future? I was going to ask what has you looking forward to 2023 and beyond.

>>Well, nothing official of course, but I will say this: AMD recently had the successful launch of Genoa, our next-gen release, and it is proving to be the dominant compute engine that exists at this point. When you start to couple that with the prowess of AWS, you can see that over time becoming something that could really change the compute landscape quite a bit. So we're hopeful that in the future we'll have something along those lines with AWS and others, and we're very bullish in that area.

>>Love it. Simon, what about you? You've been passionate about low-carbon IT for a long time. Is carbon-neutral tech in our future? I realize that's a bold and lofty claim, but feel free to give us any of your future predictions.

>>Well, I started trying to build solutions for this many years ago. In 2006 I was part of a team that launched the world's lowest-powered Windows PC, which was actually based on AMD technology back then, so you can tell AMD has been working on low power for a long time. In terms of carbon neutral, yes, I think there are a few data centers around the world now that are getting very close to carbon neutral, some of which may have already achieved it, and that's really interesting. The second part of it is the manufacture of everything that goes into those servers and systems, and getting to net zero on that over a period of time. When we do that, which is not without challenges but certainly possible, we really will have carbon-neutral IT, which will be a benefit to everyone, to mankind itself.

>>Casual statement, and I have to say that I wholeheartedly agree. I think it's one of the greater challenges of our generation, especially as what we're able to do in HPC in particular, since we're talking about it, is only going to grow in scale and magnitude, and the amount of data that we have to organize and process is wild even today. So I love that. I'm curious, is there anything you can share that's in the pipeline for YellowDog, anything coming up that's very exciting?

>>Coming up very soon, we're going to release version 4 of YellowDog, which contains what we call a resource framework. It's all about making sure you've got everything you need before you run a job, either on-prem or in the cloud. That might be anything from making sure you've got the right licenses, to making sure your data is all in the right location, to making sure you've got all aspects of your workflow ready before you start launching compute
and really start burning through dollars, with compute potentially sitting there doing nothing until other tasks catch up. So we're really excited about this new V4 release, which will come out very soon.

>>Awesome, we can't wait to learn more about that, hopefully here again on theCUBE. Brad, what do partnerships with companies like YellowDog mean for you and for the customers you're able to serve?

>>It's incredibly important. One of the difficulties in compute today, especially in cloud compute, is that there's so much available at this point. There was a time when it was very simple and straightforward; it's not even close to being that anymore. So one of the things I love about YellowDog is that they do a great job of making very complex situations and environments fairly simple to understand, especially from a business perspective. It actually helps our customers, the AMD direct customers, better understand how to properly use our technology and get the most out of it. It's difficult for us to articulate that message, because we are a semiconductor company, so sometimes it's tough to talk about workloads and applications in the way our customer base will understand. It's so critical to have companies like YellowDog in the middle that can make that translation for us, directly to the customer.

And especially when you start thinking about ESG and environmental relationships, I'd like to make a comment: one of the things that is fantastic about AMD, AWS, and YellowDog is that we all share the same mission, and we're very public about those missions, about being better to the planet. AMD has taken some very aggressive targets through 2025, well beyond anything the industry expected, and because of that we are the most power-efficient x86 product on the marketplace, and it's not even close. I look forward to the day when you can look at instance types inside these public cloud providers, in conjunction with YellowDog, and actually start to see what the carbon footprint is of the compute decisions you make. Considering that compute is generally more than half of everybody's spend in these environments, it's critical to really know what your true impact on the world is. It's one of the best parts of a partnership like this.

>>Oh, what a wonderful note to close on. I love the synergy between all the partners on a technology level, and most importantly on a mission level, because none of it matters if we don't have a planet that we can continue to innovate on. I'm really grateful that you're both here fighting the good fight, working together, and making a lot of information available for companies of all sizes as they navigate very complex decision trees in operating their stacks. Thank you both, Simon and Brad, I really appreciate your time. It's been incredibly insightful. And thank you to our audience for tuning in to our continuing coverage of AWS re:Invent here on theCUBE. My name is Savannah Peterson, and I look forward to learning more with you soon.
Florian Berberich, PRACE AISBL | SuperComputing 22
>>We're back at Supercomputing 22 in Dallas, winding down day four of this conference. I'm Paul Gillin, with my co-host Dave Nicholson. We've been talking supercomputing all week, and you hear a lot about what's going on in the United States, and what's going on in China and Japan. What we haven't talked a lot about is what's going on in Europe. Did you know that two of the top five supercomputers in the world are actually in European countries? Well, our guest has a lot to do with that. Florian Berberich, I hope I pronounce that correctly, my German is not my strength, is the operations director for PRACE AISBL. And let's start with that: what is PRACE?

>>Hello, and thank you for the invitation. I'm Florian, and PRACE is the Partnership for Advanced Computing in Europe. It's a non-profit association with its seat in Brussels, in Belgium, and we have 24 members. These are representatives from different European countries dealing with high performance computing in their own countries. So far, we have provided resources for the European research communities, but this changed in the last year with the EuroHPC Joint Undertaking, which put a lot of funding into high performance computing and co-funded five petascale and three pre-exascale systems. Two of the pre-exascale systems you mentioned already: LUMI in Finland and Leonardo in Bologna, in Italy, which were in places three and four on the TOP500 list.

>>So why is it important that Europe be on the list of top supercomputer makers?

>>I think Europe needs to keep pace with the rest of the world, and simulation science is a key technology for society. We saw this very recently with the pandemic, with COVID: we were able to help the research communities find vaccines very quickly and understand how the virus spread around the world, and all this knowledge is important to serve society. Another example is climate change. With these new systems, we will be able to predict the coming changes more precisely. The more compute power you have, the smaller the grid resolution you can choose, and the lower the error will be in projections of the future. So with these systems, the big challenges we face can be addressed: climate change, energy, food supply, security.

>>Who are your members? Do they come from businesses? Do they come from research, from government?

>>Our members are public organizations: universities, research centers, compute sites and data centers, but all public institutions. And we provide these services for free, via a peer-review process with excellence as the most important criterion, to the research community.

>>So 40 years ago, when the idea of an EU, and maybe I'm getting the dates a little bit wrong, when it was just an idea, along with the idea of a common currency, reducing friction between borders to create a trading zone, there was a lot of focus there. Fast forward to today: would you say these efforts in supercomputing would be possible if there were not an EU superstructure?

>>No, I would say this would not be possible to this extent. European initiatives are needed, and the European Commission supports these initiatives very well. Before PRACE, for instance in 2008, there were research centers and data centers operating high performance computing systems, but they were not talking to each other.
They were isolated. PRACE created a community of operating sites, facilitated the exchange between them, and made it possible to align investments and get the most out of the available funding. Also, at that time and still today, it is very hard for one single country in Europe to provide all the different architectures needed for all the different research communities and applications. If you want to always offer the latest technologies, that is hardly possible alone. So with this joint action, and by opening the resources to research groups from other countries, we were able to give different communities access to the latest technology at any given time.

>>So the two systems you mentioned are physically located in Finland and in Italy, but if you were to walk into one of those facilities and meet the people there, they're not just Finns in Finland and Italians in Italy. This is very much a European effort. In that sense the geography is sort of abstracted, and the issues of sovereignty that might arise in the private sector don't exist. Or are there issues? What are the requirements for a researcher to have access to a system in Finland versus a system in Italy? If you've got an EU passport, are you good to go?

>>I think you are good to go, though with the EU passport it now becomes complicated and political, if we talk about the recent systems. But first, let me start with PRACE. PRACE was inclusive, and there were no such constraints; we even had users from the US and Australia. We wanted to support excellence in science, and we did not look at the nationality of the organization, of the PI, and so on. There were quotas, but those quotas were interpreted very generously. Now, with the EuroHPC Joint Undertaking, it's a question of which European funds the systems were procured from, and if a country is associated to that funding, its researchers also have access to the systems. This concerns basically the UK and Switzerland, which are not in the European Union but were associated to the Horizon 2020 research framework, so they can access the systems now available, LUMI and Leonardo and the petascale systems as well. How this will develop in the future, I don't know. It depends on which research framework they will be associated with or not.

>>What are the outputs of your work at PRACE? Are they reference designs? Is it actual semiconductor hardware? Is it the research? What do you produce?

>>The applications we run, the simulations we run, cover all the different scientific domains. So it's science, but we also have industry-led projects with more application-oriented targets: aerodynamics, for instance, for cars or planes. But also fundamental science, like elementary particle physics, or climate change, biology, drug design, protein folding, all these things.

>>Can businesses be involved in what you do? Can they purchase your research? I'm sure there are many technology firms in Europe that would like to be involved.

>>As for involving industry, our calls are open, and if they want to do open R&D, they are invited to submit proposals as well.
They will be evaluated, and if a proposal qualifies, they get access and can run their jobs and simulations. It's a little trickier if it's production work, if they use these resources for their business and do not publish the results. There are some sites that are able to deal with such requests, some more than others, but this is on a smaller scale, definitely.

>>What does the future hold? Are you planning to expand? Are there other countries or institutions that will be joining the effort?

>>Well, the EuroHPC Joint Undertaking, with 36 member states, already covers even more than Europe. And clearly, if other states are interested in joining, there is no limitation, although the focus lies on the European area and on the Union.

>>When you interact with colleagues from North America, do you feel that there is a sort of European flavor to supercomputing that is different, or are we so globally entwined?

>>Research is not national, it's not European, it's international. This is very clear. We have a longstanding collaboration with our US colleagues, and also with Japan and South Africa and Canada. When COVID hit the world, we were able, within two weeks, to establish regular seminars inviting US and European colleagues to talk to each other, exchange results, find new collaborations, and boost the research activities. And I have other examples as well. We did joint calls, XSEDE in the US and PRACE in Europe, and it was a very interesting experience. We received applications from different communities, and we decided we would review them on our side with European experts, while the US reviewed with their experts. You can guess what the result was: at the meeting, when we compared our results, they matched one by one. It was exactly the same outcome.

>>It's refreshing to hear a story of global collaboration where people are getting along and making meaningful progress.

>>I have to point out, you did not mention China as a country you were collaborating with. Is that intentional?

>>Well, with China we definitely have fewer links and collaborations, but they do exist. There was an initiative to look at the development of the technologies, and the group meets on a regular basis, with Chinese colleagues involved. It's on a lower level.

>>Yes, but the conversations are occurring. We're out of time. Florian Berberich, operations director of PRACE, the European supercomputing collaboration, thank you so much for being with us. I'm always impressed when people come on theCUBE and submit to an interview in a language that is not their first language.

>>Brave to do that.

>>Thank you. You're welcome. We'll be right back after this break from Supercomputing 22 in Dallas.
Jay Boisseau, Dell Technologies | SuperComputing 22
>>We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host, Paul Gillin, with my co-host Dave Nicholson, and we've been talking to so many smart people this week, it boggles my mind. Our next guest is Jay Boisseau, the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas, and I'm guessing you were up watching the Artemis launch the other night?

>>I wasn't. I really should have been, but I wasn't. I was in full supercomputing conference mode, and that means discussions at various venues with people into the wee hours.

>>How did you make the transition from a PhD in astronomy to an HPC expert?

>>It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, a class of stars that blow up. It's a very important class because they blow up almost exactly the same way. So if you know how bright they are physically, not just how bright they appear in the sky, if you can determine from first principles how bright they are, then you have a standard ruler for the universe: when one goes off in a galaxy, you know how far away the galaxy is by how faint it appears. To model these, though, you had to understand the equations of physics, including electron degeneracy pressure as well as normal fluid dynamics, and you were solving for an explosive burning front ripping through the star. That required a supercomputer to get anywhere close to the fidelity needed for a reasonable answer and, hopefully, some understanding.

>>So I've always said electrons are degenerate. I've always said it. And I mentioned to Paul earlier, finally we're going to get a guest to sort through this whole dark energy, dark matter thing for us. We'll do that after the segment.

>>That's a whole different discussion.

>>So, with supercomputing being a natural tool that you would use, what do you do in your role as a strategist?

>>I'm in the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive with what they've got, but always wanting to know what's coming next, because if you think about it, we can't simulate the entire human body, cell for cell, on any supercomputer today. We can simulate parts of it cell for cell, or the whole body with macroscopic physics, but not the entire organism at the atomic level. So we're always trying to build more powerful computers to solve larger problems with more fidelity and fewer approximations. I help people understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning which technologies they have, because that feeds the first thing, right? So understanding what's coming. Dell is very proud of our large partner ecosystem; we embrace many different partners with different capabilities, and understanding those helps you understand what your future systems might be. Those are the two major roles: strategic customers and strategic technologies.
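The "standard ruler" logic Boisseau describes is the classical distance-modulus relation. A compact way to state it, taking the roughly uniform peak absolute magnitude of a Type Ia supernova as the assumed input, is:

```latex
% Distance modulus for a standard candle: m is the observed (apparent) magnitude,
% M the intrinsic (absolute) magnitude, d the distance in parsecs.
\[
  m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
  \quad\Longrightarrow\quad
  d = 10^{\,(m - M + 5)/5}\ \mathrm{pc}.
\]
% For Type Ia supernovae the peak M is roughly -19.3, so measuring the apparent
% brightness m alone gives the distance to the host galaxy.
```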
>>So you've had four days to wander this massive floor, with lots of startups and lots of established companies doing interesting things. What have you seen this week that really excites you?

>>I'm going to tell you a dirty little secret. If you work for someone who makes supercomputers, you don't get as much time to wander the floor as you would think, because you get lots of meetings with people who really want to understand, in an NDA way, not just what's public on the floor, but what are you not telling us, what's coming next. So I've been in a large number of customer meetings as well as out on the floor. While I obviously can't share the non-disclosure topics, some things we're hearing a lot about: people are really concerned with power, because they see the TDP on the roadmaps for all the silicon providers going way up, and with power comes heat as waste, and that means cooling. So power and cooling have been big topics here. Accelerators are also increasing in importance in HPC, not just for AI calculations but now also for simulation calculations, and we are very proud of the three new accelerator platforms we launched at the show that are coming out in a quarter or so. Those are two of the big topics we've seen. As you walk the floor, you also see lots of interesting storage vendors. The HPC community has been doing storage the same way for roughly 20 years, but now we see a lot of interesting players in that space; we have some great things in storage now, and some great things coming in a year or two as well, so it's interesting to see the diversity there. And then there are always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical-quantum computing system here with IonQ, and I can't say what the future holds in this format, but I can say we believe strongly in the future of quantum computing, and that that future will be integrated with the kind of classical computing infrastructure we make, which will help make quantum computing more powerful downstream.

>>Well, let's go down that rabbit hole, because quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago, when some of the major vendors were announcing quantum computers in the cloud. The excitement has died down a bit, and we don't hear a lot of talk about commercial quantum computers, yet you're deep into this. How close are we to having a true quantum computer, or is a hybrid more likely?

>>There are probably more than 20, and I think close to 40, companies trying different approaches to building quantum computers. Microsoft is pursuing a topological approach, others a photonics-based approach, IonQ an ion-trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use them in other technologies, we know the physics, but the engineering is very difficult. Just as it was difficult at one point to split the atom, it's very difficult to build technologies that leverage quantum properties of nature in a consistent, reliable, and durable way. So I wouldn't want to make a prediction, but I will tell you I'm an optimist.
I believe that when a tremendous capability with tremendous monetary gain potential lines up with another incentive, like national security, engineering seems to evolve faster. When those things line up, when there's plenty of investment and plenty of incentive, things happen. >>So I think a lot of my friends in the office of the CTO at Dell Technologies, who are really leading this effort for us, you know, they would say a few to several years, probably. I'm an optimist, so I believe that, you know, we will sell some of the solution we announced here in the next year for people that are trying to get their feet wet with quantum. And I believe we'll be selling multiple hybrid classical-quantum Dell computing systems a year in a year or two. And then of course you hope it goes to tens and hundreds, you know, by the end of the decade. >>When people talk about, and I'm talking about people writ large, leaders in supercomputing, I would say Dell's name doesn't come up in conversations I have. What would you like them to know that they don't know? >>You know, I hope that's not true, but I guess I understand it. We are so good at making the products from which people make clusters that we're number one in servers, we're number one in enterprise storage. We're number one in so many areas of enterprise technology that I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But, you know, depending on which analyst you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest, splashiest systems. We do the Frontera system at TACC, the HPC5 system at Eni in Europe. Those are the largest academic supercomputer in the world and the largest industrial supercomputer. >>The largest in the world, based on Dell. >>On Dell hardware. Yep. But I think our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world. And those problems are at various scales. So we are really focused on democratizing HPC, making it easier for more people to get in at any scale that their budget and workloads require, and we're optimizing it to make sure that it's not just some parts they're getting, that the parts are validated to work together with maximum scalability and performance. And we have a great HPC and AI innovation lab that does this engineering work. 'Cause you know, one of the myths is, oh, I can just go buy a bunch of servers from company X and a network from company Y and a storage system from company Z and then it'll all work as an equivalent cluster. Right? Not true. It'll probably work, but it won't be the highest performance, highest scalability, highest reliability. So we spend a lot of time optimizing, and then we are doing things to try to advance the state of HPC as well. What our future systems look like in the second half of this decade might be very different than what they look like right now. >>You mentioned a great example of a limitation that we're running up against right now. You mentioned an entire human body as an organism >>Or any large system that you try to model at the atomic level, but it's a huge macro system, >>Right?
So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances, as opposed to qualitative advances? Right now, as an example, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next-gen stuff, and those next-gen microprocessors, GPUs and CPUs, are gonna be plugged into next-gen motherboards: PCIe Gen5, Gen6 coming, faster memory, bigger memory, faster networking, whether it's Ethernet or InfiniBand, storage controllers, all bigger, better, faster, stronger. And I suspect that systems like Frontera, I don't know, but I suspect that a lot of the systems that are out there are not necessarily on what we would think of as current generation technology; maybe they're n minus one as a practical matter. So, >>Yeah, I mean, they have a lifetime, so, exactly. >>The lifetime is longer than the evolution >>Of the normal technologies, yeah. So what some people miss is, this is the reality: when we move forward with the latest things that are being talked about here, it's often a two-generation move for an individual organization. Yep. >>So now some organizations will have multiple systems, and their systems leapfrog technology generations; even if one is their real large system, their next one might be newer technology but smaller, the next one might be a larger one with newer technology, and so on. Yeah. So the biggest supercomputing sites are often running more than one HPC system that have been specifically designed with the latest technologies and configured for maybe a different subset of their >>Workloads. Yeah. So, to go back to kind of the core question: in your opinion, do we need that qualitative leap to something like quantum computing in order to get to the point, or is it simply a question of scale and power at the individual node level, to get us to the point where we can in fact gain insight from a digital model of an entire human body, not just looking at an organ? And to your point, it's not just about the human body; any system that we would characterize as being chaotic today, a weather system, whatever. Are there any milestones that you're thinking of where you're like, wow, you know, I understand everything that's going on, and I think we're a year away, we're a compute generation away, from being able to gain insight out of systems that right now we can't, simply because of scale? It's a very, very long question that I just asked you, but hopefully you're tracking it. What are your thoughts? What are these inflection points, in your mind? >>So I'll start simple. Remember when we used to buy laptops and we worried about what the clock speed was? Exactly. Everybody knew the gigahertz of it, right? There are some tasks at which we're so good at making the hardware that now the primary issues are how great is the screen, how light is it, what's the battery life like, et cetera. Because for the set of applications on there, we have enough compute power. You don't really need your laptop
to have twice as powerful a processor; most people would actually rather have twice the battery life on it, or whatnot, right? We make great laptops. We design for all of those, configure those parameters now. And, you know, we see some customers want more of X, some want more of Y, but the general point is that the amazing progress in microprocessors is sufficient for most of the workloads at that level. Now let's go to the HPC level, the scientific and technical level, and when it needs HPC. If you're trying to model the orbit of the moon around the earth, you don't really need a supercomputer for that. You can get a highly accurate model on a workstation, on a server, no problem. It won't even really make it break a sweat. >>I had to do it with a slide rule. >>That, that might make you break a sweat. Yeah. But to do it with, you know, a single body orbiting another body, and I say orbiting around, but we both know they're really both orbiting the center of mass, it's just that if one is much larger it seems like one's going entirely around the other, that's not a supercomputing problem. What about the stars in a galaxy, trying to understand how galaxies form spiral arms and how they spur star formation? Now you're talking a hundred billion stars plus a massive amount of interstellar medium in there. So can you solve that on that server? Absolutely not. Not even close. Can you solve it on the largest supercomputer in the world today? Yes and no. You can solve it with approximations on the largest supercomputer in the world today, but there are a lot of approximations that go into even that. >>The good news is the simulations produce things that we see through our great telescopes, so we know the approximations are sufficient to get good fidelity. But you're never really doing direct numerical simulation of every particle, right? Right. Which is impossible to do; you'd need a computer as big as the universe to do that. But the approximations and the known parts of the science are good enough to give fidelity. So, to answer your question, there's a tremendous number of problem scales. There are problems in every field of science and study that exceed the direct numerical simulation capabilities of systems today. And so we always want more computing power. It's not macho FLOPS, it's real, we need it: we need exaflops, and we will need zettaflops beyond that, and yottaflops beyond that. But an increasing number of problems will be solved as we keep working to solve problems that are farther out there. So in terms of qualitative steps, I do think technologies like quantum computing, to be clear as part of a hybrid classical-quantum system, because they're really just accelerators for certain kinds of algorithms, not for general-purpose algorithms, I do think advances like that are gonna be necessary to solve some of the very hardest problems. It's easy to formulate an optimization problem that is absolutely intractable by the largest systems in the world today, but quantum systems happen to be, in theory, when they're big and stable enough, great at that kind of problem. >>That should be understood: quantum is not a cure-all for the shortage of computing power. >>Absolutely. >>It's very good for certain, certain >>Problems. And as you said, at this Supercomputing we see some quantum, but it's a little bit quieter than I probably expected.
I think we're in a period now of everybody saying, okay, there's been a lot of buzz, we know it's gonna be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those >>At the show. We have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately we only have so many minutes, and we're out of them. Jay Boisseau, HPC and AI technology strategist at Dell, thanks for a fascinating conversation. >>Thanks for having me. Happy to do it anytime. >>We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillin with Dave Nicholson. Stay with us.
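An illustrative aside on the scaling contrast Jay drew above, a two-body orbit versus a hundred-billion-body galaxy. The sketch below integrates the Moon's orbit around the Earth with a simple leapfrog scheme; the constants are the real Earth-Moon values, but the integrator and step size are arbitrary illustrative choices, not anything Dell or any lab uses. The point is the cost: a direct N-body force calculation grows as N squared per time step, which is why the same brute-force approach is hopeless at N near 10^11 without approximations such as tree codes or particle-mesh methods.

```python
import math

# Two-body gravity: the Moon's orbit about the Earth, in relative
# coordinates r = r_moon - r_earth, so  d2r/dt2 = -G*(M1+M2)*r/|r|^3.
G = 6.674e-11                 # m^3 kg^-1 s^-2
M = 5.972e24 + 7.35e22        # Earth + Moon mass, kg

x, y = 3.844e8, 0.0           # start at the mean Earth-Moon distance (m)
vx, vy = 0.0, 1.022e3         # roughly the Moon's mean orbital speed (m/s)

dt = 60.0                                  # one-minute time steps
steps = int(27.5 * 24 * 3600 / dt)         # about one month of orbit

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -G * M * x / r3, -G * M * y / r3

# Leapfrog (kick-drift-kick): cheap and stable for orbits; a laptop
# finishes this in well under a second.
ax, ay = accel(x, y)
for _ in range(steps):
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay

print(f"Earth-Moon distance after ~1 month: {math.hypot(x, y)/1e8:.3f} x 10^8 m")

# Roughly 40,000 force evaluations in total. A direct-summation model of a
# galaxy with N ~ 1e11 stars would need ~N^2 = 1e22 pairwise forces per
# step, which is why real galaxy simulations lean on approximations
# (tree codes, particle-mesh) and still need supercomputers.
```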
Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory
(upbeat music) >> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say, Dave Nicholson, my co-host (my name is Paul Gillin), I've been attending trade shows for 40 years, Dave, and I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about; trade shows are typically so speeds and feeds, they're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense? >> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't necessarily have your depth. No, but seriously, Paul, totally, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things, when you have Lawrence Livermore Labs that we're going to be talking to here in a second, it's a completely different story. You have all of the academics, you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week. >> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions, High Performance Solutions at Dell. And Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps, I don't know, you can correct me on this, but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory, and it is always on the leading edge of what's going on in supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today. >> Sure, glad to be here. >> For having us. >> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory. >> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at essentially what was coming down the pipeline, you know, what were the customer requirements, what did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about this for two years, and now it's finally coming to fruition, and now we're actually delivering systems and delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful. >> At Lawrence Livermore, what drives your computing requirements, I guess? You're working on some very, very big problems, but a lot of very complex problems. How do you decide what you need to procure to address them? >> Well, that's a difficult challenge. I mean, our mission is a national security mission, dealing with making sure that we do our part to provide the high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation and Computing program.
Its goal is to provide that computing power to make sure that the US nuclear stockpile is safe, secure, and effective. So how do we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the advanced technology systems. Those are the ones you've heard about a lot; they're pushing towards exascale, with GPU technologies incorporated into those. We also have a second line, a platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell on the latest generation of those. Those systems are a little more conservative; they're right now CPU-only driven, but they're also intended to be the everyday workhorses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use usually on a day to day basis. They run a lot of small to medium size jobs that you need to do to figure out how to most effectively use what workloads you need to move to the even larger systems to accomplish our mission goals. >> The workhorses. >> Yeah. >> What have you seen here these last few days of the show, what excites you? >> There's all kinds of things that are interesting. Probably the most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry and co-designing with them on for, you know, several years before the public actually sees them. That's always an exciting part of the conference as well, specifically with CTS-2; it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed, you know, roughly about 1600 nodes now, but that'll ramp up to over 6,000 nodes over the next three or four months. >> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there. >> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures, as you were asking how we define these things, it's the labs coming together. Those three laboratories, we define what we need for that architecture. We have a joint procurement that is run out of Livermore, but then the systems are deployed at all three laboratories. And then they serve the programs that I mentioned for each laboratory as well. >> I've worked in this space for a very long time, you know, I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating. It's like you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDAs in terms of what's been shared with you; you may be limited in terms of what you can share with vendors. Does that cause inefficiency? >> To some degree.
I mean, we do a good job within the NNSA of understanding what our applications need and then mapping that to technical requirements that we can talk about with vendors. We also have, kind of in between, and we've done this for many years, a recent example of course being the Exascale Computing Program and some of the things it's doing, the creation of proxy apps or mini apps that are smaller versions of some of the things that are important to us. Some application areas are important to us: hydrodynamics, material science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit of that with CTS-2, not as much in CTS as maybe in the ATS platforms, but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms. >> Now is Dell one of your co-design partners? >> In CTS-2, absolutely, yep. >> And what aspects of CTS-2 are you working on with Dell? >> Well, the architecture itself was the first, you know, thing we worked with them on. We had a procurement come out, you know, they bid an architecture on that. We had worked with them, you know, previously on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornelis Networks Omni-Path network that we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory. They get integrated into our production computing environment. Dell is really responsible, you know, for designing these systems and delivering them to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that called TOSS, for Tri-Lab Operating System. It's based on Red Hat Enterprise Linux. But the goal there is that it allows us a common user environment, a common simulation environment, across not only CTS-2 but maybe older systems we have, and even the larger systems that we'll be deploying as well. So from a user perspective they see a common user interface, a common environment, across all the different platforms that they use at Livermore and the other laboratories. >> And Armando, what does Dell get out of the co-design arrangement with the lab? >> Well, we get to make sure that they're successful. But the other big thing that we want to do is, typically when you think about Dell and HPC, a lot of people don't make that connection together. And so what we're trying to do is make sure that, you know, they know that, hey, whether you're a workgroup customer at the smallest end or a supercomputer customer at the highest end, Dell wants to make sure that we have the right portfolio to match any needs across this. But what we were really excited about is this is kind of our, you know, big CTS-2, the first thing we've done together. And so, you know, hopefully this has been successful. We've made Matt happy, and we look forward to the future and what we can do with bigger and bigger things. >> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered." >> Yeah, that would fly. >> I mean that would be right, right?
And I have to ask you the question directly, and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs? >> Yeah, you'd have to ask the PR office. >> Okay, okay. (all laughing) >> Good answer. >> No, but it is fascinating, because to a degree it's like you could say, yeah, we're working together, but if you really want to dig into it, it's like, "Well, I kind of can't tell you exactly how some of this stuff is." Do you consider anything that you do from a technology perspective, not what you're doing with it, but the actual stack, do you try to design proprietary things into the stack, or do you say, "No, no, no, we're going to go with standards, and then what we do with it is proprietary and secret"? >> Yeah, it's more the latter. >> It's the latter? Yeah, yeah, yeah. So you're not going to try to reverse engineer the industry? >> No, no. We want the solutions that we develop to enhance the industry, to be able to apply to a broader market, so that we can, you know, gain from the volume of that market and the lower cost that it would enable, right? If we go off and develop more and more customized solutions, that can be extraordinarily expensive. And so we're really looking to leverage the wider market, but do what we can to influence that, to develop key technologies that we and others need, that can enable us in the high performance computing space. >> We were talking with Satish Iyer from Dell earlier about validated designs, Dell's reference designs for pharma and for manufacturing in HPC. Are you seeing HPC, Armando, which has traditionally been more of an academic research discipline, beginning to come together with commercial applications? And are these two markets beginning to blend? >> Yeah, I mean, so here's what's happening: you have this convergence of HPC, AI and data analytics. And so when you have that combination of those three workloads, they're applicable across many vertical markets, right? Whether it's financial services, whether it's life sciences, government and research. But what's interesting, and Matt won't brag about it, but a lot of stuff that happens in the DOE labs trickles down to the enterprise space, trickles down to the commercial space, because these guys know how to do it at scale, they know how to do it efficiently, and they know how to hit the mark. And so a lot of customers say, "Hey, we want what CTS-2 does," right? And so it's very interesting. The thing I love is their process, the way they do the RFP process. Matt talked about the benchmarks and helping us understand, hey, here's kind of the mark you have to hit. And then at the same time, you know, if we make them successful, then obviously it's better for all of us, right? You know, I want a secure nuclear stockpile, so I hope everybody else does as well. >> The software stack you mentioned, I think, TOSS? >> TOSS. >> TOSS. >> Yeah. >> How did that come about? Why did you feel the need to develop your own software stack? >> It originated back, you know, even 20 years ago, when we first started building Linux clusters, when that was a crazy idea. Livermore and other laboratories were really the first to start doing that and then push them to larger and larger scales. And it was key to have Linux running on that at the time. And so we had the- >> So 20 years ago you knew you wanted to run on Linux? >> It was 20 years ago, yeah, yeah.
And we started doing that, but we needed a way to have a version of Linux that we could partner with someone on, that would do, you know, the support, you know, just like you get from an OS vendor, right? Security support and other things. But then layer on top of that all the HPC stuff you need, either to run the system, to set up the system, to support our user base. And that evolved into TOSS, which is the Tri-Lab Operating System. Now it's based on the latest version of Red Hat Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak, and all that HPC magic is open source things. It may be things that we develop, but it's nothing closed source. So all of that's there. We run it across all these different environments, as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we could use to run on multiple systems and start creating that common environment at Livermore and then eventually the other laboratories. >> How is a company like Dell able to benefit from the open source work that's coming out of the labs? >> Well, when you look at the open source, I mean, open source is good for everybody, right? Because if you make an open source tool available, then people start essentially using that tool. And so if we can make that open source tool more robust and get more people using it, it gets more enterprise ready. And so with that, you know, we're all about open source, we're all about standards, and really about raising all boats, 'cause that's what open source is all about. >> And with that, we are out of time. This is our 28th interview of SC22 and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratory. Great discussion. Hopefully it was a good show for you. Fascinating show for us, and thanks for being with us today. >> Thank you very much. >> Thank you for having us. >> Dave, it's been a pleasure. >> Absolutely. >> Hope we'll be back next year. >> Can't believe it went by so fast. Absolutely, at SC23. >> We hope you'll be back next year. This is Paul Gillin. That's a wrap, with Dave Nicholson, for theCUBE. See you here next time. (soft upbeat music)
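For readers who have not run into the proxy apps (mini apps) Matt mentions: they are deliberately small programs that reproduce the performance-critical pattern of a much larger production code, so labs and vendors can co-design and benchmark hardware without sharing the real application. The sketch below is a generic stand-in for the idea, a one-dimensional explicit heat-diffusion stencil of the memory-bandwidth-bound kind a hydrodynamics proxy might exercise; it is not one of the actual NNSA proxy apps, and real ones are written in C or Fortran with MPI/OpenMP and run across many nodes.

```python
import time

def stencil_sweep(n=200_000, steps=50, alpha=0.25):
    """Toy proxy-app kernel: explicit 1D diffusion, u[i] += alpha*(u[i-1] - 2*u[i] + u[i+1]).

    The physics is trivial; what matters is the access pattern, a repeated
    bandwidth-bound sweep over a large array. That is the kind of behavior
    a proxy app tries to borrow from its parent code so that a measurement
    on new hardware carries over.
    """
    u = [0.0] * n
    u[n // 2] = 1.0            # single hot spot in the middle of the domain
    nxt = u[:]
    t0 = time.perf_counter()
    for _ in range(steps):
        for i in range(1, n - 1):
            nxt[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
        u, nxt = nxt, u        # swap buffers between time steps
    dt = time.perf_counter() - t0
    print(f"{n * steps / dt / 1e6:.1f} million cell updates/s in {dt:.2f} s")
    return u

if __name__ == "__main__":
    stencil_sweep()
```

Published lab proxy apps follow the same spirit, just far more elaborately, precisely so that vendors can tune future systems against them.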
Satish Iyer, Dell Technologies | SuperComputing 22
>>We're back at Supercomputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? Just >>Oh, it's crazy. I mean, any time you have NASA presentations going on and steampunk iterations of cooling systems, you know, it's, it's >>The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies, and Satish, thanks for joining us on theCUBE today. >>Thank you, Paul. >>What are emerging services? >>Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. So we especially focus on all the growth vectors for the company. >>And one of the key areas that comes under your jurisdiction is called Apex. Now I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >>Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. So this is our way of actually bringing the cloud experience to our customers, on-prem and in colo. >>But it's not a cloud. I mean, you don't have a Dell cloud, right? It's, it's infrastructure as >>A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but, you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >>You mentioned something just ahead of us going on air, a great way to describe Apex, to contrast Apex with CapEx: there's no C, there's no cash up front necessary. Yeah, I thought that was great. Explain that a little more. >>Well, I mean, you know, one of the main things about cloud is the consumption model, right? So customers would like to pay for what they consume, they would like to pay in a subscription, they would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud. So I think it's important for us to recognize that, and Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet for how we want to design Apex. So it's absolutely right. >>And among those services are high performance computing services. Now, I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>Yeah, I mean, you know, this conference is great, like you said; there's so many HPC and high performance computing folks here. But one of the things is, you know, fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call it an Apex HPC offer, it brings a lot of the cloud economics and cloud experience to the HPC offer. So fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes on a lot of the day-to-day management of the infrastructure ourselves, so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workload, which they run on the HPC ecosystem.
So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it across, you know, proven designs, which Dell always implements across these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it >>HPC? Ah, that's a great question. So, I mean, you know, this is a platform, right? We are not just selling infrastructure by the drink. Fundamentally, it's based on, you know, we launched two validated designs, one for life sciences, one for manufacturing. So we actually know how these pieces work together, how they are a validated, tested solution. And also, it's a platform, so we actually integrate the software on top. It's not just the infrastructure: we integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things customers have to do by themselves, right, if they buy the infrastructure. So basically we are giving a platform, or an ecosystem, for our customers to run their workloads, and making it easy for them to actually consume those. >>That's interesting. Now, is this available on premises for customers? >>Yeah, so we make it available to customers both ways. We make it available on-prem for customers who want to, you know, take that economics. We also make it available in a colo environment if the customers want to actually, you know, extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork for it? >>Well, I think, you know, fundamentally it starts off with what the actual use case is, right? So if you really look at the two validated designs we talked about, you know, one for healthcare and life sciences, and the other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. So, you know, the customers initially figure out, okay, do they require something which is going to need a lot of memory-intensive loads, or do they require something which has got a lot of compute power? So, you know, it all depends on what they would require in terms of the workloads, and then we do have t-shirt sizing. So we do have small, medium, large; we have, you know, multiple infrastructure options, CPU core options. Sometimes the customer would also wanna say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations a customer typically makes as part of the ecosystem, right? And so those are things which they would talk to us about, to say, okay, what is my best option in terms of, you know, the kind of workloads I wanna run? And then they can make a determination in terms of how they would actually go. >>So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of rolling thunder from various partners that you have, you know? Yep. We're all expecting that Intel is gonna be rolling out new CPUs from a PowerEdge perspective.
You have your 16th generation of PowerEdge servers coming out, PCIe Gen5, and all of the components from partners like Nvidia and Broadcom, et cetera, plugging into them. Yep. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally and are likely to be not 15G, not generation 15 servers, but probably more like 14? Yeah, you're offering a pretty huge uplift. Yep. What do those conversations look >>Like? I mean, customers, so talking about partners, right? I mean, of course Dell, you know, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, AMD, Broadcom, right, all the chip vendors, all the way to the software layer, right? So we have cluster managers, we have Kubernetes orchestrators. So usually what we do is we bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually is awesome, right? So that, that we actually do. And, you know, one of the main aspects, especially when you talk about bringing these things as a service, right: >>We take a lot of guesswork away from our customer, right? You know, one of the good examples in HPC is capacity, right? So customers, these are very, you know, I would say very intensive systems, very complex systems, right? So customers would like to buy a certain amount of capacity, they would like to grow and, you know, come back down, right? So giving them the flexibility to actually consume more if they want, giving them the buffer, and coming down, all of those things are very important as we actually design these things, right? Customers are given a choice, but they don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus using Apex, is there a crossover point, typically, at which it's probably a better deal for them to go on-prem? >>Yeah, I mean, specifically talking about HPC, right, you know, we do have a lot of customers who consume high performance compute in the public cloud, right? That's not gonna go away, right? But there are certain reasons why they would look at on-prem, or they would look at, for example, a colo environment, right? One of the main reasons they would like to do that purely has to do with cost, right? These are pretty expensive systems, right? There is a lot of ingress, egress, there is a lot of data going back and forth, right? Public cloud, you know, it costs money to put data in or actually pull data back, right? And the second one is data residency and security requirements, right? A lot of these things are probably proprietary sets of information. We talked about life sciences, there's a lot of research, right? >>Manufacturing, a lot of these things are just-in-time decision making, right? You are on a factory floor, you gotta be able to do that. Now there is a latency requirement.
So I mean, I think a lot of things play into this outside of just cost, but data residency requirements, ingress, egress are big things. And when you're talking about massive amounts of data you wanna put in and pull back, they would like to kind of keep it close, keep it local, and, you know, get a price >>Point. Nevertheless, I mean, we were just talking to Ian Colle from AWS, and he was talking about how customers have the need to sort of move workloads back and forth between the cloud and on-prem. That's something that they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>Yeah, I wouldn't necessarily say that, you know; Dell's cloud strategy is multi-cloud, right? So it kind of falls into three. I mean, some customers, some workloads are suited always for public cloud; it's easier to consume, right? There are, you know, customers who also consume on-prem, and customers also consuming colo. And we also have Dell's amazing pieces of software, like storage software. You know, we make some of these things available for customers to consume as software IP on the public cloud, right? So, you know, this is our multi-cloud strategy. So we announced Project Alpine at Dell Technologies World. So, you know, if you look at those, basically customers are saying, I love your Dell IP on this product, on the storage; can you make it available in this public environment, whether, you know, it's any of the hyperscale players? So if we do all of that, right, I think it shows that, you know, it's not always tied to an infrastructure, right? Customers want to consume the best of them, and if it needs to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on HPC. We have two container orchestrators we support: we have Apptainer, or Singularity, and we also have Kubernetes, so there are container options for customers. Both options. >>What kind of customers are you signing up for the HPC offerings? Are they university research centers, or does it tend to be smaller >>Companies? It's, you know, the last three days, this conference has been great. We probably had, like, you know, many, many customers talking to us, but for HPC somewhere in the range of 40, 50 customers. I would probably say a lot of interest from educational institutions, universities, research, to your point; a lot of interest from manufacturing, factory floor automation, a lot of customers want to do dynamic simulations on the factory floor. There is also quite a bit of interest from life sciences and pharma because, you know, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. So yeah, quite a bit of interest, definitely from academics, from life sciences, manufacturing. We also have a lot of financials, big banks, you know, who want to simulate a lot of, you know, brokerage, a lot of financial data, because we have some, you know, really optimized hardware we announced at Dell especially for financial services. So there's quite a bit of interest from financial services as well.
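To make the "pay for what you consume, with buffer built in" idea from a couple of answers back concrete, here is a minimal sketch of how a committed-capacity-plus-metered-buffer bill could be computed. Every quantity and rate in it is hypothetical and chosen only for illustration; it is not Dell's Apex pricing formula.

```python
def monthly_bill(daily_usage_tib, committed_tib, committed_rate, buffer_rate):
    """Illustrative consumption bill: commitment always billed, overage metered.

    Usage above the commitment (the elastic buffer) is charged on its average
    daily draw for the month, so a short spike raises the bill a little and a
    sustained spike raises it more. All rates here are made up.
    """
    days = len(daily_usage_tib)
    avg_overage = sum(max(0.0, u - committed_tib) for u in daily_usage_tib) / days
    return committed_tib * committed_rate + avg_overage * buffer_rate

# A 30-day month where demand sits below the 100 TiB commitment except for a
# ten-day simulation campaign that pushes it to 140 TiB.
usage = [80.0] * 10 + [140.0] * 10 + [90.0] * 10
print(f"${monthly_bill(usage, 100.0, committed_rate=18.0, buffer_rate=25.0):,.2f}")
# Only the averaged spike above the commitment is billed at the buffer rate;
# when the workload drops back down, so does the bill.
```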
And, and, and, and in that context, you know, this is super computing 22 HPC is like the little sibling trailing around, trailing behind the super computing trend. But we definitely have seen this move out of just purely academia into the business world. Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy, what, two couple? It's been, it's been a couple years now, hasn't it? >>Yeah, it's been less than two years. >>How are, how are, how are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18 months to three year upgrade cycle CapEx? Yeah, >>I mean I look, I, I think that is absolutely strong momentum for Apex and like we, Paul pointed out earlier, we started with, you know, making the infrastructure and the platforms available to customers to consume as a service, right? We have options for customers, you know, to where Dell can fully manage everything end to end, take a lot of the pain points away, like we talked about because you know, managing a cloud scale, you know, basically environment for the customers, we also have options where customers would say, you know what, I actually have a pretty sophisticated IT organization. I want Dell to manage the infrastructure, but up to this level in the layer up to the guest operating system, I'll take care of the rest, right? So we are seeing customers who are coming to us with various requirements in terms of saying, I can do up to here, but you take all of this pain point away from me or you do everything for me. >>It all depends on the customer. So we do have wide interest. So our, I would say our products and the portfolio set in Apex is expanding and we are also learning, right? We are getting a lot of feedback from customers in terms of what they would like to see on some of these offers. Like the example we just talked about in terms of making some of the software IP available on a public cloud where they'll look at Dell as a software player, right? That's also is absolutely critical. So I think we are giving customers a lot of choices. Our, I would say the choice factor and you know, we are democratizing, like you said, expanding in terms of the customer choices. And I >>Think it's, we're almost outta our time, but I do wanna be sure we get to Dell validated designs, which you've mentioned a couple of times. How specific are the, well, what's the purpose of these designs? How specific are they? >>They, they are, I mean I, you know, so the most of these valid, I mean, again, we look at these industries, right? And we look at understanding exactly how would, I mean we have huge embedded base of customers utilizing HPC across our ecosystem in Dell, right? So a lot of them are CapEx customers. We actually do have an active customer profile. So these validated designs takes into account a lot of customer feedback, lot of partner feedback in terms of how they utilize this. And when you build these solutions, which are kind of end to end and integrated, you need to start anchoring on something, right? And a lot of these things have different characteristics. So these validated design basically prove to us that, you know, it gives a very good jump off point for customers. That's the way I look at it, right? So a lot of them will come to the table with, they don't come to the blank sheet of paper when they say, oh, you know what I'm, this, this is my characteristics of what I want. I think this is a great point for me to start from, right? 
So I think that that gives that, and plus it's the power of validation, really, right? We test, validate, integrate, so they know it works, right? So all of those are hypercritical. When you talk to, >>And you mentioned healthcare, you, you mentioned manufacturing, other design >>Factoring. We just announced validated design for financial services as well, I think a couple of days ago in the event. So yep, we are expanding all those DVDs so that we, we can, we can give our customers a choice. >>We're out of time. Sat ier. Thank you so much for joining us. Thank you. At the center of the move to subscription to everything as a service, everything is on a subscription basis. You really are on the leading edge of where, where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you Dave. >>Paul Gillum with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon and stay with us for, they'll be half more soon.
Ian Colle, AWS | SuperComputing 22
(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? It's been a fascinating morning, three days in, and a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, the first thing is we recently announced our batch support for EKS. EKS is our managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS Batch offering is that we can dynamically scale based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster, does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing, or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance. So what does that look like when you peel back the cover and you look at it physically, not just logically? AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are entirely within a single region. And so they could have a portion of, say, the traditional HPC workflow within that region as well as the batch, and they're saving off the results, say, to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to S3 object storage for a little lower cost storage solution. Or you can have customers that have a kind of multi-region orchestration layer where they say, "You know what? I've got a portion of my workflow that occurs over on the other side of the country, and I replicate my data between the East Coast and the West Coast just based upon business needs. And I want to have that available to customers over there. And so I'll do a portion of it in the East Coast, a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC, is this relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space?
And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer "to manage my HPC infrastructure and another one "to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes EKS on AWS. >>Last month you announced the general availability of Trainium, a chip that's optimized for AI training. Talk about what's special about that chip or how it is customized to the training workloads. >>Yeah, what's unique about Trainium is you'll see 40% better price performance over any other GPU available in the AWS cloud. And so we've really geared it to be the most price-performant of options for our customers. And that's what we like about the silicon team that came with that Annapurna acquisition; it really has enabled us to have this differentiation and to not just be innovating at the software level but the entire stack. That Annapurna Labs team develops our network cards, they develop our Arm chips, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiator from other vendors. And what Trainium allows you to do is perform similar workloads, just at a lower cost. >>And you also have a chip several years older, called Inferentia- >>Um-hmm. >>Which is for inferencing. What is the difference between, I mean, when would a customer use one versus the other? How would you move the workload? >>What we've seen is customers traditionally have looked for a certain class of machine, more of a compute type that is not as accelerated or as heavy as you would need from Trainium, for the inference portion of their workload. So when they do that training they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines, with Inferentia being there to support those inference portions of their workflow and Trainium to be that kind of heavy duty training work. >>And then you advise them on how to migrate their workloads from one to the other? And once the model is trained would they switch to an Inferentia-based instance? >>Definitely, definitely. We help them work through what does that design of that workflow look like? And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement to say like, "Hey, can you come in and help me work "through how I might modify my workflow to "take full advantage of these resources?" >>The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >>You're very polite. (panelists all laughing) >>Latency issues, they want to control the workload, they want to, I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >>Well, I mean, to be fair, I started at AWS five years ago. And I can tell you when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing?" You know, wait, it's Amazon Web Services.
You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. We didn't have Amazon FSx for Lustre, our managed lustre offering for high performant, POSIX-compliant file system, which is kind of the key to a large portion of HPC workloads is you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those really, frictions to adoption. I mean, one of the key ones, we had a open source toolkit that was jointly developed by Intel and AWS called CFN Cluster that customers were using to even instantiate their clusters. So, and now we've migrated that all the way to a fully functional supported service at AWS called AWS Parallel Cluster. And so you've seen over those past five years we have had to develop, we've had to grow, we've had to earn the trust of these customers and say come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, to what are you doing walking around the show, to say, "Okay, I'm not sure I get it. "I need to look at it. "I, okay, I, now, oh, it needs to be a part "of my architecture but the standard questions, "is it secure? "Is it price performant? "How does it compare to my on-prem?" And really culturally, a lot of it is, just getting IT administrators used to, we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now you're learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures. And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the the market segment of where we needed to with innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have outposts. We don't hear a lot of talk about outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into outposts as well? Will that essentially become this supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future lies, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. 
All those put together gives you a high-performance computer, right? And whether you want it to be redundant in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >>So to be clear, that's not available now, but that is something that could be made available? >>Outposts are available right now that have the services that you need. >>All these capabilities? >>Often a move to cloud, an impetus behind it comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment can be very, very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time, and then a fundamental question, are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run. So that would sort of conjure, in the imagination, multi-tenancy. What does that look like? >>Definitely, and let me start with your second part first- >>Yeah. That's been a core area within AWS: we do not see it as, okay, we're going to carve out this supercomputer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow for other work.
And then we can take that back to your to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where actual, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true ups, and make sure that they have insight into what they're doing. I think one of the boogeyman is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the, the cost visibility, the cost controls, to where you feel like, as an HPC administrator you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of those cost visibility and controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, "what you were spending on-premises." They went, "Oh, I didn't realize that." And so I think that's part of a cultural thing that, at an HPC, the question was, well on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me, you said you were here in 2017, people said AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned outposts, let's say that a customer says, "Hey, I'd like you to put "in a thousand-node cluster in this data center "that I happen to own, but from my perspective, "I want to interact with it just like it's "in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that's that is on-prem versus off-prem. What is that? Is that, what I just described, is that cloud? And in five years are you no longer going to be talking about migrating to cloud because people go, "What do you mean migrating to cloud? "What do you even talking about? "What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements. So that, your prediction, we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud or will that be something from the past? >> In five years, I still think there will be a component. 
I think the majority of the assumption will be that things are cloud-native and you start in the cloud and that there are perhaps, an aspect of that, that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future, "I can see that I'm shrinking my footprint." And, you can see them still saying, "I'm not sure how small that beachhead will be, "but right now I want to at least say "that I'm going to operate in that hybrid environment." And so I'd say, again, the pace of this community, I'd say five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that? That outpost sitting in someone's data center? I'd say we'd still, at least I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean cloud, it's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage with my co-host Paul Gillin. Thanks again for joining us. Stay tuned, after this short break, we'll be back with more action. (lively music)
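As a companion to Ian's point about giving HPC administrators cost visibility and controls, here is a small sketch of pulling daily spend for a tagged cluster from Cost Explorer with boto3. The tag key, tag value, and date range are assumptions made for illustration.

```python
# Sketch of basic cost visibility for a cluster identified by a cost
# allocation tag. Tag name/value and dates are illustrative assumptions.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-11-15"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "cluster", "Values": ["cfd-prod"]}},  # assumed tag
)

for day in response["ResultsByTime"]:
    amount = float(day["Total"]["UnblendedCost"]["Amount"])
    print(day["TimePeriod"]["Start"], f"${amount:,.2f}")
```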
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ian | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
400 gigs | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
Ian Colle | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
40% | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
Annapurna | ORGANIZATION | 0.99+ |
second part | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Last month | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
five years ago | DATE | 0.98+ |
five | QUANTITY | 0.98+ |
Two | QUANTITY | 0.98+ |
Supercomputing | ORGANIZATION | 0.98+ |
Lustre | ORGANIZATION | 0.97+ |
Annapurna Labs | ORGANIZATION | 0.97+ |
Trainium | ORGANIZATION | 0.97+ |
five years | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
OpEx | TITLE | 0.96+ |
both | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.96+ |
Supercomputing Conference | EVENT | 0.96+ |
first | QUANTITY | 0.96+ |
West Coast | LOCATION | 0.96+ |
thousands of dollars a day | QUANTITY | 0.96+ |
Supercomputing Conference 2022 | EVENT | 0.95+ |
CapEx | TITLE | 0.94+ |
three | QUANTITY | 0.94+ |
theCUBE | ORGANIZATION | 0.92+ |
East Coast | LOCATION | 0.91+ |
single region | QUANTITY | 0.91+ |
years | QUANTITY | 0.91+ |
thousands of nodes | QUANTITY | 0.88+ |
Parallel Cluster | TITLE | 0.87+ |
about 25 gigs | QUANTITY | 0.87+ |
Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22
>>Welcome back to The Cube's coverage of Supercomputing Conference 2022, otherwise known as SC 22, here in Dallas, Texas. This is day three of our coverage, the final day of coverage here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillin. How's it going, >>Paul? Hi, Dave. It's going good. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to the Cube. >>Thanks a lot. Thanks a lot to >>Paul. I know you're, you're chomping at >>The bit. You have incredible credentials, over 500 papers published. The impact that you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high performance computing platform that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I mean, I started with this conference in 2001. That was the first time I came. It's very coincidental. If you remember, the InfiniBand networking technology was introduced in October of 2000. Okay. So in my group, we were working on MPI for Myrinet and Quadrics. Those are the old technologies, if you can recollect. When InfiniBand came out, we were the very first ones in the world to really jump in. Nobody knew how to use InfiniBand in an HPC system. So that's how the MVAPICH project was born. And in fact, at Supercomputing 2002, on this exhibition floor in Baltimore, we had the first demonstration: the open source MVAPICH actually running on an eight-node InfiniBand cluster. And that was a big challenge. But now, over the years, I mean, we have continuously worked with all InfiniBand vendors, the MPI Forum. >>We are a member of the MPI Forum, and also all other network interconnects. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members working nonstop, continuously bringing not only performance, but scalability. If you see now, InfiniBand is being deployed in 8,000, 10,000 node clusters, and many of these clusters actually use our software stack, MVAPICH. So we have done a lot; our focus is, like, we first do research because we are in academia. We come up with good designs, we publish, and in six to nine months, we actually bring it to the open source version and people can just download and then use it. And that's how, currently, it's been used by more than 3,000 organizations in 90 countries. And, but the interesting thing is happening, your second part of the question. Now, as you know, the field is moving into not just HPC, but AI, big data, and we have those supported. This is where, like, we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but also all other workloads can take advantage of it. >>Oh, we have seen libraries that become a critical development platform supporting AI, TensorFlow and PyTorch, and the emergence of some sort of de facto languages that are driving the community. How important are these frameworks to making progress in the HPC world? >>Yeah, no, those are great. I mean, PyTorch or TensorFlow, I mean, those are now the bread and butter of deep learning, machine learning.
Am I right? But the challenge is that people use these frameworks, but continuously models are becoming larger. You need very fast turnaround time. So how do you train faster? How do you do inferencing faster? So this is where HPC comes in, and what exactly we have done is, actually, we have linked PyTorch to our MVAPICH library, because now, you see, the MPI library is running on million-core systems. Now PyTorch and TensorFlow can also be scaled to those large numbers of cores and GPUs. So we have actually done that kind of a tight coupling, and that helps the researchers to really take advantage of HPC. >>So if a high school student is thinking in terms of interesting computer science, looking for a place, looking for a university, Ohio State University, world renowned, widely known, but talk about what that looks like on a day to day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like? And is that a good pitch for people to consider the university? >>Yes. I mean, we continuously, from a university perspective, by the way, the Ohio State University is one of the largest single campuses in the US, one of the top three, top four. We have 65,000 students. Wow. It's one of the very largest campuses. And especially within computer science, where I am located, high performance computing is a very big focus. And we are one of the, again, the top schools all over the world for high performance computing. And we also have great strength in AI. So we always encourage, like, the new students who like to really work on state-of-the-art solutions, get exposed to the concepts, principles, and also practice. Okay. So we encourage those people, and we can really bring them those kinds of experiences. And many of my past students and staff, they're all in top companies now, have become all big managers. >>How, how long, how long did you say you've been >>At 31 >>Years? 31 years. 31 years. So, so you, you've had people who weren't alive when you were already doing this stuff? That's correct. They then were born. Yes. They then grew up, yes. Went to university, graduate school, and now they're on, >>Now they're in many top companies, national labs, all over the universities, all over the world. So they have been trained very well. Well, >>You've, you've touched a lot of lives, sir. >>Yes, thank you. Thank >>You. We've seen really a, a burgeoning of AI-specific hardware emerge over the last five years or so. And architectures going beyond just CPUs and GPUs, to ASICs and FPGAs and accelerators. Does this excite you? I mean, are there innovations that you're seeing in this area that you think have, have great promise? >>Yeah, there is a lot of promise. I think every time, you see now, in supercomputing technology, you see there is sometimes a big barrier jump that comes. Rather, I'll say, some new disruptive technology comes, then you move to the next level. So that's what we are seeing now. A lot of these AI chips and AI systems are coming up, which take you to the next level. But the bigger challenge is whether it is cost effective or not; can that be sustained longer? And this is where commodity technology comes in, which tries to take you farther, longer. So we might see, like, all these, Gaudi, a lot of new chips coming up; can they really bring down the cost?
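To make the MPI-and-framework coupling Dr. Panda describes a little more concrete, here is a minimal sketch of the collective operation that distributed training leans on: each rank averages its locally computed gradients with every other rank. It uses mpi4py and NumPy as stand-ins; it is not MVAPICH-specific code.

```python
# Minimal gradient-averaging sketch; run with e.g.:  mpirun -np 4 python demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Stand-in for a gradient computed locally on this rank.
local_grad = np.random.rand(1_000_000).astype(np.float32)

# Sum across all ranks, then divide so every rank holds the same average.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print(f"averaged gradients across {size} ranks")
```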
If that cost can be reduced, you will see a much bigger push for AI solutions which are cost effective. >>What, what about on the interconnect side of things? Obviously your start sort of coincided with the initial standards for InfiniBand; you know, Intel was really big in that architecture originally. Do you see interconnects like RDMA over Converged Ethernet playing a part in that sort of democratization or commoditization of things? Yes. Yes. What, what are your thoughts >>There for Ethernet? No, this is a great thing. So, so we saw InfiniBand coming. Of course, InfiniBand is commodity, is available. But then over the years people have been trying to see how those RDMA mechanisms can be used for Ethernet. And then RoCE was born. So RoCE is also being deployed. But besides these, I mean, now you talk about Slingshot, the Cray Slingshot, it is also an Ethernet-based system. And a lot of those RDMA principles are actually being used under the hood. Okay. So any modern network you see, whether it is an InfiniBand network, a RoCE link, a Slingshot network, a Rockport network, you name any of these networks, they are using all the very latest principles. And of course everybody wants to make it commodity. And this is what you see on the, on the show floor. Everybody's trying to compete against each other to give you the best performance with the lowest cost, and we'll see whoever wins over the years. >>Sort of a macroeconomic question. Japan, the US and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>Big, big thing, significantly, right? We are seeing that, I think for the last five to seven years, I think we lost that lead. But now with Frontier being the number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical, not only for fundamental research, but for national security, trying to really move the US to the leading edge. So I hope the US will continue to lead the trend for the next few years until another new system comes out. >>And one of the gating factors, there is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better for data science careers? >>Yeah, I mean, that is also very important. I mean, we always call it like a pipeline, you know; that means, even before the PhD level, we want students to get exposed to many of these concepts from the high school level. And things are actually changing. I mean, these days I see a lot of high school students, they know Python, how to program in Python, how to program in C, object-oriented things. Even they're being exposed to AI at that level. So I think that is a very healthy sign. And in fact, even from the Ohio State side, we are always engaged with all this K to 12 in many different programs and then gradually trying to take them to the next level. And I think we need to accelerate that also in a very significant manner, because we need that kind of a workforce. It is not just about building a system that is number one, but how do we really utilize it? How do we utilize that science? How do we propagate that to the community? Then we need all these trained personnel.
So in fact, in my group, we are also involved in a lot of cybertraining activities for HPC professionals. So in fact, today there is a BoF at 1:15, yeah, I think 12:15 to 1:15. We'll be talking more about that. >>About education. >>Yeah. Cybertraining: how do we do it for professionals? So we had funding together with my co-PI, Dr. Karen Tomko, from the Ohio Supercomputer Center. We have a grant from the National Science Foundation to really educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge. They don't get the time to, to learn. And the field is moving so fast. So this is how it has been. We got the initial funding, and in fact, the first time we advertised, in 24 hours we got 120 applications, 24 hours. We couldn't even take all of them. So, so we are trying to offer that in multiple phases. So, so there is a big need for those kinds of training sessions to take place. I also offer a lot of tutorials at all different conferences. We had a high performance networking tutorial. Here we have a high performance deep learning tutorial, a high performance big data tutorial. So I've been offering tutorials at, even at this conference since 2001. Good. So, >>So in the last 31 years, the Ohio State University, as my friends remind me, it is properly >>Called, >>You've seen the world get a lot smaller. Yes. Because 31 years ago, Ohio, you know, roughly in the middle of North America, and the United States were not as connected as they are now to everywhere else in the globe. So that, it kind of boggles the mind when you think of that progression over 31 years, but globally, and we talk about the world getting smaller, we're sort of in the thick of the celebratory seasons where many, many groups of people exchange gifts for varieties of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver the world, Yes. what would that be? What would, what would the first thing be? This is, this is like, it's like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>Yeah, it's very hard to answer one way, but let me bring a little bit different context and I can answer this. I talked about the MVAPICH project and all, but recently, last year actually, we got awarded an NSF AI institute award. It's a $20 million award. I am the overall PI, but there are 14 universities involved. >>And what is that institute? >>It is called ICICLE. You can just go to icicle.ai. Okay. And that aligns with exactly what you are trying to do: how to bring a lot of AI to the masses, democratizing AI. That's what is the overall goal of this, this institute. We have three verticals we are working on; think of, like, one is digital agriculture. So that will be, like, my first wish. How do you take HPC and AI to agriculture? The world just crossed 8 billion people. Yeah, that's right. We need continuous food and food security. How do we grow food with the lowest cost and with the highest yield? >>Water >>Consumption. Water consumption. Can we minimize the water consumption or the fertilization? Don't do it blindly. Technologies are out there. Like, let's say there is a wheat field. A traditional farmer sees that, yeah, there is some disease; they will just go and spray pesticides. It is not good for the environment.
Now I can fly a drone, get images of the field in real time, check them against the models, and then it'll tell me that, okay, this part of the field has disease one, this part of the field has disease two. I indicate to the tractor or the sprayer saying, okay, spray only pesticide one here, use pesticide two there. That has a big impact. So this is what we are developing in that NSF AI institute, ICICLE. We also have, we have chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation, climate change: how do you understand how the animals move? Can we learn from them? And then see how human beings need to act in the future. And the third one is food insecurity and logistics, smart food distribution. So these are our three broad goals in that institute. How do we develop cyberinfrastructure from below, combining HPC, AI, security? We have, we have a large team; like, as I said, there are 40 PIs there, 60 students. We are a hundred-member team. We are working together. So, so that will be my wish. How do we really democratize AI? >>Fantastic. I think that's a great place to wrap the conversation here on day three at Supercomputing Conference 2022 on the Cube. It was an honor, Dr. Panda, working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result: improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds. It's about serving humanity. Maybe, maybe a little, little too profound a statement; I would argue not even close. I'm Dave Nicholson with the Cube, with my cohost Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from the Cube at Supercomputing 2022 coming up shortly. >>Thanks a lot.
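To give a flavor of the digital-agriculture workflow described above, here is a toy sketch of the inference step: score drone image tiles of a field against a trained model and flag diseased patches for targeted spraying. The model architecture, checkpoint file, class labels, and tile names are all assumptions for illustration, not details from the ICICLE project.

```python
# Toy inference sketch for flagging diseased patches in drone imagery.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Assumed: a ResNet fine-tuned to three classes (healthy, disease_1, disease_2).
model = models.resnet50(num_classes=3)
model.load_state_dict(torch.load("field_disease_model.pt"))  # assumed checkpoint
model.eval()

labels = ["healthy", "disease_1", "disease_2"]

with torch.no_grad():
    for tile in ["tile_001.jpg", "tile_002.jpg"]:  # assumed image tiles
        x = preprocess(Image.open(tile)).unsqueeze(0)
        pred = labels[model(x).argmax(dim=1).item()]
        if pred != "healthy":
            print(f"{tile}: {pred} -> schedule targeted spraying")
```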
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Paul Gillum | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
October of 2000 | DATE | 0.99+ |
Paul | PERSON | 0.99+ |
NASA Science Foundation | ORGANIZATION | 0.99+ |
2001 | DATE | 0.99+ |
Baltimore | LOCATION | 0.99+ |
8,000 | QUANTITY | 0.99+ |
14 universities | QUANTITY | 0.99+ |
31 years | QUANTITY | 0.99+ |
20 million | QUANTITY | 0.99+ |
24 hours | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Karen Tom Cook | PERSON | 0.99+ |
60 students | QUANTITY | 0.99+ |
Ohio State University | ORGANIZATION | 0.99+ |
90 countries | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
Earth | LOCATION | 0.99+ |
Panda | PERSON | 0.99+ |
today | DATE | 0.99+ |
65,000 students | QUANTITY | 0.99+ |
3,200 organizations | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
Python | TITLE | 0.99+ |
United States | LOCATION | 0.99+ |
Dallas, Texas | LOCATION | 0.99+ |
over 500 papers | QUANTITY | 0.99+ |
June | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
more than 3,200 organizations | QUANTITY | 0.99+ |
120 application | QUANTITY | 0.99+ |
Ohio | LOCATION | 0.99+ |
more than 3,000 organizations | QUANTITY | 0.99+ |
first ways | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
40 PIs | QUANTITY | 0.99+ |
Asics | ORGANIZATION | 0.99+ |
MPI Forum | ORGANIZATION | 0.98+ |
China | ORGANIZATION | 0.98+ |
Two | QUANTITY | 0.98+ |
Ohio State University | ORGANIZATION | 0.98+ |
8 billion people | QUANTITY | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
HP | ORGANIZATION | 0.97+ |
Dr. | PERSON | 0.97+ |
over 20 years | QUANTITY | 0.97+ |
US | ORGANIZATION | 0.97+ |
Finman | ORGANIZATION | 0.97+ |
Rocky | PERSON | 0.97+ |
Japan | ORGANIZATION | 0.97+ |
first time | QUANTITY | 0.97+ |
first demonstration | QUANTITY | 0.96+ |
31 years ago | DATE | 0.96+ |
Ohio Super Center | ORGANIZATION | 0.96+ |
three broad goals | QUANTITY | 0.96+ |
one wish | QUANTITY | 0.96+ |
second part | QUANTITY | 0.96+ |
31 | QUANTITY | 0.96+ |
Cube | ORGANIZATION | 0.95+ |
eight | QUANTITY | 0.95+ |
over 31 years | QUANTITY | 0.95+ |
10,000 node clusters | QUANTITY | 0.95+ |
day three | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
INFIN | EVENT | 0.94+ |
seven years | QUANTITY | 0.94+ |
Dhabaleswar “DK” Panda | PERSON | 0.94+ |
three | QUANTITY | 0.93+ |
S f I institute | TITLE | 0.93+ |
first thing | QUANTITY | 0.93+ |
Lucas Snyder, Indiana University and Karl Oversteyns, Purdue University | SuperComputing 22
(upbeat music) >> Hello, beautiful humans and welcome back to Supercomputing. We're here in Dallas, Texas giving you live coverage with theCUBE. I'm joined by David Nicholson. Thank you for being my left arm today. >> Thank you Savannah. >> It's a nice little moral. Very excited about this segment. We've talked a lot about how the fusion between academia and the private sector is a big theme at this show. You can see multiple universities all over the show floor as well as many of the biggest companies on earth. We were very curious to learn a little bit more about this from people actually in the trenches. And we are lucky to be joined today by two Purdue students. We have Lucas and Karl. Thank you both so much for being here. >> One Purdue, one IU, I think. >> Savannah: Oh. >> Yeah, yeah, yeah. >> I'm sorry. Well then wait, let's give Indiana University their fair do. That's where Lucas is. And Karl is at Purdue. Sorry folks. I apparently need to go back to school to learn how to read. (chuckles) In the meantime, I know you're in the middle of a competition. Thank you so much for taking the time out. Karl, why don't you tell us what's going on? What is this competition? What brought you all here? And then let's dive into some deeper stuff. >> Yeah, this competition. So we're a joint team between Purdue and IU. We've overcome our rivalries, age old rivalries to computer at the competition. It's a multi-part competition where we're going head to head against other teams from all across the world, benchmarking our super computing cluster that we designed. >> Was there a moment of rift at all when you came together? Or was everyone peaceful? >> We came together actually pretty nicely. Our two advisors they were very encouraging and so we overcame that, no hostility basically. >> I love that. So what are you working on and how long have you guys been collaborating on it? You can go ahead and start Lucas. >> So we've been prepping for this since the summer and some of us even before that. >> Savannah: Wow. >> And so currently we're working on the application phase of the competition. So everybody has different specialties and basically the competition gives you a set of rules and you have to accomplish what they tell you to do in the allotted timeframe and run things very quickly. >> And so we saw, when we came and first met you, we saw that there are lights and sirens and a monitor looking at the power consumption involved. So part of this is how much power is being consumed. >> Karl: That's right. >> Explain exactly what are the what are the rules that you have to live within? >> So, yeah, so the main constraint is the time as we mentioned and the power consumption. So for the benchmarking phase, which was one, two days ago there was a hard camp of 3000 watts to be consumed. You can't go over that otherwise you would be penalized for that. You have to rerun, start from scratch basically. Now there's a dynamic one for the application section where it's it modulates at random times. So we don't know when it's going to go down when it's going to go back up. So we have to adapt to that in real time. >> David: Oh, interesting. >> Dealing with a little bit of real world complexity I guess probably is simulation is here. I think that's pretty fascinating. I want to know, because I am going to just confess when I was your age last week, I did not understand the power of supercomputing and high performance computing. Lucas, let's start with you. 
How did you know this was the path you wanted to go down in your academic career? >> David: Yeah, what's your background? >> Yeah, give us some. >> So my background is intelligence systems engineering which is kind of a fusion. It's between, I'm doing bioengineering and then also more classical computer engineering. So my background is biology actually. But I decided to go down this path kind of on a whim. My professor suggested it and I've kind of fallen in love with it. I did my summer internship doing HPC and I haven't looked back. >> When did you think you wanted to go into this field? I mean, in high school, did you have a special teacher that sparked it? What was it? >> Lucas: That's funny that you say that. >> What was in your background? >> Yes, I mean, in high school towards the end I just knew that, I saw this program at IU and it's pretty new and I just thought this would be a great opportunity for me and I'm loving it so far. >> Do you have family in tech or is this a different path for you? >> Yeah, this is a different path for me, but my family is so encouraging and they're very happy for me. They text me all the time. So I couldn't be happier. >> Savannah: Just felt that in my heart. >> I know. I was going to say for the parents out there get the tissue out. >> Yeah, yeah, yeah. (chuckles) >> These guys they don't understand. But, so Karl, what's your story? What's your background? >> My background, I'm a major in unmanned Aerial systems. So this is a drones commercial applications not immediately connected as you might imagine although there's actually more overlap than one might think. So a lot of unmanned systems today a lot of it's remote sensing, which means that there's a lot of image processing that takes place. Mapping of a field, what have you, or some sort of object, like a silo. So a lot of it actually leverages high performance computing in order to map, to visualize much replacing, either manual mapping that used to be done by humans in the field or helicopters. So a lot of cost reduction there and efficiency increases. >> And when did you get this spark that said I want to go to Purdue? You mentioned off camera that you're from Belgium. >> Karl: That's right. >> Did you, did you come from Belgium to Purdue or you were already in the States? >> No, so I have family that lives in the States but I grew up in Belgium. >> David: Okay. >> I knew I wanted to study in the States. >> But at what age did you think that science and technology was something you'd be interested in? >> Well, I've always loved computers from a young age. I've been breaking computers since before I can remember. (chuckles) Much to my parents dismay. But yeah, so I've always had a knack for technology and that's sort of has always been a hobby of mine. >> And then I want to ask you this question and then Lucas and then Savannah will get some time. >> Savannah: It cool, will just sit here and look pretty. >> Dream job. >> Karl: Dream job. >> Okay. So your undergrad both you. >> Savannah: Offering one of my questions. Kind of, It's adjacent though. >> Okay. You're undergrad now? Is there grad school in your future do you feel that's necessary? Is that something you want to pursue? >> I think so. Entrepreneurship is something that's been in the back of my head for a while as well. So may be or something. >> So when I say dream job, understand could be for yourself. >> Savannah: So just piggyback. >> Dream thing after academia or stay in academia. What's do you think at this point? 
>> That's a tough question. You're asking. >> You'll be able to review this video in 10 years. >> Oh boy. >> This is give us your five year plan and then we'll have you back on theCUBE and see 2027. >> What's the dream? There's people out here watching this. I'm like, go, hey, interesting. >> So as I mentioned entrepreneurship I'm thinking I'll start a company at some point. >> David: Okay. >> Yeah. In what? I don't know yet. We'll see. >> David: Lucas, any thoughts? >> So after graduation, I am planning to go to grad school. IU has a great accelerated master's degree program so I'll stay an extra year and get my master's. Dream job is, boy, that's impossible to answer but I remember telling my dad earlier this year that I was so interested in what NASA was doing. They're sending a probe to one of the moons of Jupiter. >> That's awesome. From a parent's perspective the dream often is let's get the kids off the payroll. So I'm sure that your families are happy to hear that you have. >> I think these two will be right in that department. >> I think they're going to be okay. >> Yeah, I love that. I was curious, I want to piggyback on that because I think when NASA's doing amazing we have them on the show. Who doesn't love space. >> Yeah. >> I'm also an entrepreneur though so I very much empathize with that. I was going to ask to your dream job, but also what companies here do you find the most impressive? I'll rephrase. Because I was going to say, who would you want to work with? >> David: Anything you think is interesting? >> But yeah. Have you even had a chance to walk the floor? I know you've been busy competing >> Karl: Very little. >> Yeah, I was going to say very little. Unfortunately I haven't been able to roam around very much. But I look around and I see names that I'm like I can't even, it's crazy to see them. Like, these are people who are so impressive in the space. These are people who are extremely smart. I'm surrounded by geniuses everywhere I look, I feel like, so. >> Savannah: That that includes us. >> Yeah. >> He wasn't talking about us. Yeah. (laughs) >> I mean it's hard to say any of these companies I would feel very very lucky to be a part of, I think. >> Well there's a reason why both of you were invited to the party, so keep that in mind. Yeah. But so not a lot of time because of. >> Yeah. Tomorrow's our day. >> Here to get work. >> Oh yes. Tomorrow gets play and go talk to everybody. >> Yes. >> And let them recruit you because I'm sure that's what a lot of these companies are going to be doing. >> Yeah. Hopefully it's plan. >> Have you had a second at all to look around Karl. >> A Little bit more I've been going to the bathroom once in a while. (laughs) >> That's allowed I mean, I can imagine that's a vital part of the journey. >> I've ruin my gaze a little bit to what's around all kinds of stuff. Higher education seems to be very important in terms of their presence here. I find that very, very impressive. Purdue has a big stand IU as well, but also others all from Europe as well and Asia. I think higher education has a lot of potential in this field. >> David: Absolutely. >> And it really is that union between academia and the private sector. We've seen a lot of it. But also one of the things that's cool about HPC is it's really not ageist. It hasn't been around for that long. So, I mean, well, at this scale it's obviously this show's been going on since 1988 before you guys were even probably a thought. But I think it's interesting. 
It's so fun to get to meet you both. Thank you for sharing about what you're doing and what your dreams are. Lucas and Karl. >> David: Thanks for taking the time. >> I hope you win and we're going to get you off the show here as quickly as possible so you can get back to your teams and back to competing. David, great questions as always, thanks for being here. And thank you all for tuning in to theCUBE Live from Dallas, Texas, where we are at Supercomputing. My name's Savannah Peterson and I hope you're having a beautiful day. (gentle upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Savannah | PERSON | 0.99+ |
David | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Belgium | LOCATION | 0.99+ |
Karl | PERSON | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
3000 watts | QUANTITY | 0.99+ |
Lucas | PERSON | 0.99+ |
IU | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Karl Oversteyns | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
five year | QUANTITY | 0.99+ |
Asia | LOCATION | 0.99+ |
Lucas Snyder | PERSON | 0.99+ |
Dallas, Texas | LOCATION | 0.99+ |
Purdue | ORGANIZATION | 0.99+ |
two advisors | QUANTITY | 0.99+ |
Tomorrow | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Purdue | LOCATION | 0.99+ |
1988 | DATE | 0.99+ |
last week | DATE | 0.99+ |
Jupiter | LOCATION | 0.99+ |
both | QUANTITY | 0.99+ |
Purdue University | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two days ago | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Indiana University | ORGANIZATION | 0.98+ |
Indiana University | ORGANIZATION | 0.97+ |
earlier this year | DATE | 0.93+ |
earth | LOCATION | 0.93+ |
first | QUANTITY | 0.92+ |
Supercomputing | ORGANIZATION | 0.9+ |
2027 | TITLE | 0.86+ |
HPC | ORGANIZATION | 0.8+ |
theCUBE | ORGANIZATION | 0.8+ |
States | LOCATION | 0.56+ |
second | QUANTITY | 0.48+ |
22 | QUANTITY | 0.38+ |
Anthony Dina, Dell Technologies and Bob Crovella, NVIDIA | SuperComputing 22
>>How do y'all, and welcome back to Supercomputing 2022. We're the Cube, and we are live from Dallas, Texas. I'm joined by my co-host, David Nicholson. David, hello. Hello. We are gonna be talking about data and enterprise AI at scale during this segment. And we have the pleasure of being joined by both Dell and NVIDIA. Anthony and Bob, welcome to the show. How you both doing? Doing good. >>Great. Great show so far. >>Love that. Enthusiasm, especially in the afternoon on day two. I think we all, what, what's in that cup? Is there something exciting in there that maybe we should all be sharing with you? >>Let's just say it's still, yeah, water. >>Yeah. Yeah. I love that. So I wanna make sure that, 'cause we haven't talked about this at all during the show yet, on the Cube, I wanna make sure that everyone's on the same page when we're talking about data, unstructured versus structured data. It's in your title, Anthony, tell me what, what's the difference? >>Well, look, the world has been based in analytics around rows and columns, spreadsheets, data warehouses, and we've made predictions around forecasts of sales, maintenance issues. But when we take computers and we give them eyes, ears, and fingers, cameras, microphones, and temperature and vibration sensors, we now translate that into more of the human experience. But that kind of data, the sensor data, that video camera, is unstructured or semi-structured, that's what that
It's not realistic to write procedural code that's gonna look at a picture and solve all the problems that we need to solve if we're talking about a complex problem like autonomous driving. But with AI and its ability to naturally absorb unstructured data and make intelligent reason decisions based on it, it's really a breakthrough. And that's what NVIDIA's been focusing on for at least a decade or more. >>And how does NVIDIA fit into Dell's strategy? >>Well, I mean, look, we've been partners for many, many years delivering beautiful experiences on workstations and laptops. But as we see the transition away from taking something that was designed to make something pretty on screen to being useful in solving problems in life sciences, manufacturing in other places, we work together to provide integrated solutions. So take for example, the dgx a 100 platform, brilliant design, revolutionary bus technologies, but the rocket ship can't go to Mars without the fuel. And so you need a tank that can scale in performance at the same rate as you throw GPUs at it. And so that's where the relationship really comes alive. We enable people to curate the data, organize it, and then feed those algorithms that get the answers that Bob's been talking about. >>So, so as a gamer, I must say you're a little shot at making things pretty on a screen. Come on. That was a low blow. That >>Was a low blow >>Sassy. What I, >>I Now what's in your cup? That's what I wanna know, Dave, >>I apparently have the most boring cup of anyone on you today. I don't know what happened. We're gonna have to talk to the production team. I'm looking at all of you. We're gonna have to make that better. One of the themes that's been on this show, and I love that you all embrace the chaos, we're, we're seeing a lot of trend in the experimentation phase or stage rather. And it's, we're in an academic zone of it with ai, companies are excited to adopt, but most companies haven't really rolled out their strategy. What is necessary for us to move from this kind of science experiment, science fiction in our heads to practical application at scale? Well, >>Let me take this, Bob. So I've noticed there's a pattern of three levels of maturity. The first level is just what you described. It's about having an experience, proof of value, getting stakeholders on board, and then just picking out what technology, what algorithm do I need? What's my data source? That's all fun, but it is chaos over time. People start actually making decisions based on it. This moves us into production. And what's important there is normality, predictability, commonality across, but hidden and embedded in that is a center of excellence. The community of data scientists and business intelligence professionals sharing a common platform in the last stage, we get hungry to replicate those results to other use cases, throwing even more information at it to get better accuracy and precision. But to do this in a budget you can afford. And so how do you figure out all the knobs and dials to turn in order to make, take billions of parameters and process that, that's where casual, what's >>That casual decision matrix there with billions of parameters? >>Yeah. 
Oh, I mean, >>But you're right that >>That's, that's exactly what we're, we're on this continuum, and this is where I think the partnership does really well, is to marry high performant enterprise grade scalability that provides the consistency, the audit trail, all of the things you need to make sure you don't get in trouble, plus all of the horsepower to get to the results. Bob, what would you >>Add there? I think the thing that we've been talking about here is complexity. And there's complexity in the AI problem solving space. There's complexity everywhere you look. And we talked about the idea that NVIDIA can help with some of that complexity from the architecture and the software development side of it. And Dell helps with that in a whole range of ways, not the least of which is the infrastructure and the server design and everything that goes into unlocking the performance of the technology that we have available to us today. So even the center of excellence is an example of how do I take this incredibly complex problem and simplify it down so that the real world can absorb and use this? And that's really what Dell and Vidia are partnering together to do. And that's really what the center of excellence is. It's an idea to help us say, let's take this extremely complex problem and extract some good value out of >>It. So what is Invidia's superpower in this realm? I mean, look, we're we are in, we, we are in the era of Yeah, yeah, yeah. We're, we're in a season of microprocessor manufacturers, one uping, one another with their latest announcements. There's been an ebb and a flow in our industry between doing everything via the CPU versus offloading processes. Invidia comes up and says, Hey, hold on a second, gpu, which again, was focused on graphics processing originally doing something very, very specific. How does that translate today? What's the Nvidia again? What's, what's, what's the superpower? Because people will say, well, hey, I've got a, I've got a cpu, why do I need you? >>I think our superpower is accelerated computing, and that's really a hardware and software thing. I think your question is slanted towards the hardware side, which is, yes, it is very typical and we do make great processors, but the processor, the graphics processor that you talked about from 10 or 20 years ago was designed to solve a very complex task. And it was exquisitely designed to solve that task with the resources that we had available at that time. Time. Now, fast forward 10 or 15 years, we're talking about a new class of problems called ai. And it requires both exquisite, soft, exquisite processor design as well as very complex and exquisite software design sitting on top of it as well. And the systems and infrastructure knowledge, high performance storage and everything that we're talking about in the solution today. So Nvidia superpower is really about that accelerated computing stack at the bottom. You've got hardware above that, you've got systems above that, you have middleware and libraries and above that you have what we call application SDKs that enable the simplification of this really complex problem to this domain or that domain or that domain, while still allowing you to take advantage of that processing horsepower that we put in that exquisitely designed thing called the gpu >>Decreasing complexity and increasing speed to very key themes of the show. Shocking, no one, you all wanna do more faster. 
Speaking of that, and I'm curious because you both serve a lot of different unique customers, verticals and use cases, is there a specific project that you're allowed to talk about? Or, I mean, you know, you wanna give us the scoop, that's totally cool too. We're here for the scoop on the cube, but is there a specific project or use case that has you personally excited Anthony? We'll start with that. >>Look, I'm, I've always been a big fan of natural language processing. I don't know why, but to derive intent based on the word choices is very interesting to me. I think what compliments that is natural language generation. So now we're having AI programs actually discover and describe what's inside of a package. It wouldn't surprise me that over time we move from doing the typical summary on the economic, the economics of the day or what happened in football. And we start moving that towards more of the creative advertising and marketing arts where you are no longer needed because the AI is gonna spit out the result. I don't think we're gonna get there, but I really love this idea of human language and computational linguistics. >>What a, what a marriage. I agree. Think it's fascinating. What about you, Bob? It's got you >>Pumped. The thing that really excites me is the problem solving, sort of the tip of the spear in problem solving. The stuff that you've never seen before, the stuff that you know, in a geeky way kind of takes your breath away. And I'm gonna jump or pivot off of what Anthony said. Large language models are really one of those areas that are just, I think they're amazing and they're just kind of surprising everyone with what they can do here on the show floor. I was looking at a demonstration from a large language model startup, basically, and they were showing that you could ask a question about some obscure news piece that was reported only in a German newspaper. It was about a little shipwreck that happened in a hardware. And I could type in a query to this system and it would immediately know where to find that information as if it read the article, summarized it for you, and it even could answer questions that you could only only answer by looking pic, looking at pictures in that article. Just amazing stuff that's going on. Just phenomenal >>Stuff. That's a huge accessibility. >>That's right. And I geek out when I see stuff like that. And that's where I feel like all this work that Dell and Invidia and many others are putting into this space is really starting to show potential in ways that we wouldn't have dreamed of really five years ago. Just really amazing. And >>We see this in media and entertainment. So in broadcasting, you have a sudden event, someone leaves this planet where they discover something new where they get a divorce and they're a major quarterback. You wanna go back somewhere in all of your archives to find that footage. That's a very laborist project. But if you can use AI technology to categorize that and provide the metadata tag so you can, it's searchable, then we're off to better productions, more interesting content and a much richer viewer experience >>And a much more dynamic picture of what's really going on. Factoring all of that in, I love that. I mean, David and I are both nerds and I know we've had take our breath away moments, so I appreciate that you just brought that up. Don't worry, you're in good company. In terms of the Geek Squad over >>Here, I think actually maybe this entire show for Yes, exactly. 
>>I mean, we were talking about how steampunk some of the liquid cooling stuff is, and you know, this is the only place on earth really, or the only show where you would come and see it at this level in scale and, and just, yeah, it's, it's, it's very, it's very exciting. How important for the future of innovation in HPC are partnerships like the one that Navia and Dell have? >>You wanna start? >>Sure, I would, I would just, I mean, I'm gonna be bold and brash and arrogant and say they're essential. Yeah, you don't not, you do not want to try and roll this on your own. This is, even if we just zoomed in to one little beat, little piece of the technology, the software stack that do modern, accelerated deep learning is incredibly complicated. There can be easily 20 or 30 components that all have to be the right version with the right buttons pushed, built the right way, assembled the right way, and we've got lots of technologies to help with that. But you do not want to be trying to pull that off on your own. That's just one little piece of the complexity that we talked about. And we really need, as technology providers in this space, we really need to do as much as we do to try to unlock the potential. We have to do a lot to make it usable and capable as well. >>I got a question for Anthony. All >>Right, >>So in your role, and I, and I'm, I'm sort of, I'm sort of projecting here, but I think, I think, I think your superpower personally is likely in the realm of being able to connect the dots between technology and the value that that technology holds in a variety of contexts. That's right. Whether it's business or, or whatever, say sentences. Okay. Now it's critical to have people like you to connect those dots. Today in the era of pervasive ai, how important will it be to have AI have to explain its answer? In other words, words, should I trust the information the AI is giving me? If I am a decision maker, should I just trust it on face value? Or am I going to want a demand of the AI kind of what you deliver today, which is No, no, no, no, no, no. You need to explain this to me. How did you arrive at that conclusion, right? How important will that be for people to move forward and trust the results? We can all say, oh hey, just trust us. Hey, it's ai, it's great, it's got Invidia, you know, Invidia acceleration and it's Dell. You can trust us, but come on. So many variables in the background. It's >>An interesting one. And explainability is a big function of ai. People want to know how the black box works, right? Because I don't know if you have an AI engine that's looking for potential maladies in an X-ray, but it misses it. Do you sue the hospital, the doctor or the software company, right? And so that accountability element is huge. I think as we progress and we trust it to be part of our everyday decision making, it's as simply as a recommendation engine. It isn't actually doing all of the decisions. It's supporting us. We still have, after decades of advanced technology algorithms that have been proven, we can't predict what the market price of any object is gonna be tomorrow. And you know why? You know why human beings, we are so unpredictable. How we feel in the moment is radically different. And whereas we can extrapolate for a population to an individual choice, we can't do that. So humans and computers will not be separated. It's a, it's a joint partnership. But I wanna get back to your point, and I think this is very fundamental to the philosophy of both companies. 
Yeah, it's about a community. It's always about the people sharing ideas, getting the best. And anytime you have a center of excellence and algorithm that works for sales forecasting may actually be really interesting for churn analysis to make sure the employees or students don't leave the institution. So it's that community of interest that I think is unparalleled at other conferences. This is the place where a lot of that happens. >>I totally agree with that. We felt that on the show. I think that's a beautiful note to close on. Anthony, Bob, thank you so much for being here. I'm sure everyone feels more educated and perhaps more at peace with the chaos. David, thanks for sitting next to me asking the best questions of any host on the cube. And thank you all for being a part of our community. Speaking of community here on the cube, we're alive from Dallas, Texas. It's super computing all week. My name is Savannah Peterson and I'm grateful you're here. >>So I.
Andrea Booker, Dell Technologies | SuperComputing 22
>> Hello everyone and welcome back to theCUBE, where we're live from Dallas, Texas here at Super computing 2022. I am joined by my cohost David Nicholson. Thank you so much for being here with me and putting up with my trashy jokes all day. >> David: Thanks for having me. >> Yeah. Yes, we are going to be talking about AI this morning and I'm very excited that our guest has has set the stage for us here quite well. Please welcome Andrea Booker. Andrea, thank you so much for being here with us. >> Absolutely. Really excited to be here. >> Savannah: How's your show going so far? >> It's been really cool. I think being able to actually see people in person but also be able to see the latest technologies and and have the live dialogue that connects us in a different way than we have been able to virtually. >> Savannah: Oh yeah. No, it's all, it's all about that human connection and that it is driving towards our first question. So as we were just chit chatting. You said you are excited about making AI real and humanizing that. >> Andrea: Absolutely. >> What does that mean to you? >> So I think when it comes down to artificial intelligence it means so many different things to different people. >> Savannah: Absolutely. >> I was talking to my father the other day for context, he's in his late seventies, right. And I'm like, oh, artificial intelligence, this or that, and he is like, machines taking over the world. Right. >> Savannah: Very much the dark side. >> A little bit Terminator. And I'm like, well, not so much. So that was a fun discussion. And then you flip it to the other side and I'm talking to my 11 year old daughter and she's like, Alexa make sure you know my song preferences. Right. And that's the other very real way in which it's kind of impacting our lives. >> Savannah: Yeah. >> Right. There's so many different use cases that I don't think everyone understands how that resonates. Right. It's the simple things from, you know, recommend Jason Engines when you're on Amazon and it suggests just a little bit more. >> Oh yeah. >> I'm a little bit to you that one, right. To stuff that's more impactful in regards to getting faster diagnoses from your doctors. Right. Such peace of mind being able to actually hear that answer faster know how to go tackle something. >> Savannah: Great point, yeah. >> You know, and, and you know, what's even more interesting is from a business perspective, you know the projections are over the next five years about 90% of customers are going to use AI applications in in some fashion, right. >> Savannah: Wow. >> And the reason why that's interesting is because if you look at it today, only about 15% of of them are doing so. Right. So we're early. So when we're talking growth and the opportunity, it's, it's amazing. >> Yeah. I can, I can imagine. So when you're talking to customers, what are are they excited? Are they nervous? Are you educating them on how to apply Dell technology to advance their AI? Where are they off at because we're so early? >> Yeah well, I think they're figuring it out what it means to them, right? >> Yeah. Because there's so many different customer applications of it, right? You have those in which, you know, are on on the highest end in which that our new XE products are targeting that when they think of it. You know, I I, I like to break it down in this fashion in which artificial intelligence can actually save human lives, right? And this is those extreme workloads that I'm talking about. 
We actually can develop a Covid vaccine faster, right. Pandemic tracking, you know with global warming that's going on. And we have these extreme weather events with hurricanes and tsunamis and all these things to be able to get advanced notice to people to evacuate, to move. I mean, that's a pretty profound thing. And it is, you know so it could be used in that way to save lives, right? >> Absolutely. >> Which is it's the natural outgrowth of the speeds and feeds discussions that we might have internally. It's, it's like, oh, oh, speed doubled. Okay. Didn't it double last year? Yeah. Doubled last year too. So it's four x now. What does that mean to your point? >> Andrea: Yeah, yeah. >> Savannah: Yeah. >> Being able to deliver faster insight insights that are meaningful within a timeframe when otherwise they wouldn't be meaningful. >> Andrea: Yeah. >> If I tell you, within a two month window whether it's going to rain this weekend, that doesn't help you. In hindsight, we did the calculation and we figured out it's going to be 40 degrees at night last Thursday >> Knowing it was going to completely freeze here in Dallas to our definition in Texas but we prepare better to back to bring clothes. >> We were talking to NASA about that yesterday too. I mean, I think it's, it's must be fascinating for you to see your technology deployed in so many of these different use cases as well. >> Andrea: Absolutely, absolutely. >> It's got to be a part of one of the more >> Andrea: Not all of them are extreme, right? >> Savannah: Yeah. >> There's also examples of, you know natural language processing and what it does for us you know, the fact that it can break down communication barriers because we're global, right? We're all in a global environment. So if you think about conference calls in which we can actually clearly understand each other and what the intent is, and the messaging brings us closer in different ways as well. Which, which is huge, right? You don't want things lost in translation, right? So it, it helps on so many fronts. >> You're familiar with the touring test idea of, of, you know whether or not, you know, the test is if you can't discern within a certain number of questions that you're interacting with an AI versus a real human, then it passes the touring test. I think there should be a natural language processing test where basically I say, fine >> Andrea: You see if people was mad or not. >> You tell me, you tell me. >> I love this idea, David. >> You know? >> Yeah. This is great. >> Okay. AI lady, >> You tell me what I meant. >> Yeah, am I actually okay? >> How far from, that's silly example but how far do you think we are from that? I mean, what, what do you seeing out there in terms of things where you're kind of like, whoa, they did this with technology I'm responsible for, that was impressive. Or have you heard of things that are on the horizon that, you know, again, you, you know they're the big, they're the big issues. >> Yeah. >> But any, anything kind of interesting and little >> I think we're seeing it perfected and tweaked, right? >> Yeah. >> You know, I think going back to my daughter it goes from her screaming at Alexa 'cause she did hear her right the first time to now, oh she understands and modifies, right? Because we're constantly tweaking that technology to have a better experience with it. And it's a continuum, right? The voice to text capabilities, right. You know, I I'd say early on it got most of those words, right Right now it's, it's getting pretty dialed in. 
Right. >> Savannah: That's a great example. >> So, you know, little things, little things. >> Yeah. I think I, I love the, the this thought of your daughter as the example of training AI. What, what sort of, you get to look into the future quite a bit, I'm sure with your role. >> Andrea: Absolutely. >> Where, what is she going to be controlling next? >> The world. >> The world. >> No, I mean if you think about it just from a generational front, you know technology when I was her age versus what she's experiencing, she lives and breathes it. I mean, that's the generational change. So as these are coming out, you have new folks growing with it that it's so natural that they are so open to adopting it in their common everyday behaviors. Right? >> Savannah: Yeah. >> But they'd they never, over time they learn, oh well how it got there is 'cause of everything we're doing now, right. >> Savannah: Yeah. >> You know, one, one fun example, you know as my dad was like machines are taking over the world is not, not quite right. Even if when you look at manufacturing, there's a difference in using AI to go build a digital simulation of a factory to be able to optimize it and design it right before you're laying the foundation that saves cost, time and money. That's not taking people's jobs in that extreme event. >> Right. >> It's really optimizing for faster outcomes and, and and helping our customers get there which is better for everyone. >> Savannah: Yeah and safer too. I mean, using the factory example, >> Totally safer. >> You're able to model out what a workplace injury might be or what could happen. Or even the ergonomics of how people are using. >> Andrea: Yeah, should it be higher so they don't have to bend over? Right. >> Exactly. >> There's so many fantastic positive ways. >> Yeah so, so for your dad, you know, I mean it's going to help us, it's going to make, it's going to take away when I. Well I'm curious what you think, David when I think about AI, I think it's going to take out a lot of the boring things in life that, that we don't like >> Andrea: Absolutely. Doing. The monotony and the repetitive and let us optimize our creative selves maybe. >> However, some of the boring things are people's jobs. So, so it is, it it it will, it will it will push a transition in our economy in the global economy, in my opinion. That would be painful for some, for some period of time. But overall beneficial, >> Savannah: Yes. But definitely as you know, definitely there will be there will be people who will be disrupted and, you know. >> Savannah: Tech's always kind of done that. >> We No, but we need, I, I think we need to make sure that the digital divide doesn't get so wide that you know that, that people might not be negative, negatively affected. And, but, but I know that like organizations like Dell I believe what you actually see is, >> Andrea: Yeah. >> No, it's, it's elevating people. It's actually taking away >> Andrea: Easier. >> Yeah. It's, it's, it's allowing people to spend their focus on things that are higher level, more interesting tasks. >> Absolutely. >> David: So a net, A net good. But definitely some people disrupted. >> Yes. >> I feel, I feel disrupted. >> I was going to say, are, are we speaking for a friend or for ourselves here today on stage? >> I'm tired of software updates. So maybe if you could, if you could just standardize. So AI and ML. >> Andrea: Yeah. >> People talk about machine learning and, and, and and artificial intelligence. How would you differentiate the two? 
>>Savannah: Good question.
>>It's just the different applications and the different workloads of it, right? Because you have artificial intelligence, you have machine learning, in which it's learning from itself, and then you have deep learning, in which it's diving deeper in its execution and modeling. And it really depends on the workload and the applications, as well as how large the data set is that's feeding into them. That really leads into why we have to make sure we have the versatility in our offerings to be able to meet every dimension of that. You know, our XE products that we announced are really targeted for those extreme AI HPC workloads, versus our entire portfolio of products, where we make sure we have GPU diversity throughout for the other applications that may be more edge centric or telco centric, right? Because AI isn't just these extreme situations; it's also at the edge, it's in the cloud, it's in the data center. So we want to make sure we have versatility in our offerings and we're really meeting customers where they're at in regards to the implementation and the AI workloads that they have.
>>Savannah: Let's dig in a little bit there. So what should customers expect with the next generation acceleration trends that Dell's addressing in your team? You had three exciting product announcements here.
>>Andrea: We did, we did.
>>Which is very exciting. So you can talk about that a little bit and give us a little peek.
>>Sure. So for the most extreme applications we have the XE portfolio that we built upon, right? We already had the XE8545, and we've expanded that out in a couple of ways. The first of which is our very first XE9680, an eight-way offering in which we have NVIDIA's H100 as well as A100, because we want choice, right? A choice between performance, power, and what your needs really are.
>>Savannah: Is that the first time you've combined?
>>Andrea: It's the first time we've had an eight-way offering.
>>Yeah.
>>Andrea: But we did so mindful that the technology is changing so much from a thermal perspective, as well as price and other influencers, that we wanted that choice baked into our next generation of product as we entered the space.
>>Savannah: Yeah, yeah.
>>The other two products we have are both in the four-way SXM and OAM implementation, and we really focus on diversifying, and not only in vendor partnerships. The XE9640 is based on Intel's Data Center GPU Max. We have the XE8640 that is going to be on NVIDIA's NVLink with their latest H100. But the key differentiator is we have air cooled and we have liquid cooled, right? So it depends on where you are in that data center journey. I think one of the common themes you've heard is thermals are going up, performance is going up, TDPs are going up, power is going up, right?
>>Savannah: Yeah.
>>So how do we kind of meet in the middle to be able to accommodate for that?
>>Savannah: I think it's incredible how many different types of customers you're able to accommodate. I mean, it's really impressive. I feel lucky we've gotten to see these products you're describing. They're here on the show floor. There's millions of dollars of hardware literally sitting in your booth.
>>Andrea: Oh yes.
>>Which is casual only.
>>Pies for you. Yeah.
>>Yeah.
We were, we were chatting over there yesterday and, and oh, which, which, you know which one of these is more expensive? And the response was, they're both expensive. It was like, okay perfect >> But assume the big one is more. >> David: You mentioned, you mentioned thermals. One of the things I've been fascinated by walking around is all of the different liquid cooling solutions. >> Andrea: Yeah. >> And it's almost hysterical. You look, you look inside, it looks like something from it's like, what is, what is this a radiator system for a 19th century building? >> Savannah: Super industrial? >> Because it looks like Yeah, yeah, exactly. Exactly, exactly. It's exactly the way to describe it. But just the idea that you're pumping all of this liquid over this, over this very, very valuable circuitry. A lot of the pitches have to do with, you know this is how we prevent disasters from happening based on the cooling methods. >> Savannah: Quite literally >> How, I mean, you look at the power requirements of a single rack in a data center, and it's staggering. We've talked about this a lot. >> Savannah: Yeah. >> People who aren't kind of EV you know electric vehicle nerds don't appreciate just how much power 90 kilowatts of power is for an individual rack and how much heat that can generate. >> Andrea: Absolutely. >> So Dell's, Dell's view on this is air cooled water cooled figure it out fit for for function. >> Andrea: Optionality, optionality, right? Because our customers are a complete diverse set, right? You have those in which they're in a data center 10 to 15 kilowatt racks, right? You're not going to plum a liquid cool power hungry or air power hungry thing in there, right? You might get one of these systems in, in that kind of rack you know, architecture, but then you have the middle ground the 50 to 60 is a little bit of choice. And then the super extreme, that's where liquid cooling makes sense to really get optimized and have the best density and, and the most servers in that solution. So that's why it really depends, and that's why we're taking that approach of diversity, of not only vendors and, and choice but also implementation and ways to be able to address that. >> So I think, again, again, I'm, you know electric vehicle nerd. >> Yeah. >> It's hysterical when you, when you mention a 15 kilowatt rack at kind of flippantly, people don't realize that's way more power than the average house is consuming. >> Andrea: Yeah, yeah >> So it's like your entire house is likely more like five kilowatts on a given day, you know, air conditioning. >> Andrea: Maybe you have still have solar panel. >> In Austin, I'm sorry >> California, Austin >> But, but, but yeah, it's, it's staggering amounts of power staggering amounts of heat. There are very real problems that you guys are are solving for to drive all of these top line value >> Andrea: Yeah. >> Propositions. It's super interesting. >> Savannah: It is super interesting. All right, Andrea, last question. >> Yes. Yes. >> Dell has been lucky to have you for the last decade. What is the most exciting part about you for the next decade of your Dell career given the exciting stuff that you get to work on. >> I think, you know, really working on what's coming our way and working with my team on that is is just amazing. You know, I can't say it enough from a Dell perspective I have the best team. I work with the most, the smartest people which creates such a fun environment, right? 
So then when we're looking at all this optionality and and the different technologies and, and, and you know partners we work with, you know, it's that coming together and figuring out what's that best solution and then bringing our customers along that journey. That kind of makes it fun dynamic that over the next 10 years, I think you're going to see fantastic things. >> David: So I, before, before we close, I have to say that's awesome because this event is also a recruiting event where some of these really really smarts students that are surrounding us. There were some sirens going off. They're having competitions back here. >> Savannah: Yeah, yeah, yeah. >> So, so when they hear that. >> Andrea: Where you want to be. >> David: That's exactly right. That's exactly right. >> Savannah: Well played. >> David: That's exactly right. >> Savannah: Well played. >> Have fun. Come on over. >> Well, you've certainly proven that to us. Andrea, thank you so much for being with us This was such a treat. David Nicholson, thank you for being here with me and thank you for tuning in to theCUBE a lot from Dallas, Texas. We are all things HPC and super computing this week. My name's Savannah Peterson and we'll see you soon. >> Andrea: Awesome.
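A rough way to see the air-versus-liquid break points discussed in this segment is simply to total up what a rack draws. The sketch below is illustrative only: the per-server wattages, servers-per-rack counts, and the roughly 20 kW air-cooling ceiling are assumptions made for the example, not Dell figures.

```python
# Illustrative sketch: estimate rack power and pick a cooling approach.
# All numbers here are assumptions for the example, not vendor specifications.

SERVER_POWER_KW = {
    "2-socket CPU node": 0.8,        # assumed draw under load
    "4-GPU air-cooled node": 4.5,
    "8-GPU SXM node": 10.0,
}
AIR_COOLING_LIMIT_KW = 20.0          # assumed practical ceiling for an air-cooled rack

def rack_power_kw(server_type: str, servers_per_rack: int) -> float:
    """Total draw for a rack populated with one server type."""
    return SERVER_POWER_KW[server_type] * servers_per_rack

def cooling_choice(kw: float) -> str:
    return "liquid cooling" if kw > AIR_COOLING_LIMIT_KW else "air cooling"

for server, count in [("2-socket CPU node", 20),
                      ("4-GPU air-cooled node", 8),
                      ("8-GPU SXM node", 8)]:
    kw = rack_power_kw(server, count)
    print(f"{count:>2} x {server:<22} -> {kw:5.1f} kW per rack -> {cooling_choice(kw)}")
```

With those assumed numbers, a CPU-only rack stays in air-cooled territory, while dense GPU racks land in the 36-80 kW range described in the conversation, where liquid cooling becomes the practical choice.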
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
>>You can put this in a conference.
>>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with theCUBE, live from Supercomputing 2022. David, my cohost, how are you doing? Exciting. Day two. Feeling good.
>>Very exciting. Ready to start off the day.
>>Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us.
>>Thanks for having us.
>>For having us.
>>I'm excited that you're starting off the day, because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. Y'all seem all in on Ethernet. Tell us about that. Armando, why don't you start?
>>Yeah. I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, you know, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and enterprise customers. And not everybody wants to be in the Top500. What they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for Ethernet in that space and for those types of jobs.
>>I love that. Pete, you wanna elaborate?
>>Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now converged toward Ethernet. There are still some technologies out there, such as InfiniBand and Omni-Path, but basically they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you also see that, because Ethernet is used in the rest of the enterprise and in the cloud data centers, it is very easy to integrate HPC-based systems into those environments. So as you move HPC out of academia into the enterprise and into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks.
>>So what's the state of the art for Ethernet right now? What's the leading edge, what's shipping now, and what's in the near future? You're with Broadcom; you guys design this stuff.
>>Yeah, right. So, leading edge right now, I've got a couple we've staged right here on theCUBE.
>>Yeah.
>>So this is Tomahawk 4. This is what is in production and shipping in large data centers worldwide. We started sampling this in 2019, it started going into data centers in 2020, and this is 25.6 terabits per second, which matches any other technology out there. If you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6. So state of the art right now is what we introduced, and we announced this in August: this is Tomahawk 5. This is 51.2 terabits per second, so double the bandwidth of any other technology that's out there. And the important thing about networking technology is that when you double the bandwidth, you don't just double the efficiency; it actually winds up being a factor of six in efficiency.
>>Wow.
>>'Cause if you want, I can go into that.
>>But why not? Well, what I wanna know is, please tell me that in your labs you have a poster on the wall that says T5 with some Terminator kind of character, 'cause that would be cool. If it's not true, just don't say anything.
>>I can actually shift the visual into a Terminator. So.
>>Well, so this is from a switching perspective. When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of the NICs that are going in there? What speed are we talking about today?
>>So as far as NIC speeds, it tends to be 50 gigabits per second, moving to a hundred gig PAM4. And we do see a lot of NICs at the 200 gig Ethernet port speed, so that would be four lanes of 50 gig. But we do see that advancing to 400 gig fairly soon, and 800 gig in the future. But state of the art right now for the end nodes tends to be 200 GbE based on 50 gig PAM4.
>>Wow. Yeah. That's crazy.
>>That is great. My mind is actively blown. I wanna circle back to something that you brought up a second ago, which I think is really astute, when you talked about HPC moving from academia into enterprise. You're both seeing this happen. Where do you think we are on the adoption curve and in that cycle? Armando, do you wanna go?
>>Yeah. Well, if you look at the market research, it's actually telling us it's 50/50 now. So Ethernet is at the level of 50%, and InfiniBand is at 50%. Interesting, right? And what's interesting to us is that customers are coming to us and saying, hey, we want to see flexibility and choice; let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have our switches in our lab, and really what we're trying to do is make it easy and simple to configure the network, essentially for MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there are gonna be some learning curves there. And so what we wanna do is really simplify that, so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster.
>>Yeah. Peter, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say, you know what would be cool? We'll put this in the T6.
>>No, we've had a very long partnership on both the hardware and the software side. Dell has been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system side, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we actually have three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way, when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers, to have that continuity.
And also they give us feedback on the next-gen features they'd like to see, again on both the hardware and the software.
>>So I'm fascinated by, I always like to know, the outward limits. Look, you start talking about the largest, most powerful supercomputers that exist today, and you start looking at the specs: there might be 2 million CPU cores, an exaflop of performance. What are the outward limits of the T5 in switches, building out a fabric? What does that look like? And I know it's a "depends" answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today?
>>Yeah, so this is 51.2 terabits per second. The most common implementation we see based on this would be with 400 gig Ethernet ports. So that would be 128 400-gig ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop you can have 256 end nodes connected through one switch.
>>So, okay, this T5, that thing right there, sits inside a sheet metal box, and obviously you've got a bunch of ports coming out of that. What does the form factor look like for where that T5 sits? Is there just one in a chassis, or what does that look like?
>>It tends to be pizza boxes these days. What you've seen overall is that the industry has moved away from chassis for these high-end systems, more towards pizza boxes. And you can have composable systems where, in the past, you would have line cards and the fabric cards that the line cards plug into or interface to; these days, what tends to happen is you have a pizza box, and if you want to build up something like a virtual chassis, you use one of those pizza boxes as the fabric card and one of them as the line card.
>>Okay.
>>So the most common form factor we see for this, I'd say for North America, would be a 2RU box with 64 OSFP ports. And often each of those OSFPs, which is an 800 GbE or 800 gig port, is broken out into two 400 gig ports. So in 2RU, and this is all air cooled, you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a 4U just so they have the faceplate density to plug in 128, say, QSFP112. But it really depends on which optics, and whether you want DAC connectivity combined with optics. Those are the two most common form factors.
>>And Armando, Ethernet isn't necessarily Ethernet, in the sense that many protocols can be run over it. I think I have a projector at home that's actually using Ethernet physical connections. So what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over Converged Ethernet? What are we talking about?
>>Yeah, so it's RDMA, right?
So when you look at running essentially HPC workloads, you have the MPI protocol, the message passing interface, right? And so what you need to do is make sure that MPI runs efficiently on Ethernet. And this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet. If you look at MPI, it was, hey, designed to run on InfiniBand, but now, with Broadcom and the great work they're doing, we can make that work on Ethernet and get the same performance. So that's huge for customers.
>>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're gonna be next year, or 10 years from now?
>>You wanna go first or you want me to go first? I can start.
>>Yeah. Pete feels ready.
>>So what I see with Ethernet, what we've seen on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months.
>>That's impressive.
>>Yeah.
>>Nicely done. Casual humble brag there. That was great. I love that.
>>I'm here for you. I mean, I think that's one of the benefits of Ethernet: the ecosystem, the trajectory, the roadmap we've had. You don't see that in any other networking technology.
>>So I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are evolving. Especially, again, as you're moving away from academia into the enterprise and into cloud data centers, you need to have a combination of protocols. So you'll probably still focus on RDMA for the supercomputing and AI/ML workloads, but we do see that, as you have a mix of applications running on these end nodes, maybe interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be a doubling of bandwidth over time and an evolution of the protocols. I expect that RoCE is probably gonna evolve over time depending on the AI/ML and HPC workloads. I think there's also a big change coming as far as the physical connectivity within the data center. One thing we've been focusing on is co-packaged optics. Right now this chip is all electrical connections, all the balls on the back here.
>>How many are there, by the way? 9,000 plus on the back of that?
>>9,352.
>>I love how specific it is. It's brilliant.
>>Yeah. So right now all the SerDes, all the signals, come out electrically, but we've actually shown, we have a version of Tomahawk 4 at 25.6 terabits that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5, an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides.
>>Wow. Cool.
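To make the MPI-over-Ethernet point in this exchange concrete: the application code is the same whether the fabric underneath is InfiniBand or Ethernet with RoCE; the transport is selected by the MPI library and launcher, not by the program. A minimal sketch using mpi4py follows; the mpirun flags in the trailing comment are typical of Open MPI built with UCX and will differ by MPI distribution, so treat them as an assumption rather than a prescription.

```python
# allreduce_demo.py - the same MPI program runs over InfiniBand or RoCE;
# only launcher/transport settings change, never the application code.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local = np.full(4, rank, dtype=np.float64)   # each rank contributes its rank id
result = np.empty_like(local)
comm.Allreduce(local, result, op=MPI.SUM)    # latency/bandwidth-sensitive collective

if rank == 0:
    print(f"{size} ranks, allreduce result: {result}")

# Typical launch with Open MPI + UCX (flags vary by distribution):
#   mpirun -np 64 --mca pml ucx python allreduce_demo.py
```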
So I see there's the bandwidth, there's the radix increasing, the protocols, the different physical connectivity. So I think there's a lot happening throughout, and the protocol stack is also evolving. So, a lot of excitement, a lot of new technology coming to bear.
>>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay.
>>All right.
>>So I think of individual discrete physical connections to the back of those balls. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in that many optical connections? What's the mapping there? What does that look like?
>>So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out; basically on all four sides you'd have these fiber ribbons that come in and connect, so there are actually fibers coming out of the sides there. Actually, I think in this case we would have 512 channels, and it would wind up being on 128 actual fiber pairs, because...
>>It's miraculous, essentially. I know. Yeah. So, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC, which sort of trails slightly behind supercomputing as we define it, becomes more pervasive with AI and ML. What are some of the other things that people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages?
>>Yeah, I mean, that's a big thing. I think one of the biggest things Ethernet has, again, is that the data centers, the networks within enterprises and within clouds right now, run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that, if you look at what's happening with some of the other proprietary technologies, in some cases they'll have two different types of networking technology before they interface to Ethernet. So now you've got to train your technicians and your sysadmins on two different network technologies, and you need all the debug tooling and all the interconnect for that. Here, the easiest thing is you can use Ethernet; it's gonna give you the same performance, and actually in some cases we've seen better performance than we've seen with Omni-Path, better than InfiniBand.
>>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of Ethernet here in HPC?
>>Well, Pete hit on a big thing: bandwidth, right? So when you look at training a model, when you go and train a model in AI, you need to have a lot of data in order to train that model, right?
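The switch figures quoted in this exchange (25.6 and 51.2 Tb/s ASICs, 128 x 400 GbE ports, 256 nodes at 200 GbE in a single hop) translate directly into cluster size. A back-of-the-envelope sketch is below; the non-blocking two-tier fat-tree assumption is ours, added purely for illustration.

```python
# Back-of-the-envelope fabric sizing from switch ASIC bandwidth.
# Assumes full-duplex ports and a non-blocking two-tier (leaf/spine) fat tree.

def port_count(asic_tbps: float, port_gbps: int) -> int:
    return int(asic_tbps * 1000 // port_gbps)

def single_switch_nodes(asic_tbps: float, nic_gbps: int) -> int:
    return port_count(asic_tbps, nic_gbps)

def two_tier_nodes(asic_tbps: float, nic_gbps: int) -> int:
    p = port_count(asic_tbps, nic_gbps)   # leaf radix at the NIC speed
    return p * p // 2                     # half the leaf ports go down, half up

for asic in (25.6, 51.2):                 # Tomahawk 4-class vs Tomahawk 5-class
    print(f"{asic} Tb/s ASIC: {port_count(asic, 400)} x 400GbE ports, "
          f"{single_switch_nodes(asic, 200)} nodes at 200GbE in one hop, "
          f"{two_tier_nodes(asic, 200)} nodes in a two-tier non-blocking fabric")
```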
So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, but if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the cpu. And essentially, if you're gonna do it maybe on CPU only, but if you do it on accelerators, well guess what? You need a big pipe in order to get all that data through. And here's the deal. The bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage. Maybe it's some new way you design a product, but that's a benefit of speed you want faster, faster, faster. >>It's all about making it faster and easier. It is for, for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas Stakes, there's a lot going on with with that making >>Me hungry. >>I know exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think you just came, came from a list. So we had, we have a tri end product line. Ah, a missile product line. And Tomahawk is being kinda like, you know, the bigger and batter missile, so, oh, okay. >>Love this. Yeah, I, well, I >>Mean, so you let your engineers, you get to name it >>Had to ask. It's >>Collaborative. Oh good. I wanna make sure everyone's in sync with it. >>So just so we, it's not the Aquaman tried. Right, >>Right. >>The steak Tomahawk. I >>Think we're, we're good now. Now that we've cleared that up. Now we've cleared >>That up. >>Armando P, it was really nice to have both you. Thank you for teaching us about the future of ethernet N hpc. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to the Cube Live from Dallas. We're here talking all things HPC and Supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us.
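Armando's "bigger pipe, faster training" point is easy to put numbers on: the time just to stream a training dataset from storage to the accelerators scales inversely with link speed. The dataset size and the 70% sustained-efficiency factor below are assumptions chosen for illustration, not measurements.

```python
# Time to stage a training dataset at different Ethernet link speeds.
DATASET_TB = 50        # assumed dataset size
EFFICIENCY = 0.70      # assumed fraction of line rate actually sustained

def staging_hours(link_gbps: int, dataset_tb: float = DATASET_TB) -> float:
    bits = dataset_tb * 8e12                       # terabytes -> bits
    return bits / (link_gbps * 1e9 * EFFICIENCY) / 3600

for gbps in (25, 100, 200, 400):
    print(f"{gbps:>3} GbE: {staging_hours(gbps):5.1f} hours to move {DATASET_TB} TB")
```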
Rajesh Pohani, Dell Technologies | SuperComputing 22
>>Good afternoon friends, and welcome back to Supercomputing. We're live here at the Cube in Dallas. I'm joined by my co-host, David. My name is Savannah Peterson, and we have a fabulous guest. I feel like this is almost his show to a degree, given his role at Dell. He is the Vice President of HPC over at Dell. Rajesh Pohani, thank you so much for being on the show with us. How you doing?
>>Thank you guys. I'm doing okay. Good to be back in person. This is a great show. It's really filled in nicely today and, and you know, a lot of great stuff happening.
>>It's great to be around all of our fellow hardware nerds. The Dell portfolio grew by three products. It did, I believe. Can you give us a bit of an intro on
>>That? Sure. Well, yesterday afternoon and yesterday evening, we had a series of events that announced our new AI portfolio, artificial intelligence portfolio, you know, which will really help scale where I think the world is going in the future with, with the creation of, of all this data and what we can do with it. So yeah, it was an exciting day for us. Yesterday we had a, a session over in a ballroom where we did a product announce and then in the evening had an unveil in our booth here at the Supercomputing conference, which was pretty eventful: cupcakes, you know, champagne, drinks and, and most importantly, Yeah, I know. Good time. Did
>>You get the invite?
>>No, I, most importantly, some really cool new servers for our customers.
>>Well, tell us about them. Yeah, so what's, what's new? What's in the news?
>>Well, you know, as you think about artificial intelligence and what customers are, are needing to do and the way artificial intelligence is gonna change how, you know, frankly, the world works. We have now developed and designed new purpose-built hardware, new purpose-built servers for a variety of AI and artificial intelligence needs. We launched our first eight-way, you know, Nvidia H100/A100 SXM product. Yesterday we launched a 4U four-way H100 product, and a 2U fully liquid cooled Intel Data Center GPU Max server as well. So, you know, a full range of portfolio for a variety of customer needs, depending on their use cases, what they're trying to do, their infrastructure. We're able to now provide, you know, servers and hardware that help, you know, meet those needs in those use cases.
>>So I wanna double click, you just said something interesting, water cooled.
>>Yeah. So
>>Where does, at what point do you need to move in the direction of water cooling? And, you know, I know you mentioned, you know, GPU centric, but, but, but talk about that, that balance between, you know, density and what you can achieve with the power that's going into the system.
>>It all depends on what the customers are trying to accommodate, right? I, I think that there's a dichotomy that's existing now between customers who have already or are planning liquid cooled infrastructures and power distribution to the rack. So you take those two together, and if you have the power distribution to the rack, you wanna take advantage of the density. To take advantage of the density, you need to be able to cool the servers, and therefore liquid cooling comes into play. Now, you have other customers that either don't have the power to the rack or aren't ready for liquid cooling, and at that point, you know, they're not gonna want to take advantage. They can't take advantage of the density.
So there's this dichotomy in products, and that's why we've got our XE9640, which is a 2U dense liquid cooled, but we also have our XE8640, which is a 4U air cooled, right? Or liquid assisted air cooled, right? So depending on where you are on your journey, whether it's power infrastructure or liquid cooling infrastructure, we've got the right solution for you that, you know, meets your needs. You don't have to take on the density, the expense of liquid cooling, unless you're ready to do that. Otherwise we've got this other option for you. And so that's really the dichotomy that's beginning to exist in our customers' infrastructures today.
>>I was curious about that. So do you see, is there a category or a vertical that is more in the liquid cooling zone, because that's a priority in terms of the density, or
>>Yeah, yeah. I mean, you've got your, your large HPC installations, right? Your large clusters that not only have the power but have, you know, the liquid cooling density that they've built in. You've got, you know, federal government installations, you've got financial tech installations, you've got colos that are built for sustainability and density and space that, that can also take advantage of it. Then you've got others that are, you know, more enterprises, more in the mainstream of what they do, where, you know, they're not ready for that. So it just, it just depends on the scale of the customer that we're talking about and what they're trying to do and, and where they're, and where they're doing it.
>>So we hear, you know, we're here at the Supercomputing conference, and HPC is sort of the kind of trailing mini version of supercomputing in a way, where maybe you have someone who, they don't need 2 million CPU cores, but maybe they need a hundred thousand CPU cores. So it's all a matter of scale. What is, can you identify kind of an HPC sweet spot right now as, as Dell customers are adopting the kinds of things that you just announced?
>>You know, I think
>>How big are these clusters at this
>>Point? Well, let, let me, let me hit something else first. Yeah, I think people talk about HPC as, as something really specific, and what we're seeing now with the, you know, vast amount of data creation, the need for computational analytics, the need for artificial intelligence, HPC is kind of morphing right into, into, you know, more and more general customer use cases. And so where before you used to think about HPC as research and academics and computational dynamics, now, you know, there's a significant Venn diagram overlap with just regular artificial intelligence, right? And, and so that is beginning to change the nature of how we think about HPC. You think about the vast data that's being created. You've got data driven HPC, where you're running computational analytics on this data that's giving you insights or outcomes or information. It's not just, Hey, I'm running, you know, physics calculations or, you know, astronomical calculations. It is now expanding in a variety of ways where it's democratizing into, you know, customers who wouldn't actually talk about themselves as HPC customers. And when you meet with them, it's like, well, yeah, but your compute needs are actually looking like HPC customers. So let's talk to you about these products. Let's talk to you about these solutions, whether it's software solutions, hardware solutions, or even purpose-built hardware, like we're, like we talked about. That now becomes the new norm.
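To put rough numbers on the power-and-density dichotomy discussed above, here is a small, purely illustrative sketch. The node wattages, rack height, and rack power budgets are assumptions invented for the example; they are not Dell specifications for the XE9640, XE8640, or any other product.

```python
# Illustrative only: all wattages and budgets below are assumptions,
# not vendor specs. The point is that the power delivered to the rack,
# not the rack units, usually decides how much GPU density you can use.

RACK_UNITS = 42

node_types = {
    "air-cooled 4U GPU node (assumed 4.5 kW)":    {"kw": 4.5, "ru": 4},
    "liquid-cooled 2U GPU node (assumed 7.0 kW)": {"kw": 7.0, "ru": 2},
}

for budget_kw in (17, 40, 80):              # assumed power available per rack, in kW
    print(f"\nRack power budget: {budget_kw} kW")
    for name, spec in node_types.items():
        fit_by_power = int(budget_kw // spec["kw"])
        fit_by_space = RACK_UNITS // spec["ru"]
        nodes = min(fit_by_power, fit_by_space)
        limit = "power" if fit_by_power < fit_by_space else "rack space"
        print(f"  {name}: {nodes} nodes (limited by {limit})")
```

With the assumed 17 kW feed, both node types are power-limited and the extra density buys little; at the assumed 80 kW feed, the air-cooled node runs out of rack space while the liquid-cooled node can keep absorbing power, which is the tradeoff described in the answer above.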
>>Customer feedback and community engagement is big for you. I know this portfolio of products that was developed based on customer feedback, correct? Yep. >>So everything we do at Dell is customer driven, right? We want to be, we want to drive, you know, customer driven innovation, customer driven value to meet our customer's needs. So yeah, we spent a while, right, researching these products, researching these needs, understanding is this one product? Is it two products? Is it three products? Talking to our partners, right? Driving our own innovation in IP and then where they're going with their roadmaps to be able to deliver kind of a harmonized solution to customers. So yeah, it was a good amount of customer engagement. I know I was on the road quite a bit talking to customers, you know, one of our products was, you know, we almost named after one of our customers, right? I'm like, Hey, this, we've talked about this. This is what you said you wanted. Now he, he was representative of a group of customers and we validated that with other customers and it's also a way of me making sure he buys it. But great, great. Yeah, >>Sharing sales there, >>That was good. But you know, it's heavily customer driven and that's where understanding those use cases and where they fit drove the various products. And, you know, in terms of, in terms of capability, in terms of size, in terms of liquid versus air cooling, in terms of things like number of P C I E lanes, right? What the networking infrastructure was gonna look like. All customer driven, all designed to meet where customers are going in their artificial intelligence journey, in their AI journey. >>It feels really collaborative. I mean, you've got both the intel and the Nvidia GPU on your new product. There's a lot of CoLab between academics and the private sector. What has you most excited today about supercomputing? >>What it's going to enable? If you think about what artificial intelligence is gonna enable, it's gonna enable faster medical research, right? Genomics the next pandemic. Hopefully not anytime soon. We'll be able to diagnose, we'll be able to track it so much faster through artificial intelligence, right? That the data that was created in this last one is gonna be an amazing source of research to, to go address stuff like that in the future and get to the heart of the problem faster. If you think about a manufacturing and, and process improvement, you can now simulate your entire manufacturing process. You don't have to run physical pilots, right? You can simulate it all, get 90% of the way there, which means your, your either factory process will get reinvented factor faster, or a new factory can get up and running faster. Think about retail, how retail products are laid out. >>You can use media analytics to track how customers go through the store, what they're buying. You can lay things out differently. You're not gonna have in the future people going, you know, to test cell phone reception. Can you hear me now? Can you hear me? Now you can simulate where customers are patterns to ensure that the 5G infrastructure is set up, you know, to the maximum advantage. All of that through digital simulation, through digital twins, through media analytics, through natural language processing. Customer experience is gonna be better, communication's gonna be better. 
All of this stuff with, you know, using this data, training it, and then applying it is probably what excites me the most about super computing and, and really compute in the future. >>So on the hardware front, kind of digging down below the, the covers, you know, the surface a little more, Dell has been well known for democratizing things in it, making them available to, at a variety of levels. Never a one size fits all right? Company, these latest announcements would be fair to say. They represent sort of the tip of the spear in terms of high performance. What about, what about rpc regular performance computing? Where's, where's the overlap? Cause you know, we're in this season where we've got AMD and Intel leapfrogging one another, new bus architectures. The, the, you know, the, the connectivity that's plugged into these things are getting faster and faster and faster. So from a Dell perspective, where does my term rpc regular performance computing and, and HPC begin? Are you seeing people build stuff on kind of general purpose clusters also? >>Well, sure, I mean, you can run a, a good amount of artificial acceleration on, you know, high core count CPUs without acceleration, and you can do it with P C I E accelerators and then, then you can do it with some of the, the, the very specific high performance accelerators like that, the intel, you know, data center, Max GPUs or NVIDIAs a 100 or H 100. So there are these scale up opportunities. I mean, if you think about, >>You know, >>Our mission to democratize compute, not just hpc, but general compute is about making it easier for customers to implement, to get the value out of what they're trying to do. So we focus on that with, you know, reference designs or validated designs that take out a good amount of time that customers would have to do it on their own, right? We can cut by six to 12 months the ability for customers in, in, I'm gonna use an HPC example and then I'll come back to your, your regular performance compute by us doing the work us, you know, setting, you know, determining the configuration, determining the software packages, testing it, tuning it so that by the time it gets to the customer, they get to take advantage of the expertise of Dell Engineers Dell Scale and they are ready to go in a much faster point of view. >>The challenge with AI is, and you talk to customers, is they all know what it can lead to and the benefits of it. Sometimes they just dunno how to start. We are trying to make it easier for customers to start, whether it is using regular RPC or you know, non optimized, non specialized compute, or as you move up the value stack into compute capability, our goal is to make it easier for customers to start to get on their journey and to get to what they're trying to do faster. So where do I see, you know, regular performance compute, you know, it's, it's, you know, they go hand in hand, right? As you think about what customers are trying to do. And I think a lot of customers, like we talked about, don't actually think about what they're trying to do as high performance computing. They don't think of themselves as one of those specialized institutions as their hpc, but they're on this glide path to greater and greater compute needs and greater and greater compute attributes that that merge kind of regular performance computing and high performance computing to where it's hard to really draw the line, especially when you get to data driven HPC data's everywhere >>And so much data. 
And it sounds like a lot people are very early in this journey. From our conversation with Travis, I mean five AI programs per very large company or less at this point for 75% of customers, that's pretty wild. I mean you're, you're an educational coach, you're teachers, you're innovating on the hardware front, you're doing everything at Dell. Last question for you. You've been at 24 years, >>25 in this coming march. >>What has a company like that done to retain talent like you for more than two and a half decades? >>You know, for me and I, I, and I'd like to say I had an atypical journey, but I don't think I have right there, there has always been opportunity for me, right? You know, I started off as a quality engineer. A couple years later I'm living in Singapore running or you know, running services for Enterprise and apj. I come back couple years in Austin, then I'm in our Bangalore development center helping set that up. Then I come back, then I'm in our Taiwan development center helping with some of the work out there. And then I come back. There has always been the next opportunity before I could even think about am I ready for the next opportunity? Oh. And so for me, why would I leave? Right? Why would I do anything different given that there's always been the next opportunity? The other thing is jobs are what you make of it and Dell embraces that. So if there's something that needs to be done or there was an opportunity, or even in the case of our AI ML portfolio, we saw an opportunity, we reviewed it, we talked about it, and then we went all in. So that innovation, that opportunity, and then most of all the people at Dell, right? I can't ask to work with a better set of set of folks from from the top on down. >>That's fantastic. Yeah. So it's culture. >>It is culture B really, at the end of the day, it is culture. >>That's fantastic. Raja, thank you so much for being here with us. >>Thank you guys, the >>Show. >>Really appreciate it. >>Questions? Yeah, this was such a pleasure. And thank you for tuning into the Cube Live from Dallas here at Supercomputing. My name is Savannah Peterson, and we'll see y'all in just a little bit.
Travis Vigil, Dell Technologies | SuperComputing 22
>>Howdy y'all, and welcome to Dallas, where we're proud to be live from Supercomputing 2022. My name is Savannah Peterson, joined here by my cohost David on the Cube, and our first guest today is a very exciting visionary. He's a leader at Dell. Please welcome Travis Vigil. Travis, thank you so much for being here.
>>Thank you so much for having me.
>>How you feeling?
>>Okay. I, I'm feeling like an exciting visionary. You
>>Are. That's, that's the idea, that's why we teed you up for that. Great. So, so tell us, Dell had some huge announcements Yes. Last night. And you get to break it to the Cube audience. Give us the rundown.
>>Yeah. It's a really big show for Dell. We announced a brand new suite of GPU enabled servers, eight ways, four ways, direct liquid cooling. Really the first time in the history of the portfolio that we've had this much coverage across Intel, AMD, Nvidia, getting great reviews from the show floor. I had the chance earlier to be in the whisper suite to actually look at the gear. Customers are buzzing over it. That's one thing I love about this show, is the gear is here.
>>Yes, it is. It is a haven for hardware nerds. Yes. Like, like well, I'll include you in this group, it sounds like, on
>>That. Great. Yes. Oh
>>Yeah, absolutely. And I know David is as well, so
>>Oh, big, big time. Big time hardware nerd. And just to be clear, for the kids that will be watching these videos Yes. We're not talking about Alienware gaming systems.
>>No. Right.
>>So they're
>>Yay big, yay tall, 200 pounds.
>>Give us a price point on one of these things. Retail, suggested retail price.
>>Oh, I'm
>>More than 10 grand.
>>Oh, yeah. Yeah. Try another order of magnitude. Yeah.
>>Yeah. So this is, this is the most exciting stuff from an infrastructure perspective. Absolutely. You can imagine. Absolutely. But what is it driving? So talk, talk to us about where you see the world of high performance computing with your customers. What are they, what are they doing with this? What do they expect to do with this stuff in the future?
>>Yeah. You know, it's, it's a real interesting time, and, and I know that the provenance of this show is HPC focused, but what we're seeing and what we're hearing from our customers is that AI workloads and traditional HPC workloads are becoming almost indistinguishable. You need the right mix of compute, you need GPU acceleration, and you need the ability to take the vast quantities of data that are being generated and actually gather insight from them. And so if you look at what customers are trying to do with, you know, enterprise level AI, it's really, you know, how do I classify and categorize my data, but more, more importantly, how do I make sense of it? How do I derive insights from it? Yeah. And so at the end of the day, you know, you look, you look at what customers are trying to do. It's, it's take all the various streams of data, whether it be structured data, whether it be unstructured data, bring it together and make decisions, make business decisions.
>>And it's a really exciting time because customers are saying, you know, the same things that, that, that, you know, research scientists and universities have been trying to do forever with HPC, I want to do it on an industrial scale, but I want to do it in a way that's more open, more flexible, you know, I call it AI for the rest of us. And, and, and customers are here and they want those systems, but they want the ecosystem to support ease of deployment, ease of use, ease of scale.
And that's what we're providing in addition to the systems. We, we provide, you know, Dell's one of the only providers on the on in the industry that can provide not only the, the compute, but the networking and the storage, and more importantly, the solutions that bring it all together. Give you one example. We, we have what we call a validated design for, for ai. And that validated design, we put together all of the pieces, provided the recipe for customers so that they can take what used to be two months to build and run a model. We provide that capability 18 times faster. So we're talking about hours versus months. So >>That's a lot. 18 times faster. I just wanna emphasize that 18 times faster, and we're talking about orders of magnitude and whatnot up here, that makes a huge difference in what people are able to do. Absolutely. >>Absolutely. And so, I mean, we've, you know, you've been doing this for a while. We've been talking about the, the deluge of data forever, but it's gotten to the point and it's, you know, the, the disparity of the data, the fact that much of it remains siloed. Customers are demanding that we provide solutions that allow them to bring that data together, process it, make decisions with it. So >>Where, where are we in the adoption cycle early because we, we've been talking about AI and ML for a while. Yeah. You, you mentioned, you know, kind of the leading edge of academia and supercomputing and HPC and what that, what that conjures up in people's minds. Do you have any numbers or, you know, any, any thoughts about where we are in this cycle? How many, how many people are actually doing this in production versus, versus experimenting at this point? Yeah, >>I think it's a, it's a reason. There's so much interest in what we're doing and so much demand for not only the systems, but the solutions that bring the systems together. The ecosystem that brings the, the, the systems together. We did a study recently and ask customers where they felt they were at in terms of deploying best practices for ai, you know, mass deployment of ai. Only 31% of customers said that they felt that they self-reported. 31% said they felt that they were deploying best practices for their AI deployments. So almost 70% self reporting saying we're not doing it right yet. Yeah. And, and, and another good stat is, is three quarters of customers have fewer than five AI applications deployed at scale in their, in their IT environments today. So, you know, I think we're on the, you know, if, if I, you think about it as a traditional S curve, I think we're at the first inflection point and customers are asking, Can I do it end to end? >>Can I do it with the best of breed in terms of systems? But Dell, can you also use an ecosystem that I know and understand? And I think that's, you know, another great example of something that Dell is doing is, is we have focused on ethernet as connectivity for many of the solutions that we put together. Again, you know, provenance of hpc InfiniBand, it's InfiniBand is a great connectivity option, but you know, there's a lot of care and feeding that goes along with InfiniBand and the fact that you can do it both with InfiniBand for those, you know, government class CU scale, government scale clusters or university scale clusters and more of our enterprise customers can do it with, with ethernet on premises. It's a great option. >>Yeah. You've got so many things going on. I got to actually check out the million dollar hardware that you have just casually Yeah. 
Sitting in your booth. I feel like, I feel like an event like this is probably one of the only times you can let something like that out. Yeah, yeah. And, and people would actually know what it is you're working >>With. We actually unveiled it. There was a sheet on it and we actually unveiled it last night. >>Did you get a lot of uz and os >>You know, you said this was a show for hardware nerds. It's been a long time since I've been at a shoe, a show where people cheer and u and a when you take the sheet off the hardware and, and, and Yes, yes, >>Yes, it has and reveal you had your >>Moment. Exactly, exactly. Our three new systems, >>Speaking of u and os, I love that. And I love that everyone was excited as we all are about it. What I wanna, It's nice to be home with our nerds. Speaking of, of applications and excitement, you get to see a lot of different customers across verticals. Is there a sector or space that has you personally most excited? >>Oh, personally most excited, you know, for, for credibility at home when, when the sector is media and entertainment and the movie is one that your, your children have actually seen, that one gives me credibility. Exciting. It's, you can talk to your friends about it at, at at dinner parties and things like that. I'm like, >>Stuff >>Curing cancer. Marvel movie at home cred goes to the Marvel movie. Yeah. But, but, but you know, what really excites me is the variety of applications that AI is being used, used in healthcare. You know, on a serious note, healthcare, genomics, a huge and growing application area that excites me. You know, doing, doing good in the world is something that's very important to Dell. You know, know sustainability is something that's very important to Dell. Yeah. So any application related to that is exciting to me. And then, you know, just pragmatically speaking, anything that helps our customers make better business decisions excites me. >>So we are, we are just at the beginning of what I refer to as this rolling thunder of cpu. Yes. Next generation releases. We re recently from AMD in the near future it'll be, it'll be Intel joining the party Yeah. Going back and forth, back and forth along with that gen five PCI e at the motherboard level. Yep. It's very easy to look at it and say, Wow, previous gen, Wow, double, double, double. It >>Is, double >>It is. However, most of your customers, I would guess a fair number of them might be not just N minus one, but n minus two looking at an upgrade. So for a lot of people, the upgrade season that's ahead of us is going to be not a doubling, but a four x or eight x in a lot of, in a lot of cases. Yeah. So the quantity of compute from these new systems is going to be a, it's gonna be a massive increase from where we've been in, in, in the recent past, like as in last, last Tuesday. So is there, you know, this is sort of a philosophical question. We talked a little earlier about this idea of the quantitative versus qualitative difference in computing horsepower. Do we feel like we're at a point where there's gonna be an inflection in terms of what AI can actually deliver? Yeah. Based on current technology just doing it more, better, faster, cheaper? Yeah. Or do we, or do we need this leap to quantum computing to, to get there? >>Yeah. I look, >>I think we're, and I was having some really interesting conversations with, with, with customers that whose job it is to run very, very large, very, very complex clusters. And we're talking a little bit about quantum computing. 
Interesting thing about quantum computing is, you know, I think we're, we're a ways off still. And in order to make quantum computing work, you still need to have classical computing surrounding it, right? Number one. Number two, with, with the advances that we're, we're seeing generation on generation with this, you know, what, what has moved from a kind of a three year, you know, call it a two to three year upgrade cycle, to, to something that, because of all of the technology that's being deployed into the industry, is almost a more continuous upgrade cycle. I, I'm personally optimistic that we are on the, the cusp of a new level of infrastructure modernization.
>>And it's not just the, the computing power, it's not just the increases in GPUs. It's not, you know, those things are important, but it's things like power consumption, right? One of the, the, the ways that customers can do better in terms of power consumption and sustainability is by modernizing infrastructure. To your point, a lot of people are, are running N minus one, N minus two. The stuff that's coming out now is, is much more energy efficient. And so I think there's a lot of, a lot of vectors that we're seeing in, in the market, whether it be technology innovation, whether it be a drive for energy efficiency, whether it be the rise of AI and ML, whether it be all of the new silicon that's coming into the portfolio, where customers are gonna have a continuous reason to upgrade. I mean, that's, that's my thought. What do you think?
>>Yeah, no, I think, I think that the, the, the objective numbers that are gonna be rolling out Yeah. That are starting to roll out now and in the near future. That's why it's really an exciting time. Yeah. I think those numbers are gonna support your point. Yeah. Because people will look and they'll say, Wait a minute, it used to be a dollar, but now it's $2. That's more expensive. Yeah. But you're getting 10 times as much Yeah. For half of the amount of power. Boom. And it's, and it's
>>Done. Exactly. It's, it's a
>>TCO. It's, it's a no brainer. It's Oh yeah. You, it gets to the point where it's, you look at this rack of amazing stuff that you have a personal relationship with and you say, I can't afford to keep you plugged in anymore. Yeah.
>>And Right.
>>The power is such a huge component of this. Yeah. It's huge, huge.
>>Our customers, I mean, it's always a huge issue, but our customers, especially in EMEA with what's going on over there, are, are saying, I, you know, I need to upgrade because I need to be more energy efficient.
>>Yeah.
>>Yeah. I, I, we were talking about 20 years from now, so you've been at Dell over 18 years.
>>Yeah. It'll be 19 in May.
>>Congratulations. Yeah. What, what commitment, so 19 years from now, in your, in your second Dell career. Yeah. What are we gonna be able to say then that perhaps we can't say now?
>>Oh my gosh. Wow. 19 years from now.
>>Yeah. I love this as an arbitrary number too. This is great. Yeah.
>>38 year Dell career. Yeah.
>>That might be a record. Yeah.
>>And if you'd like to share the winners of Super Bowls and World Series in advance, like the, the sports almanac from Back to the Future. So we can place our bets, power and the
>>Powerball, but, but any
>>Point being, Yeah. I mean this is what, what, what, what do you think, what's AI gonna deliver in the next decade?
>>Yeah. I, I look, I mean, there are, you know, global issues that advances in computing power will help us solve.
And, you know, the, the models that are being built, the ability to generate a, a digital copy of the analog world and be able to run models and simulations on it is, is amazing. Truly. Yeah. You know, I, I was looking at some, you know, it's very, it's a very simple and pragmatic thing, but I think it's, it, it's an example of, of what could be, we were with one of our technology providers and they, they were, were showing us a digital simulation, you know, a digital twin of a factory for a car manufacturer. And they were saying that, you know, it used to be you had to build the factory, you had to put the people in the factory. You had to, you know, run cars through the factory to figure out sort of how you optimize and you know, where everything's placed. >>Yeah. They don't have to do that anymore. No. Right. They can do it all via simulation, all via digital, you know, copy of, of analog reality. And so, I mean, I think the, you know, the, the, the, the possibilities are endless. And, you know, 19 years ago, I had no idea I'd be sitting here so excited about hardware, you know, here we are baby. I think 19 years from now, hardware still matters. Yeah. You know, hardware still matters. I know software eats the world, the hardware still matters. Gotta run something. Yeah. And, and we'll be talking about, you know, that same type of, of example, but at a broader and more global scale. Well, I'm the knucklehead who >>Keeps waving his phone around going, There's one terabyte in here. Can you believe that one terabyte? Cause when you've been around long enough, it's like >>Insane. You know, like, like I've been to nasa, I live in Texas, I've been to NASA a couple times. They, you know, they talk about, they sent, you know, they sent people to the moon on, on way less, less on >>Too far less in our pocket computers. Yeah. It's, it's amazing. >>I am an optimist on, on where we're going clearly. >>And we're clearly an exciting visionary, like we said, said the gate. It's no surprise that people are using Dell's tech to realize their AI ecosystem dreams. Travis, thank you so much for being here with us David. Always a pleasure. And thank you for tuning in to the Cube Live from Dallas, Texas. My name is Savannah Peterson. We'll be back with more supercomputing soon.
theCUBE Previews Supercomputing 22
(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Controlled Data Corporations, CDC, designed by an engineering team led by Seymour Cray, the father of Supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants, they all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, it's going to probably continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. 
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend, HPC in the cloud reached critical mass at the end of the last decade. And all of the major hyperscalers are providing HPE, HPC as a service capability. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, so if you're designing an island that is, you know, tip of this spear, doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leverage by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box of looking at the Nix, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >> Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. 
They got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, wireless HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. 
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Danny Hillis | PERSON | 0.99+ |
Steve Chen | PERSON | 0.99+ |
NEC | ORGANIZATION | 0.99+ |
Fujitsu | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Steve Wallach | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Steve Frank | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Seymour Cray | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Unisys | ORGANIZATION | 0.99+ |
1997 | DATE | 0.99+ |
Savannah | PERSON | 0.99+ |
Dallas | LOCATION | 0.99+ |
EU | ORGANIZATION | 0.99+ |
Controlled Data Corporations | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Penguin Solutions | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Tuesday | DATE | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
21st century | DATE | 0.99+ |
iPhone 12 | COMMERCIAL_ITEM | 0.99+ |
10 | QUANTITY | 0.99+ |
Cray | PERSON | 0.99+ |
one terabyte | QUANTITY | 0.99+ |
CDC | ORGANIZATION | 0.99+ |
thecube.net | OTHER | 0.99+ |
Lawrence Livermore Labs | ORGANIZATION | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
Kendall Square Research | ORGANIZATION | 0.99+ |
iPhone 14 | COMMERCIAL_ITEM | 0.99+ |
john@siliconangle.com | OTHER | 0.99+ |
$2 million | QUANTITY | 0.99+ |
November 13th | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
over $200 million | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
more than half a billion dollars | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
seven people | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
mid 1960s | DATE | 0.99+ |
three days | QUANTITY | 0.99+ |
Convex | ORGANIZATION | 0.99+ |
70's | DATE | 0.99+ |
SC22 | EVENT | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
late 80's | DATE | 0.98+ |
80's | DATE | 0.98+ |
ES7000 | COMMERCIAL_ITEM | 0.98+ |
today | DATE | 0.98+ |
almost $2 million | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
20 years later | DATE | 0.98+ |
tens of millions of dollars | QUANTITY | 0.98+ |
Sunday | DATE | 0.98+ |
Japanese | OTHER | 0.98+ |
90's | DATE | 0.97+ |
SuperComputing Intro | SuperComputing22
>>Hello everyone. My name is Savannah Peterson, coming to you from the Cube Studios in Palo Alto, California. We're gonna be talking about Supercomputing, an event coming up in Dallas this November. I'm joined by the infamous John Furrier. John, thank you for joining me today. >>Great to see you. You look great. >>Thank you. You know, I don't know if anyone's checked out the conference colors for supercomputing, but I happen to match the accent pink and you are rocking their blue. I got the, so on >>There it is. >>We don't always tie our fashion to the tech, ladies and gentlemen, but we're, we're a new crew here at, at the Cube and I think it should be a thing that we, that we do moving forward. So John, you are a veteran and I'm a newbie to Supercomputing. It'll be my first time in Dallas. What can I expect? >>Basically it's a hardware nerd fest, basically of the top >>Minds. So it's like CES? >>It's like CES for like, like hardware. It's like really the coolest show if you're into like high performance computing, I mean game changing kind of, you know, physics, laws of physics and hardware. This is the show. I mean this is like the confluence of, it's, it's really old. It started when I graduated college, 1988. And back then it was servers, you know, supercomputing was a concept. It was usually a box and it was hardware, a big machine. And it would crank out calculations, simulations, and, and you know, you were limited to the processor and all the, the systems components, just the architecture, system software. I mean it was technical, it was, it was, it was hardware, it was fun. Very cool back then. But you know, servers got bigger and you got grid computing, you got clusters, and then it really became the high performance computing concept. But that's now multiple disciplines, hence it's been around for a while. It's evergreen in the sense it's always changing, attracting talent, students, mentors, scholarships. It's kind of big funding and big companies are behind it. Hewlett Packard Enterprise, Dell, computing startups, and hardware matters more than ever. You look at the cloud, what Amazon and, and the cloud hyperscalers are doing, they're building the fastest chips down at the root level. Hardware's back. And I think this show's gonna show a lot of that. >>There isn't the cloud without hardware to support it. So I think it's important that we're all headed here. You, you touched on the evolution there from supercomputing in the beginning and complex calculations and processing to what we're now calling high performance computing. Can you go a little bit deeper? What is, what does that mean? What does that cover? >>Well, I mean high performance computing now is a range of different things. So the supercomputing piece, that's like a thing now. You got clusters and grids, it's distributed, you got a backbone, it's well architected and there's a lot involved. There's networking and security, there's system software. So now it's multiple disciplines in high performance computing and you can do a lot more. And now with cloud computing you can do simulations, say drug research or drug testing. You can do all kinds of calculations, genome sequencing. I mean the, the, the ability to actually use compute right now is so awesome. The field, you know, is rebooting itself in real time, you know, pun intended. So it's really, it's a really good thing. More compute makes things go faster, especially with more data. So HPC encapsulates all the, the engineering behind it.
A lot of robotics coming in the future. All this is gonna be about the edge. You're seeing a lot more hardware making noise around things that are new use cases. You know, your Apple watch that's, you know, very high functionality, to a cell tower. Cars, again, high performance computing hits all these new use cases. >>Yeah, it absolutely does. I mean high performance computing touches pretty much every aspect of our lives in some capacity at this point, including how we drive our cars to, to get to the studio here in Palo Alto. Do you think that we're entering an era when all of this is about to scale exponentially versus some of the linear growth that we've seen in the space due to the frustration of some of us in the hardware world the last five to 10 years? >>Well, it's a good question. I think everyone has, has seen Moore's law, right? They've seen, you know, that's been, been well documented. I think the world's changing. You're starting to see the trend of more hardware that's specialized, like DPUs are now out there. You got GPUs, you're seeing, you know, bolt-on hardware accelerators, you got layers of software abstraction. So essentially it's, it's a software industry that's impacted the hardware. So hardware really is software too, and there's a lot more software in there. Again, system software's a lot different. So I think it's, it's boomeranging back up. I think there's an inflection point because if you look at cyber security and physical devices, they all kind of play in this world where they need compute at the edge. Edge is gonna be a big use case. You can see Dell Technologies there. I think they have a really big opportunity to sell more hardware. Hewlett Packard Enterprise, others, these are old school >>Box companies. >>So I think the distributed nature of cloud and hybrid and multi-cloud coming, on earth and in space, means a lot more high performance computing will be sold and implemented. So that's my take on it. I just think I'm very bullish on this space. >>Ah, yes. And you know me, I get really personally excited about the edge. So I can't wait to see what's in store. Thinking about the variety of vendors and companies, I know we see some of the biggest players in the space. Who are you most excited to see in Dallas coming up in November? >>You know, Hewlett Packard Enterprise, you look back, formerly HP, has always been huge on HPC. Dell and HPE, this is their bread and butter. They've been making servers from minicomputers to Intel-based servers, now to Arm-based servers, and building their own stuff. So you're gonna start to see a lot more of those players kind of transforming. We're seeing both Dell and HPE transforming, and you're gonna see a lot of chip companies there. I'm sure you're gonna see a lot more younger talent, a lot, a lot of young talent coming in, like I said, robotics, and the new physical world we're living in is software and IP connected. So it's not like the old school operational technology systems. You have, you know, IP enabled devices, and that opens up all kinds of new challenges around security vulnerabilities and also capabilities. So I think it's gonna be a lot younger crowd I think than we usually see this year. And you're seeing a lot of students, and again, universities participating. >>Yeah, I noticed that they have a student competition that's a, a big part of the event. I'm curious, when you say younger, are you expecting to see new startups and some interesting players in the space that maybe we haven't heard of before?
>>I think we might see more use cases that are different. When I say younger, I don't mean so much on the demographic, but younger, new ideas, right? So I think you're gonna see a lot of smart people coming in that might not have the, you know, the, the lens from when it started in 1988, and remember, 1988 to now, so much has changed. In fact we just did a segment on theCUBE called Does Hardware Matter, because for many, many years, over the past decades, it was like hardware doesn't matter, it's all about the cloud and we're not a box company. Boxes are coming back. So you know, that's gonna be music to the ears of Dell Technologies, HPE, the world. But like, hardware does matter, and you're starting to see that here. So I think you'll see a lot of younger thinking, a little bit different thinking. You're gonna start to see more confluence of like machine learning. You're gonna see security, and again, I mentioned space. These are areas where you're starting to see where hardware and high performance is gonna be part of all the new systems. And industrial IoT is gonna be a big part too. >>Yeah, absolutely. I, I was thinking about some of these use cases, I don't know if you heard about the new drones they're sending up into hurricanes, but it takes, literally, what an edge use case, how durable it has to be and the rapid processing that has to happen as a result of the software. So many exciting things we could dive down the rabbit hole with. What can folks expect to see here on theCUBE during supercomputing? >>Well, we're gonna talk to a lot of the leaders on theCUBE from this community, mostly from the practitioner's side, the expert side. We're gonna have, we're gonna hear from Dell Technologies, Hewlett Packard Enterprise and a lot of other executives who are investing. We wanna find out what they're investing in, how it ties into the cloud. Cuz the cloud has become a great environment for multi-cloud with more grid-like capability. And what's the future? Where's the hardware going, what's the evolution of the components? How is it being designed? And then how does it fit into the overall open source software market that's booming right now, that cloud technology has been driving. So we wanna try to connect the dots on theCUBE. >>Great. So we have a very easy task ahead of us. Hopefully everyone will enjoy the content and the guests that we're bringing to our table here from the show floor. When we think about it, do you think there's gonna be any trends that we've seen in the past that might not be there? Has anything phased out of the supercomputing world? You're someone who's been around this game for a while. >>Yeah, that's a good question. I think the game is still the same but the players might shift a little bit. So for example, with the supply chain challenges, you might see that impact. We're gonna watch that very closely to find out what components are gonna be in what. But I'm thinking more about system architecture, because the use cases are interesting. You know, I was talking to Dell folks about this, you know, they have standard machines, but then they have use cases for how do you put the equivalent of a data center next to, say, a mobile cell tower, because now you have the capability for wireless and 5G. You gotta put data center-like capability, speed, functionality and capacity for compute at these edges in a smaller form factor. How do you do that?
How do you handle all the IO? And that's gonna be, all these, all these things are, again, nerdy conversations, but they're gonna be very relevant. So I like the new use cases of powering more compute in places that they've never been before. So I think that to me is where the exciting part is. Like okay, who's got the, who's really got the real deal going on here? That's gonna be the fun part. >>I think it allows for a new era in innovation, and I don't say that lightly, but when we can put processing power literally anywhere, it certainly thrills the minds of hardware nerds. Like me. I'm OG hardware, I know you are too, I won't reveal your roots, but I got my, my start in hardware product design back in the day. So I can't wait. >>Well, you then, you know, you know hardware. When you talk about processing power and memory, you can never have enough compute and memory. It's like, it's like internet bandwidth. You can never have enough bandwidth. Bandwidth, right? Network power, compute power, you know, bring it on, you know. >>Even battery life, simple things like that when it comes to hardware, especially when we're talking about being on the edge. It's just like our cell phones. Our cell phones are an edge device. >>Well, when you combine cloud, on premises, hybrid and then multi-cloud and edge, you now have the ability to get compute capabilities that were never fathomable in the past. And most of the creativity is limited to the hardware capability, and now that's gonna be unleashed. I think a lot of creativity. That's again back to the use cases, and yes, again, you're gonna start to see more industrial stuff come out at the edge, and I, I, I love the edge. I think this is a great use case for the edge. >>Me too. Absolutely. So, bold claim. I don't know if you're ready to, to draw a line in the sand. Are we on the precipice of a hardware renaissance? >>Definitely, no doubt about it. When we, when we did the Does Hardware Matter segment, it was really kind of to test, you know, everyone's talking about the cloud, but cloud also runs on hardware. You look at what AWS is doing, for instance, all the innovation, it's at robotics, it's at the physical level, you know, you got physics. I mean they're working on such low level engineering, and the speed difference. I think from a workload standpoint, whoever can get the best out of the physics and the materials will have a winning formula. Cause you can have a lot more processing, specialized processors. That's a new system architecture. And so to me, definitely HPC, high performance computing, fits perfectly into that construct, because now you got more power so that software can be more capable. And I think at the end of the day, nobody wants to write an app or workload to run on bad hardware and not have enough compute. >>Amen to that. On that note, John, how can people get in touch with you and us here on the show in anticipation of supercomputing? >>Of course, hit theCUBE handle, @theCUBE, or @Furrier, my last name, F U R R I E R. And of course my DMs are always open for scoops and story ideas. And go to siliconangle.com and thecube.net. >>Fantastic. John, I look forward to joining you in Dallas, and thank you for being here with me today. And thank you all for joining us for this supercomputing preview. My name is Savannah Peterson and we're here on theCUBE, live. Well, not live, prerecorded from Palo Alto. And look forward to seeing you for some high performance computing excitement soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Dallas | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
1988 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Hewlett Packer Enterprise | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
H WL Packard Enterprise | ORGANIZATION | 0.99+ |
November | DATE | 0.99+ |
hpc | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
today | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Packard Enterprise | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Cube | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
first time | QUANTITY | 0.97+ |
hpe | ORGANIZATION | 0.95+ |
this year | DATE | 0.95+ |
CES | EVENT | 0.94+ |
10 years | QUANTITY | 0.92+ |
earth | LOCATION | 0.9+ |
Bolton | ORGANIZATION | 0.87+ |
AEG | ORGANIZATION | 0.85+ |
5g | QUANTITY | 0.85+ |
Cube Studios | ORGANIZATION | 0.81+ |
Furrier | ORGANIZATION | 0.81+ |
five | QUANTITY | 0.81+ |
Moore | PERSON | 0.78+ |
Intel | ORGANIZATION | 0.75+ |
cube.net | OTHER | 0.74+ |
this November | DATE | 0.71+ |
silicon angle.com | OTHER | 0.71+ |
past decades | DATE | 0.63+ |
Democratic | ORGANIZATION | 0.55+ |
John Frey & Terry Richardson | Better Together Sustainability
(upbeat music) >> Sustainability has become one of the hottest topics, not just in enterprise tech, but across all industries. The relentless pace of technology improvement over the decades and orders of magnitude increases in density have created heat, power and cooling problems that are increasingly challenging to remediate. Intense efforts have been implemented over the years around data center design techniques to dissipate heat, use ambient air, liquid cooling and many other approaches that have been brought to bear to get power usage effectiveness, PUE, as close to one as possible. Welcome to Better Together Sustainability, presented by the CUBE and brought to you by Hewlett Packard Enterprise and AMD. In this program we'll lay out today's challenges and how leading companies are engineering solutions to the problems just introduced, along with some recommendations, best practices and resources as to how you can initiate or enhance your sustainability journey. First up to help us better understand this important topic are John Fry, senior technologist IT efficiency and sustainability at Hewlett Packard Enterprise and Terry Richardson, North America channel chief for AMD. Gents, welcome. >> Great to be here. >> (indistinct). >> John, let's start at the high level here. Why is sustainability such an important topic today? Why now? Why is it such a challenge for customers and, and how are you guys approaching the solutions? >> The topic has been an important topic for a number of years, but what we're seeing across the world is more and more corporations are putting in place climate targets and sustainability goals. And at the same time, boards and CEOs are starting to be asked about the topic as well, making this topic much more important for technology leaders across the globe. At the same time, technology leaders are fighting with space, power and cooling constraints that caused them to rethink their approach to IT. To get a better sense of how wide this challenge is, we did a survey last year and we asked 500 technology leaders across the globe if they were implementing sustainable IT goals and metrics and programs within their infrastructure. Personally, I thought the answer would be about 40% of them had these programs. Actually it turned out to be 96% of them. And so when we asked them why they were implementing these programs and what was the primary driver, what we heard from them was three things. Those of them that were the early adopters and the ones that move were moving the fastest told us they were putting these programs in place to attract and retain institutional investors. If they're a publicly traded company, their investors were already asking their boards, their CEOs, wanting to know what their company was doing to drive efficiency within their technology operations. Those companies in the middle, the ones that were just moving along at the same pace as many other companies around the world, told us they were putting these programs in place to attract and retain their customers. Customers are increasingly asking the companies they do business with about their sustainability aspirations specifically how technology contributes to their carbon emissions and their sustainability goals. And so these customers want to make sure that they can keep their own customers. And finally, a third group, the digital followers, that group of companies that's a little slower adopting programs, more conservative in nature. 
They said they were implementing these programs to attract and retain employees. In fact, over the last year or so, every customer we've talked to when they describe their pain points and their challenges that we can try to help them meet, has had a difficulty in finding employees. And so what we know is these younger employees coming into the workforce, if you can show them how what they are going to be doing connects to the purpose of their company and connects to making the world a better place, you can attract them easier and you can retain them longer. So a variety of business reasons why companies are looking at these programs, but what we know is when they implement these programs they often reduce over-provisioning. They save money, they have a lower environmental footprint, and again they have an easier time attracting and retaining employees. So for all of these reasons, driving sustainability into your IT operations is a great thing to do. >> Yeah, I never would've expected 96%. And of course, investors, customers and employees. I mean, this is the big three. Terry what's AMD's perspective on this topic? In other words, what do you bring to the table and the partnership? I mean, I know processors, but what's unique about AMD's contribution? >> Yeah. Thanks Dave. And, and John, great to be with you. Appreciate the opportunity and the partnership. You know, we too are very focused on sustainability and enjoy our partnership with HPE very much in this area. You know, since 2017, when AMD introduced its epic processor family, there's been a couple of core design elements in that technology. One has to do with performance. And the second has to do with efficiency. Both are critically important to today's topic of sustainability because increasingly, customers are understanding and measuring performance per watt and fortunately, AMD really excels in this area. So whether we're talking about the larger super computers in the world, or even general purpose servers, customers can fundamentally do more with fewer AMD servers than competitive alternatives. And so, so we, we really bring a technology element on the processor side, CPU and GPU, to play a role in delivering real ability for customers to meet some of their core sustainability goals. And of course, in partnership with HPE, together we have really a compelling story. >> Great. Thank you, Terry. And, and John, wonder if you could talk to the differentiation that you bring from HPE's perspective, the total package. >> Yeah, of course. The first thing as partnership. As Terry mentioned, AMD and HPE have been working together since HPE was founded actually, to drive power efficiency up to meet the demands of our customers. At the same time, as our customers have asked more and more questions around technology sustainability, we've realized that we needed to not only develop a point of view on that from an HPE perspective, but actually write the white papers that give the customer guidance for sustainable IT strategies, for tech refresh cycles, give them some guidance on what are the right questions to ask technology vendors when they're buying technology equipment. So a series of white papers and you might not appreciate why, but this is a topic that you can't go get a college degree in and frankly can't even buy a book on. So for customers to get that knowledge, they want to get it from experienced professionals around the globe. 
And in fact, in the survey that I mentioned earlier, we asked customers, where's the number one place that you expect to get your sustainable IT information from? And they said, our technology vendors. So for us, it's really about driving that point of view, sharing it with customers, helping customers get better and even pointing out some of the unintended consequences. So a great example, Dave, you mentioned PUE earlier. Many customers have been driving PUE down for a number of years, but often the way that they did that was optimizing the data center building infrastructure. They got PUE pretty low. Now, one of the things that happens and customers need to be aware of this, particularly if they're focused on PUE as their primary metric, is when they optimize their IT stack and make that smaller, PUE actually goes up. And at first they think, well, wait a minute, that metric is going in the wrong direction. But when you remember it's a ratio, if you get that IT stack component smaller, then you're driving efficiency even if PUE goes the wrong direction. So part of the conversation then is you might want to look at PUE internally, but perhaps you've outgrown PUE and now have an opportunity to look at other metrics like carbon emissions per workload, or or power consumption per piece of equipment or rack. So all of this drives back to that upward trajectory that Terry was talking about where customers are really interested in power performance. So as we share those stories with customers, share the expertise how to move along this journey, that really provides great differentiation for HPE and AMD together. >> So that's interesting. So PUE is not necessarily the holy grail metric. There are other metrics that you, you should look at. Number one, and number two the way you interpret PUE is changing for the better. So thank you for that context. I wonder Terry, do, do you have any like proof points or examples that you can share? >> Yeah, so one that immediately comes to mind that was a manifestation of some terrific collaboration between AMD and HPE was their recently announced implementation of the Frontier supercomputer. That was a project that we collaborated on for a long time. And, and where we ended up was turning over to the government a supercomputer that is currently the highest performing in the world, broke the exaFLOP barrier. And probably even more importantly is number one on the Green500 list of the top super computers. And, and together we enjoy favorable rankings in other systems, but that's the one that, that really stands out in terms of at scale implementation to shine really a spotlight on what we can do together. Certainly for other customers doesn't have to be the world's largest super computer. It's not uncommon that we see customers just kind of in general purpose business applications in their data centers to be able to do more with less, you know, meaning, you know, you know a third of the servers oftentimes delivering not only a very strong TCO but the environmental benefit that gets associated with significantly reduced energy that can be expressed in reduction in, in overall CO2 emissions and other, other ways to express the benefit, whether it's, you know the equivalent of, of planting you know, acres of forest or whatever. So we're really proud of the proof points that we have and and look forward to the opportunity to together explain this more fully to customers and partners. >> Right? So John, Terry sort of alluded to this being more broad based. 
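To make the PUE point above concrete: PUE is the ratio of total facility power to IT equipment power, so shrinking the IT stack can push the ratio up even while total consumption falls. The small sketch below is only illustrative; the pue() helper and the kilowatt figures are hypothetical, chosen to show the effect John describes, not measurements from HPE or any customer.

```python
# Illustrative PUE calculation; all figures are hypothetical.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Before consolidation: 1,000 kW of IT gear plus 400 kW of cooling and overhead.
before = pue(total_facility_kw=1400.0, it_equipment_kw=1000.0)

# After consolidating onto fewer, more efficient servers, IT load drops to 600 kW,
# but the facility overhead only drops to 300 kW.
after = pue(total_facility_kw=900.0, it_equipment_kw=600.0)

print(f"PUE before: {before:.2f}")       # 1.40
print(f"PUE after:  {after:.2f}")        # 1.50, the ratio looks worse
print("Total draw:  1400 kW -> 900 kW")  # yet overall consumption fell about 36%
```

This is why the conversation shifts to metrics like carbon emissions per workload or power per rack once the IT stack itself has been optimized.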
I know HPE has a very strong focus on HPC. Sorry for all the acronyms, but high performance computing. But the, so this is more broad based than just the super computing business, right John? >> Yeah, absolutely. We see these performance benefits for customers and industry standard servers as well. In fact, many customers, that's the primary type of equipment they use and they want better power performance. They either want to as Terry alluded to, use less equipment to do the same amount of work, or if they've run into a space or power or cooling constraint in their data centers, they want to be able to increase workloads in the same footprint. So it allows them to take better use of their data centers. And for some customers even the data center enclosure that they started with they can actually use a much smaller amount of space. In fact, we have some that even move over to co-location facilities as they improve that performance per watt, and can do more work in the smaller space. So it starts an industry standard server, but increasingly we're seeing customers considering liquid cooling solutions and that generally moves them into the high performance compute space as well right now. So those performance improvements exist across that entire spectrum. >> So since you brought up liquid cooling John, I mean can you share any best practices? I mean, like what do you do with all that heated liquid? >> Yeah, it's a great, great question. And we have seen a lot more interest from customers in liquid cooling and there's a variety of things that you can do, but if you're considering liquid cooling the opportunities to think broader than just the IT stack. So if you're going to use a cooling loop anyhow and you're going to generate warm liquid coming off the it equipment as waste, think about what you can do with that. We have a, a government customer here in the United States that designed their high performance computer while they were designing the building it went in. So they're able to use that hot air, hot water, excuse me coming off the IT equipment to heat the entire building. And that provides a great use of that warm water. In many parts of the world, that warm water can either be used on a hot water utility grid or it can even be used on a steam grid if you can get it warm enough. Other places we're aware of customers (indistinct) and greenhouses next to data centers and using both the warm air and the warm water from the data center to heat the greenhouse as well. So we're encouraging customers to take a step back, look at the entire system, look at anything coming out of that system that once was waste and start to think about how can we use that what was waste now as an input to another process. >> Right, that's system thinking and some, some pragmatic examples there. Can, can you each summarize, maybe start Terry, with you AMD's and HPE's respective climate goals that may, Terry then John chime in please. >> Yeah, I'll go first. We actually have four publicly stated goals. The first one is I think very aggressive but we've got a track record of doing something similar in our client business. And, and so kind of goal number one is a 30 X increase in energy efficiency for AMD processors and accelerators powering servers for AI and HPC by 2025. The second is broad based across the corporation is a 50% absolute reduction in greenhouse gas emissions from AMD operations by 2030. 
And then the third is 100% of AMD manufacturing suppliers will have published greenhouse gas emissions reduction goals by 2025. And we've declared that 80% or greater of our manufacturing suppliers will source renewable energy by 2025. Those are the, those are the four big publicly stated goals and objectives that we have in this area. >> You know what I like about those Terry? A lot of, a lot of these sustainability goals these moonshot goals is like by 2050, it's like, okay. But I, I like the focus on '25 and then of course there's one in there at the end of the decade. All right, John, maybe you could share with us HPE's approach. >> Yeah, absolutely. And we've had almost two decades of emissions reduction goals and our current goals, which we accelerated by 10 years last year, are to be carbon new or excuse me, net zero by 2040. And that's a science based target-approved goal. In fact, one of the first in the world. And we're doing that because we believe that 2050 is too long to wait. And so how we reach that net zero goal by 2040, is by 2030, an interim step is to reduce our scopes one and two, our direct and energy related emissions by 70% from 2020. And that includes sourcing 100% renewable energy across all of our operations. At the same time, the bigger part of our footprint is in our supply chain and when our customers use our products, so we're going to leverage our as a service strategy HPE GreenLake and our energy efficient portfolio of products to reduce our scope three carbon emissions 42% over that 2020 baseline by 2030, and as with AMD as well, we have a goal to have 80% of our suppliers by spend have their own science based targets so that we know that their commitments are scientifically validated. And then the longer step, how we reach net zero by 2040 is by reducing our entire footprint scopes one, two and three by 90% and then balance the rest. >> Yeah. So again, I mean, you know 2030 is only eight years away, a little more. And so if, if, if you have a, a target of 2030 you have to figure out, okay, how are you going to get there? The, if you say, you know, longer, you know in the century you got this balloon payment, you know that you're thinking about. So, so great job, both, both companies and and really making more specific goals that we can quantify you know, year by year. All right, last question, John. Are there any resources that you can share to help customers, you know, get started maybe if they want to get started on their own sustainability strategy or maybe they're part way through and they just want to see how they're doing. >> Yeah, absolutely much of what Terry and I have talked about are available in an executive workbook that we wrote called "Six Steps For Implementing a Sustainable IT Strategy" and that workbook's freely available online and we'll post the URL so that you can get a copy of it. And we really developed that workbook because what we found is, although we had white papers on a variety of these topics, executives said we really need a little bit more specific steps to work through this and implement that sustainable IT strategy. And the reason for that, by the way is that so many of our customers when they start this sustainable IT journey, they take a a variety of tactical steps, but they don't have an overarching strategy that they're really trying to drive. And often they don't do things like bring all the stakeholders they need together. Often they make improvements without measuring their baseline first. 
So in this workbook, we lead them step by step how to gather the right resources internally, how to make the progress, talk about the progress in a credible way, and then make decisions on where they go next to drive efficiencies. >> Yeah, really that system thinking is, is, is critical. Guys. Thanks so much for your time. Really appreciate it. >> Thank you. >> Okay guys, thanks for your time today. I really appreciate it. In a moment, We're going to toss it over to Lisa Martin out of our Palo Alto studio and bring in Dave Faffel, chief technology officer at WEI, along with John and Terry, to talk about what WEI is doing in this space to address sustainability challenges. You're watching Better Together Sustainability brought to you by HPE and AMD in collaboration with the CUBE, your leader in enterprise and emerging tech coverage. (lilting music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Terry | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave Faffel | PERSON | 0.99+ |
John Fry | PERSON | 0.99+ |
WEI | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
2030 | DATE | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
2050 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
96% | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Terry Richardson | PERSON | 0.99+ |
50% | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
42% | QUANTITY | 0.99+ |
2040 | DATE | 0.99+ |
Six Steps For Implementing a Sustainable IT Strategy | TITLE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
90% | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
three things | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
eight years | QUANTITY | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
both companies | QUANTITY | 0.98+ |
500 technology leaders | QUANTITY | 0.98+ |
First | QUANTITY | 0.98+ |
John Frey | PERSON | 0.98+ |
Karan Batta, Kris Rice | Supercloud22
(upbeat music) >> Welcome back to Supercloud22, #Supercloud22, this is Dave Vellante. In 2019, Oracle and Microsoft announced a collaboration to bring interoperability between OCI, Oracle Cloud Infrastructure and Azure clouds. It was Oracle's initial foray into so-called multi-cloud and we're joined by Karan Batta, who's the vice president for product management at OCI, and Kris Rice, is the vice president of software development at Oracle database. And we're going to talk about how this technology's evolving and whether it fits our view of what we call, Supercloud. Welcome, gentlemen. Thank you. >> Thanks for having us. >> Thanks for having us. >> So you recently just last month announced the new service. It extends on the initial partnership with Microsoft Oracle Interconnect with Azure, and you refer to this as a secure private link between the two clouds across 11 regions around the world. Under two milliseconds data transmission, sounds pretty cool. It enables customers to run Microsoft applications against data stored in Oracle databases without any loss in efficiency or presumably performance. So we use this term Supercloud to describe a service or sets of services built on hyperscale infrastructure that leverages the core primitives and APIs of an individual cloud platform, but abstracts that underlying complexity to create a continuous experience across more than one cloud. Is that what you've done? >> Absolutely. I think, you know, it starts at the, you know, at the top layer in terms of, you know, just making things very simple for the customer, right. I think at the end of the day we want to enable true workloads running across two different clouds, where you're potentially running maybe the app layer in one and the database layer or the back in another, and the integration I think, starts with, you know, making it ease of use. Right? So you can start with things like, okay can you log into your second or your third cloud with the first cloud provider's credentials? Can you make calls against another cloud using another cloud's APIs? Can you peer the networks together? Can you make it seamless? I think those are all the components that are sort of, they're kind of the ingredients to making a multi-cloud or Supercloud experience successful. >> Oh, thank you for that, Karan. So, I guess as a question for Kris is trying to understand what you're really solving for, what specific customer problems are you focused on? What's the service optimized for presumably its database but maybe you could double click on that. >> Sure. So, I mean, of course it's database so it's a super fast network so that we can split the workload across two different clouds leveraging the best from both, but above the networking, what we had to do is we had to think about what a true multi-cloud or what you're calling Supercloud experience would be. It's more than just making the network bytes flow. So what we did is, we took a look as Karan hinted at, right? Is where is my identity? Where is my observability? How do I connect these things across how it feels native to that other cloud? >> So what kind of engineering do you have to do to make that work? It's not just plugging stuff together. Maybe you could explain in a little bit more detail, the resources that you had to bring to bear and the technology behind the architecture? >> Sure. >> I think, you know, it starts with actually, you know, what our goal was, right? Our goal was to actually provide customers with a fully managed experience. 
What that means is we had to basically create a brand new service. So, you know, we have obviously an Azure like portal and an experience that allows customers to do this but under the covers, we actually have a fully managed service that manages the networking layer that the physical infrastructure, and it actually calls APIs on both sides of the fence. It actually manages your Azure resources, creates them, but it also interacts with OCI at the same time. And under the covers this service actually takes Azure primitives as inputs, and then it sort of like essentially translates them to OCI action. So, so we actually truly integrated this as a service that's essentially built as a PaaS layer on top of these two clouds. >> So, so the customer doesn't really care, or know, maybe they know, coz they might be coming through, you know, an Azure experience, but you can run work on either Azure and or OCI, and it's a common experience across those clouds, is that correct? >> That's correct. So, like you said, the customer does know that they know there is a relationship with both clouds but thanks to all the things we built there's this thing we invented, we created called a multi-cloud control plane. This control plane does operate against both clouds at the same time to make it as seamless as possible so that maybe they don't notice, you know, the power of the interconnect is extremely fast networking, as fast as what we could see inside a single cloud, if you think about how big a data center might be from edge to edge in that cloud. Going across the interconnect makes it so that that workload is not important that it's spanning two clouds anymore. >> So you say extremely fast networking. I remember I used to, I wrote a piece a long time ago. Hey, Larry Ellison loves InfiniBand. I presume we've moved on from them, but maybe not. What is that interconnect? >> Yeah, so it's funny, you mentioned interconnect, you know, my previous history comes from HPC where we actually inside inside OCI today, we've moved from, you know, InfiniBand as its part of Exadata's core, to what we call RoCEv2. So that's just another RDMA network. We actually use it very successfully, not just for Exadata but we use it for our standard computers, you know, that we provide to, you know, high performance computing customers. >> And the multi-cloud control plane, runs... Where does that live? Does it live on OCI? Does it live on Azure? Yes? >> So it does. It lives on our side. >> Yeah. >> Our side of the house, and it is part of our Oracle OCI control plane. And it is the veneer that makes these two clouds possible so that we can wire them together. So it knows how to take those Azure primitives and the OCI primitives and wire them at the appropriate levels together. >> Now I want to talk about this PaaS layer. Part of Supercloud, we said, to actually make it work you're going to have to have a super PaaS. I know, we're taking this term a little far but it's still, it's instructive in that, what we, what we surmised was, you're probably not going to just use off the shelf, plain old vanilla PaaS, you're actually going to have a purpose built PaaS to solve for the specific problem. So, as an example, if you're solving for ultra low latency, which I think you're doing, you're probably, no offense to my friends at Red Hat, but you're probably not going to develop this on OpenShift, but tell us about that, that PaaS layer or what we call the super PaaS layer. >> Go ahead, Kris. >> Well, so you're right. 
We weren't going to build it out on OpenShift. So we have Oracle OCI, you know, the standard is Terraform. So the back end of everything we do is based around Terraform. Today, what we've done, is we built that control plane and it will be API drivable. It'll be drivable from the UI and it will let people operate and create primitives across both sides. So you can, you, you mentioned developers developers love automation, right? Because it makes our lives easy. We will be able to automate a multi-cloud workload, from ground up, Config is code these days. So we can Config an entire multi-cloud experience from one place. >> So, double click Kris on that developer experience, you know, what is that like? They're using the same tool set irrespective of, you know, which cloud we're running on is, is it and it's specific to this service or is it more generic across other Oracle services? >> There's two parts to that. So one is the, we've only onboarded a portion. So the database portfolio and other services will be coming into this multi-cloud. For the majority of Oracle cloud the automation, the Config layer is based on Terraform. So using Terraform, anyone can configure everything from a mid tier to an Exadata, all the way soup to nuts from smallest thing possible to the largest. What we've not done yet is is integrated truly with the Azure API, from command line drivable, that is coming in the future. It will be, it is on the roadmap. It is coming, then they could get into one tool but right now they would have half their automation for the multi-cloud Config on the Azure tool set and half on the OCI tool set. >> But we're not crazy saying from a roadmap standpoint that will provide some benefit to developers and is a reasonable direction for the industry generally but Oracle and, and, and Microsoft specifically? >> Absolutely. I'm a developer at heart. And so one of the things we want to make sure is that developers' lives are as easy as possible. >> And, and is there a Metadata management layer or intelligence that you've built in to optimize for performance or low latency or cost across the, the respective clouds? >> Yeah, definitely. I think, you know, latency's going to be an important factor. You know, the, the service that we've initially built isn't going to serve, you know, the sort of the tens of microseconds but most applications that are sort of in, you know, running on top of, the enterprise applications that are running on top of the database are in the several millisecond range. And we've actually done a lot of work on the networking pairing side to make sure that when we launch, when we launch these resources across the two clouds we actually pick the right trial site, we pick the right region, we pick the right availability zone or domain. So we actually do the due diligence under the cover, so the customer doesn't have to do the trial and error and try to find the right latency range, you know, and this is actually one of the big reasons why we only launched this service on the interconnect regions. Even though we have close to, I think, close to 40 regions at this point in OCI, this, this, this service is only built for the regions that we have an interconnect relationship with with Microsoft. >> Okay. So, so you've, you started with Microsoft in 2019 you're going deeper now in that relationship, is there is there any reason that you couldn't, I mean technically what would you have to do to go to other clouds? 
Would you just, you talked about understanding the primitives and leveraging the primitives of Azure. Presumably if you wanted to do this with AWS or Google or Alibaba, you would have to do similar engineering work, is that correct? Or does what you've developed just kind of pour it over to any cloud? >> Yeah, that's, that's absolutely correct, Dave, I think, you know, Kris talked a lot about kind of the multi-cloud control plane, right? That's essentially the, the, the control plane that goes and does stuff on other clouds. We would have to essentially go and build that level of integration into the other clouds. And I think, you know, as we get more popularity and as as more products come online through these services I think we'll listen to what customers want, whether it's you know, maybe it's the other way around too, Dave maybe it's the fact that they want to use Oracle cloud but they want to use other complimentary services within Oracle cloud. So I think it can go both ways. I think, you know, kind of the market and the customer base will dictate that. >> Yeah. So if I understand that correctly, somebody from another cloud Google cloud could say, "Hey, we actually want to run this service on OCI coz we want to expand our market and..." >> Right. >> And if TK gets together with his old friends and figures that out but we're just, you know, hypothesizing here, but but like you said, it can, can go both ways. And then, and I have another question related to that. So you multi-clouds. Okay, great. Supercloud. How about the edge? Do you ever see a day where that becomes part of the equation? Certainly the, the near edge would, you know, a a home Depot or a Lowe's store or a bank, but what about like the far edge, the tiny edge. Do, do you, can you talk about the edge and and where that fits in your vision? >> Yeah, absolutely. I think edge is a interestingly, it's a, it's a it's getting fuzzier and fuzzier day by day. I think there's the term, you know, we, obviously every cloud has their own sort of philosophy in what edge is, right? We have our own, you know, it starts from, you know, if you if you do want to do far edge, you know, we have devices like red devices, which is our ruggedized servers that that talk back to our, our control plane in OCI you could deploy those things in like, you know, into war zones and things like that underground. But then we also have things like Cloud@Customer where customers can actually deploy components of our infrastructure, like Compute or Exadata into a facility where they only need that certain capability. And then a few years ago we launched, you know, what's now called Dedicated Region. And that actually is a, is a different take on edge in some sense where you get the entire capability of our public commercial region, but within your facility. So imagine if, if, if a customer was to essentially point to, you know, point to, point a finger on a commercial map and say, "Hey, look, that region is just mine." Essentially, that's the capability that we're providing to our customers, where if you have a white space if you have a facility if you're exiting out of your data center space you could essentially place an OCI region within your confines behind your firewall. And then you could interconnect that to a cloud provider if you wanted to. and get the same multi-cloud capability that you get in a commercial region. So we have all the spectrums of possibilities there. >> Guys, super interesting discussion. 
It's very clear to us that the next 10 years of cloud ain't going to be like the last 10. There's a whole new layer developing. Data is a big key to that. We see industries getting involved. We obviously didn't, didn't get into the Oracle Cerner acquisitions a little too early for that but we we've actually predicted that companies like Cerner and you've seen it with Goldman Sachs and Capital One, they're actually building services on the cloud. So this is a really exciting new area and I really appreciate you guys coming on the Supercloud22 event and sharing your insights. Thanks for your time. >> Thank very much. >> Thank very much. >> Okay. Keep it right there. #Supercloud22. We'll be right back with more great content right after this short break. (upbeat music)
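One way to picture the multi-cloud control plane Karan and Kris describe, taking one cloud's primitives as inputs and translating them into actions on the other, is a simple mapping layer. The sketch below is purely illustrative: the class, the region pairing table, and the translation logic are invented for this example and are not Oracle's or Microsoft's actual APIs.

```python
# Hypothetical sketch of a control plane that accepts an Azure-style resource
# request and translates it into an OCI-style action. Names and mappings are
# illustrative only.
from dataclasses import dataclass, field

@dataclass
class AzureRequest:
    resource_type: str                  # e.g. "Microsoft.Network/virtualNetworks"
    name: str
    region: str                         # Azure region, e.g. "eastus"
    properties: dict = field(default_factory=dict)

# The service only operates where an OCI/Azure interconnect exists, so the
# control plane needs a pairing table (example pairings, not the real list).
AZURE_TO_OCI_REGION = {
    "eastus": "us-ashburn-1",
    "uksouth": "uk-london-1",
}

def translate(request: AzureRequest) -> dict:
    """Map an Azure-style request onto an OCI-style action (illustrative only)."""
    oci_region = AZURE_TO_OCI_REGION.get(request.region)
    if oci_region is None:
        raise ValueError(f"No interconnect pairing for Azure region {request.region!r}")
    return {
        "action": "create",
        "service": "virtual_network" if "virtualNetworks" in request.resource_type else "generic",
        "region": oci_region,
        "display_name": request.name,
        "details": request.properties,
    }

req = AzureRequest("Microsoft.Network/virtualNetworks", "app-vnet", "eastus", {"cidr": "10.0.0.0/16"})
print(translate(req))
```

The real service adds the pieces discussed in the interview on top of this idea: identity federation, latency-aware placement across the interconnect regions, and Terraform-driven automation.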
Day 2 Wrap Up | HPE Discover 2022
>>The Cube presents HPE Discover 2022, brought to you by HPE. >>Welcome back to the Cube's coverage. We're wrapping up day two, John Furrier and Dave Vellante. We've got some friends and colleagues, longtime friends: Crawford Del Prete is the president of IDC, and Matt Eastwood is the senior vice president of infrastructure and cloud. Guys, thanks for coming on and spending time. Great to see you guys. >>It's fun to do it. Awesome. >>Crawford, I want to ask you, and correct me if I'm wrong, but this was your first physical Directions as president. Is that true, or did you do one in 2019? >>No, we did one in '20. We did one in '20. I was president at the time, and then everything started. >>Well, how was Directions this year? You must have been stoked to get back together. >>It was great. It was actually pretty emotional, you know, it's a community, right? We have a lot of customers that have been coming to that event for a long, long time, and to stand up on the stage and look out and see people getting a little bit emotional, a lot of hugs and a lot of bringing people together. And this year in Boston we were the first event, really of any size, that came back. And I didn't see that coming, in terms of how ready people were to be together. >>When did you do it, April? >>In Boston? We did it in March. It was a game-day decision. We had negotiated it, we were going back and forth, and then I made the call at the last minute and said, let's go do it. And in Santa Clara, I felt like we were opening up the crypt at the convention center. All the production people said, you know what, you guys were really the first event to be back. And attendance was really strong; we got over a thousand. It was really good. >>Good. It's always fun when I was there. It's a big deal. You guys prepare for it. Some new faces up on the stage. So Matt, you've been doing the circuit. I take it, like all top analysts, you're super busy. This is kind of the end of the spring, I know it's summer, right? But how do you look at Discover relative to some of the other events you've been at? >>So I think if you go back to what Crawford was just talking about, our event in March, March was sort of the reopening, and people just felt so happy to be back out there. You still get a little bit of that at these events, because for each company it's their first time back at it, but I think we're starting to get down what these events are gonna feel like going forward. There's good energy here, there's been good attendance, and I think the interest in getting back live and having face-to-face meetings is clearly strong. >>Yeah. This definitely shows that hybrid's the steady state, both events and cloud: virtualization, remote. So what are you guys seeing with that hybrid mode? Just from a workforce standpoint, certainly people are excited to get back together, but it's gonna continue, and you're starting to see that digital piece. How is that impacting some of the customers you're tracking? Who's winning and who's losing coming out of the pandemic? What does the big picture look like? >>Yeah. If you take a look at hybrid work, people are testing many, many different models.
And I think as we move from a pandemic to an endemic, we're gonna have waves and waves of people needing that flexibility for a lot of different reasons, whether they have preexisting conditions, whether they're just not comfortable, whether they have people at home who can't be vaccinated. So I think we're gonna be in this hybrid work mode for a long, long time. I do think, though, that we are gonna transition back into some kind of a normal, and I think the big difference is that leaders back in the day, a long time ago, when people weren't coming into work, it was kind of like, oh, I know nothing's going on there, people aren't getting work done. I think we're over that stage. We're now into a stage where we know people can be productive, we know people can effectively work from home, and now we're into the reason to be in the office. And the reason to be in the office is that collaboration, it's that mentoring. Think about your 25-year-old self: do you wanna be staring at a windshield all day long and not building those relationships? People want face to face, it's difficult. They want face >>To face, and you guys have a great culture and it's a young culture. How are you handling it as an executive? Is there a policy for hybrid, or... >>Yeah, so at IDC, what we did is we're in a pilot period. We've said that the summertime is gonna be a pilot period, and we've asked people, we're actually surveying, shocker, we're >>Surveying, >>But we're actually asking people to work with their manager on what works for them. And then we'll come up with whether you are an out-of-the-office worker, which will be less than two days; a hybrid worker, which will be three days; or in the office, which is more than three days a week. And we all know there's variability in that, but that's kind of what we're shooting for, and we'd like to have that in place in the fall. >>Are you pretty much there? >>Yeah, I am. I am there three days a week, Mondays and Fridays, unless... >>Because you got the CEO radius, right? Yeah. <laugh> <laugh> >>The same way I'm in the office, the smaller office. But let's talk a little bit about the numbers we were chatting about earlier, trying to squint through. You guys are obviously the gold standard for what the market does. What happened during the pandemic, what happened in 2021, and what do you expect to happen in 2022 in terms of IT spending growth? >>Yeah. So this is a crazy time, right? We've never seen this. You and I have a long history of tracking this. We saw in 2020 the market decelerate dramatically. GDP went down to a negative, like it always does in these cases, probably negative six, in that kind of range, for the first time since I've been tracking it, which goes back over 30 years. Tech didn't go negative; tech went to just under 3% growth. And then as we went to 2021, we saw everything kind of snap back. We saw tech go up to about 11% growth, and then of course we saw GDP come back to about a 4% kind of range of growth. Now, I think the story there is that companies, and you saw this anecdotally everywhere, companies leaned into tech.
>>You know, Matt, you have a great statistic that 80% of companies used COVID as their point to pivot into digital transformation, right, and to invest in a different way. And so what we saw is that tech is now where companies need to focus. They need to invest in tech, they need to make people more productive with tech, and it played out in the numbers. So this year, what's fascinating is we're looking at two vastly different markets. We've got gasoline at $7 a gallon, we've got that affecting food prices. Interesting fun fact: it now costs over $1,000 to fill an 18-wheeler. This just kind of can't continue. So you think about it, don't put the boat >>In the water. Yeah. Good luck. Exactly. <laugh> >>So a family has kind of this bag of money, right? And that bag of money goes up by maybe 3, 4% every year, depending upon earnings. So that is sort of sloshing around. So if food and fuel and rent is taking up more, gadgets and consumer tech are not: you're gonna use that iPhone a little longer, you're gonna use that Android phone a little longer, you're gonna use that TV a little longer. So consumer tech is getting crushed, and you saw it immediately in ad spending, you've seen it in Meta, you've seen it in Facebook. Consumer tech is doing very, very... it's tough. Enterprise tech: we haven't been in the office for two and a half years. We haven't upgraded, whether that be campus wifi, whether that be servers, whether that be commercial PCs, as much as we would have. So in enterprise tech we're seeing double-digit order rates, we're seeing strong, strong demand. Combine that with a component shortage and you're seeing some enterprise companies with a quarter's worth of backlog. I mean, that's really unheard of. >>At higher prices, which >>Also, and therefore that drives... >>It shouldn't be that way. If there's a shortage of chips, it shouldn't be that way. >>But it is, but it is. And then you look at software, and we've seen this in previous cycles, but we really saw it in the COVID downturn, where in software the stickiness of SaaS means you're just not gonna take that stuff out. So the second half of last year we saw double-digit growth rates in software, surprise. We're seeing high single-digit revenue growth in software now, and that we think is gonna sustain, which means that overall IT demand we expect to be between five and 6% this year. Okay, fine, we have a war going on, we have potentially a recession. We think if we do, it'll be with a lowercase r; maybe you see it banded down to maybe 4% growth, but it's gonna grow this year. >>Is it both the structural change of the disruption of COVID plus the digital transformation together? Or is it... >>I think you make a great point. I think that we are entering a new era for tech. Andreessen's famous Wall Street Journal op-ed 10 years ago, "software is eating the world," was absolutely correct, and now we're finding that software is eating into every nook and cranny. People have to invest. They know disruptors are coming around every single corner, and if I'm not leaning into digital transformation, I'm dead. So... >>The number of players in tech is growing, >>Cuz there's, well, the number of players in tech... >>Industry's coming >>In. Yeah.
The industry's coming in. So I think the interesting dynamic you're gonna see there is now we have high interest rates, which means that the price of funding these companies, buying them and putting debt on them, is gonna get higher and higher, which means I think you could see another wave of consolidation, because large installed-base tech companies are saying, oh, you know what, I like that now. >>409As are being reset too. That's another point. >>Yeah. I mean, if you think about this transformation, it's all about apps and data, and differentiating with data. The big winner the last couple of years was cloud. And I would just say this is the first potential recession we're talking about where the cloud service providers, cloud as an operating model, not necessarily a destination, these cloud service providers have actually never experienced a slowdown. And if you think about the numbers, 30% of the typical IT budget is now quote-unquote cloud, and 30% of all expenditures are IT related. So there's a lot of exposure there, and I think you're gonna see a lot of focus on how we can rationalize some of those investments. >>Well, that's a great point. I want to just double down on that. So the cloud did well during the pandemic, we saw that with SaaS. Have you guys tracked the TAMs of what got pulled forward? There's a big discussion about what got pulled forward because of the pandemic, like Zoom, for instance, obviously everyone's using Zoom. Was there fake TAM? There were a couple of analysts pointing out that some companies that were hot during the pandemic will go away, that that TAM doesn't really exist, but there's some that got pulled forward early, and that's where the growth is. So is there a line between the, I'll call it fake TAM, or pulled-forward TAM that was only for the pandemic situationally? Devices might be one; virtual event software was one, I know Hopin had a lot of layoffs, so that was kind of coming and going. And you got SaaS, which got pulled forward, and it's not going away, but it's >>Sustaining. Yeah. But it's sustaining. I definitely think there was a lot of spending that absolutely got pulled forward, and I think it's really about CEOs' ability to control expectations and to message what it looks like. Look, I think virtual event platforms probably have a role. You can definitely raise your margins in the event business significantly using those platforms. There's a role for them. But if you were out there thinking that this thing was gonna continue, that was unrealistic. Dave, to your point on devices, I think we definitely got ahead of our expectations, and things like consumer PCs will go back to historical growth >>Rates. Yeah. I mean, the install base is pretty young right now, but I think one way to look at it too is there was some technical debt brought in, because people didn't necessarily expect two years ago that we'd be moving to a permanent hybrid state. So now we have to actually invest in both. We have to create a little bit more permanency around the hybrid world.
And then also, like Crawford's talking about, the permanency of having an office and having people work in multiple modes. Yeah. It actually requires investment in both the office and... >>So you're saying operationally you gotta run the company and do the digital transformation to level up the hybrid. >>Yeah. Just the way people work, right? I mean, even for us internally, Crawford was saying we're experimenting with what works for us. My team before the pandemic was like one-third virtual; now it's two-thirds virtual, which means that all of our internal meetings are gonna be on Teams or Zoom. They're not gonna necessarily be, hey, just come into the office today, cuz two-thirds of people aren't in the Boston area. >>Right. Matt, you said if you see cloud as an operating model, not necessarily a place. I remember when you were out, I was on the Zoom when I first met Adam Selipsky, and you were asking him about the on-prem guys, and he's like, nah, it's not cloud. He was very dismissive of it. I wanna get your take on what we're seeing with as-a-service: GreenLake, APEX, Cisco's got their version, IBM's doing it. Is that cloud? >>I don't think all of it is by default. I actually think what HPE is doing is cloud, because it's really about how you present the services and how you allow customers to engage with the platform. So they're actually creating a cloud model. I think a lot of people get lost in the transition from CapEx to OPEX and the financing element of this. But the reality is, what HPE is doing, and they're sort of setting the standard for the industry here, is actually setting up what I would consider a cloud model. >>Well, in the early days of GreenLake, for sure, it was more of a financial, you >>Know, it was kind of bespoke, right. But now you've got 70 services, and so you can build that out. But >>You know, we were talking to Keith Townsend right after the keynote, and we were sort of unpacking it a little bit. And I asked the question, if you had to pin this in terms of AWS's maturity, where are we? And the consensus was 2014. Is that fair or unfair? >>Oh, that's a good question. I think, well, cloud's come a long way, right? So I think 2014's probably a little bit too far back, because >>You have more modern tools, Kubernetes. Yeah. >>But you also have, I would say, the market still getting to a point of readiness in terms of buying this way. So if you think about HPE's strategy around edge, the core platform as a service, we're all big believers in edge, and the apps follow the data, and the data's being created in new locations, and you gotta put the infrastructure there. And for an end user there's a lot of risk there, because they don't know how to actually plan for capacity at the edge. So they're gonna look to offload that, but this is a long-term play to actually build out and deploy at the edge. It's not gonna happen tomorrow. It's a five-, 10-year play. >>Yeah. I mean, I like the operating model.
I'd agree with you, Matt, that if it's cloud operations, DevSecOps and all that jazz, it's cloud, it's cloud operating, and public cloud is a public cloud hyperscaler on premise. And the storage folks presented that single pane of glass. That's an old-school concept, but cloud based. Shipping hardware that auto-configures. That's the kind of consumption they're going for now. I like it. Then they've got the partner-led thing, the partner piece. How do you guys see that? Because if I'm a partner, there's two things: wait a minute, am I a bottleneck to the direct self-service? Or is that an enabler to get more cash, to make more money, if I'm a partner? Cause you see what Accenture's doing, what they do with Amazon and Deloitte, etc. It's interesting, right? If I'm a channel partner, I'm making more cash. >>Yeah. Well, and those channel partners are all in transition too. They're trying to figure out, what are their managed services gonna look like? What kind of applications are they gonna stand up? They're not gonna just be >>Reselling. They bought a big house and a boat; the box is not selling. I wanna ask you guys about growth, because the big three clouds, big four, are growing, pick a number, I dunno, 30, 35% revenue, big. And like you said, it's 30% of the business now. I think Dell's growing double digits; I don't know how much of that is sustainable, a lot of that is PCs, but still strong growth. I think Cisco has promised 9% >>In that range. Right. >>About that, something like that. I think IBM, Arvind, is at 6%. And I think HPE has said, hey, we're gonna do three to 4%, which is really sort of lagging, and which I think a lot of people on Wall Street look at like, okay, well, that's not necessarily so compelling. What does HPE have to do to double that growth, or even triple that growth? >>Yeah. So obviously you're right. Being able to show growth is tantamount to this company getting more attention, more heat, from investors. I think they're rightly pointing to the triple-digit growth that they've seen on GreenLake. If you look at the trailing 12-month bookings, you've got over 7 billion, which means that in a year you're gonna have a significant portion of the company as a service, and you're gonna see that revenue ratably being recognized over a series of months. So I think this is sort of the classic SaaS trough that we've seen, applied to an infrastructure company, where you basically have to be in the desert for a long time. I think the most important number for HPE right now is that GreenLake bookings number. >>And if you look at that number and see that number rapidly come down, which it hasn't, I mean, off a very large number you're still in triple digits... they will ultimately start to show revenue growth in the business. And I think the one thing people are missing about HPE is there are a lot of companies that want to build a platform, but they're small and nobody cares; let's say they throw a party and nobody comes. HP has such a significant installed base that if they do build a platform, they can attract partners to that platform. What I mean by that is partners that deliver services on GreenLake that they're not delivering.
They have the girth to really start to change an industry and change the way stuff is being built. And that's the bet they're making. And frankly, they are showing progress in that direction. >>So I buy that. But the one thing that concerns me is they kind of hide the ball on services, right? And I worry about that: is this a services kind of, you know, same wine, new bottle? Or... >>Yeah. So I would argue that it's not about hiding the ball. It's about eliminating confusion in the marketplace. This is the company that bought EDS only to spin it off <laugh>. Okay. And so you don't wanna have a situation where you're getting back into services. >>Yeah. They're not the only ones. >>They're not the only ones who do it; I mean, look at the way IBM used to count, and still does. >>I get it. I get it. But I think it's really about clarity of mission. >>Well, with Pointnext they are in the services business, absolutely. Point being, it's an important profit >>Driver for them at the top. Right. For the Global 50, there's still a lot of uniqueness in what they want to buy, so there's definitely a lot of bespoke kind of delivery that's still happening there. The real promise here is when you get into the Global 2000 and can start getting them to consume very standardized offers. And then the margins are healthy. >>And they're at, what, 33%? I think 34% last quarter gross margin. That's solid. Just compare that with Dell, which is, I don't know, they're happy with 20, 21%, correct? You get that, which is... I'll come back. Go ahead, I wanna ask... >>Guys, no, I wanna just... he said one thing I like, which was, I think he nailed it. They have such a big install base. They have a great channel. They know how to use it. That's a real asset. And Microsoft, I remember when their stock was trading at 26 when Ballmer was CEO. What they did, now, they had Office and Windows, so a little bit different, but similar strategy: leverage our install base, bring something up to them. That's what you're kind of connecting. >>Absolutely. You have this velocity machine with a significant girth that you can now move to a new model. They move that to a new model, to Matt's point, they lead the industry, they change the way a large swath of customers buy, and you will see it in steady revenue growth over time. >>So your point is the focus, and it's the right focus. And I would agree. What's >>What's the other move? What's their other move? >>The problem. Triple-digit bookings growth off a number that gets bigger. >>Inspired. Okay. >>What's the scoreboard? Okay, now they gotta go get the growth. That's the scoreboard. What are the signals you're looking at on the scoreboard, Crawford and Matt, in terms of success? What are the benchmarks? Is it ecosystem growth, number of services, triple-digit growth? What are some of the metrics that you guys are gonna be watching, and we should be watching? >>Yeah. I mean, I dunno if >>You wanna jump in... I mean, I think the ecosystem's really critical. And you need to sell both ways: HPE needs to be selling their technology on other cloud providers, and vice versa, you need to have the VMwares of the world offering services on your platform and capturing some motion off that. I think that's pretty critical. The channel, definitely.
I mean, you have to help, and what you're gonna see happen there is there will be channel partners that succeed in transforming, and there'll be a lot that go away, and some of that's generational; there'll be people that just kind of age out of the system and go home. >>Yeah. So I would argue it's gonna be bookings growth rate, it's gonna be retention rate of the customers that they have, and then ultimately you're gonna see revenue growth, and that revenue growth is gonna have to be correlated to the bookings growth for GreenLake. >>What's the Achilles heel on HPE? If you had to do the SWOT, what's the W for HPE that they really need to pay >>Attention to? I mean, they need to continue their relentless focus on cost, particularly in the core compute segment. They need to be as cost effective as possible while the higher-profit dollars associated with GreenLake and other services come in, and then increase the overall operating margin and gross margin >>Picture. I mean, I think the biggest thing is they just have to continue the motion that they've been on, and they've been consistent about that. Where you see others have kind of slipped up is when you go to customers and you present the OPEX as-a-service and the traditional CapEx side by side, and the customer is put in this position of trying to detangle what's in that OPEX service. You don't wanna do that, obviously. And HP has not done that, but we've seen others kind of slip up. >>A lot of companies still wanna buy CapEx, right? Absolutely. And I think... >>But you shouldn't do that bake-off by putting those two offers out. You should basically ascertain what they want to do. >>That's kind of what Dell does, right? Hey, what do you want? We got this, we got >>That. On one hand we got this, on the other we got that, right, the two-handed sales rep. Now, this CapEx thing's interesting. If you're Amazon and Azure and GCP, what are they thinking right now? Cause remember, four years ago Outposts was launched, which is essentially hardware with the cloud operating model; they're essentially bringing Outposts on-prem. So if you're Amazon and Azure, is this a blip on the radar for them? What are they thinking, in your mind? If we're in their office, in their brain trust, are they laughing, or are they scared? Is this a real threat? An opportunity? >>I mean, I wouldn't say they're laughing at all. I would say they're probably discounting it a little bit and saying, okay, fine, that's a strategy that a traditional hardware company is moving to. But I think if you look underneath the covers, two years ago it was pretty basic stuff they were offering. Now you start getting into HPC as a service, you start getting into data fabric, you start getting into some of the more sophisticated services that they're offering. And my take on what's interesting about HP is that they're not gonna go after the 250 services that Amazon's offering; they're gonna basically have a portfolio of services that really focuses on the core use cases of their infrastructure set.
And I think one of the danger things, one of the red flags, would be if they start going way up the stack and wanting to offer the entire application stack. That would be like a big flashing warning sign, cuz it's not their sweet spot, it's not what they have. >>So machine learning and quantum, okay. One you can argue might be up the stack; machine learning and quantum should be in their wheelhouse. >>I would argue machine learning is not up the stack, because what they would focus on is inference, they'd focus on learning. It would be different if they came out and said machine learning all the way up to what a drug discovery company needs to do. >>So they're bringing it down. >>Yeah. Well, no, I think they're focusing on that middle layer, right? That data layer. And I think that helping companies manage their data, make more sense out of their data, structure their data, that's core to what they wanna do. >>I feel as though what they're doing now is table stakes, honestly. I do feel like, okay, hey, finally. I say the same thing about APEX, you >>Know, we finally got... >>It's like, okay guys, the >>Party. Great. Welcome to the party. >>But the one thing I would just say about AWS and the other big clouds is they might be a little dismissive of what's truly gonna happen at the edge. I think the traditional OEMs that are transforming are really betting on that edge being a huge play and a huge differentiator for them, where the public clouds obviously have their own bets there. But I think they were pretty dismissive initially about how big that would be. >>And I don't think anybody's really figured out the edge yet. >>Well, it's a battleground. That's what he's saying. I think you're >>Saying, but on the ecosystem, I wanna say up the stack, I think it's the ecosystem that's gotta fill that out. You gotta see more governance tools and catalogs and AI tools. >>It immediately goes more vertical when you go edge; you're gonna have different conversations, and >>They're >>Lacking. Yeah. But they're in there, though. They're in the verticals. HP's in the... yeah, >>For sure. But they gotta build out an ecosystem. Like, you walk around here, the data, the number of data companies here. I mean, Starburst is here. I'm actually impressed that Starburst is here, cause I think they're a forward-thinking company. I wanna see that times a hundred. Right. I mean, that's... >>You see HP's in all the verticals. That's, I think, the point here. >>So they should be able to attract that ecosystem and build that flywheel. That's the hallmark of a cloud, that marketplace. >>Yeah, it is. But again, I go back to: they really gotta stay focused on that infrastructure and data management. >>But they'll be focused on that, but their ecosystem... >>Their ecosystem will then take it up from there. And I think that's the next stage. >>And that ecosystem's gotta include OT players and communications technology players as well, right? Because that stuff gets kind of sucked up in that edge play. Do >>You feel like HPE has a leg up on that, a little bit of a lead, or is it pretty much an even race right now? >>I think the big infrastructure companies have all had OEM businesses, and they've all played there.
It's also helping those OT players actually convert their own needs into more of a software play, and not so much a >>Physical one. You guys both have been following HP and HPE for years. They've been on the edge for a long time; I've been focused on this edge. Now, they might not have the product traction, that's right, or they might not develop as fast, but industrial OT and IoT, they've been talking about it, focused on it. I think Amazon was mostly like, okay, we gotta get to the edge and into the enterprise. And I think HP's got a leg up, in my opinion, on that. My question is, can they execute? >>Yeah. I mean, PTC was here years ago on stage talking >>About it. But if you think about the edge, right, I would argue one of the best acquisitions this company ever did was Aruba. It basically changed the whole conversation on the edge, changed the whole conversation. >>It >>Became GreenLake. It was GreenLake. >>Well, it became a big department. They gave it a big... but, I mean, they went after selling Edgeline servers, and frankly it's very difficult to gain traction there. Aruba, huge area. And I think the March announcement was when they brought Aruba management into... Yeah. >>Totally. >>Last question. Love >>That. >>What are you guys saying about the Broadcom-VMware acquisition? What are the implications for the ecosystem, for companies like HPE, and just generally for the IT business? >>Yeah. So >>You start. Yeah, sure, I'll start there. So look, we've spent some time going through it, spent some time speaking to the folks involved, and I gotta tell you, I think this is a really interesting moment for Broadcom. This is Broadcom's opportunity to basically build a different kind of conversation with developers, to try to invest in. I mean, just for perspective, right, these numbers may not be exact, and I know a dollar is not a dollar, but in 2001, anybody remember what HP paid for Compaq? >>Eight billion? 20? >>So 25 billion, 25 billion. Wow. VMware just got sold for 61 billion. Okay. That gives you a perspective. Now again, I know a dollar then is not a dollar >>It's still big numbers, >>In 2022. So having said that, if you just did it to basically build your DCF model and say, okay, over this amount of time I'll pay you this, and I'll take the money out over this period of time, which is what people have criticized them for, I think that's a little shortsighted. I think this is Broadcom's opportunity to invest in that product and really try to figure out how to get a seat at the table in software and pivot their company to enterprise software in a different way. They have to prove that they're willing to do that, and then, frankly, that they can develop the skills to do that over time. But I do believe this is a different... this is a pivot point. This is not >>CA. This is not CA. >>It's not CA. >>In my mind, it can't be CA; they would destroy too much. Now, you and I, Dave, had some conversations on Twitter. I don't think it's the step up to them sort of thinking differently about semiconductors, die, doing some custom silicon. I don't think that's it. Yeah. I agree with that. Yeah.
I think this is really about aspiration for them, pivoting the company. They could >>Justify the >>Price. Getting a seat at the adults' table in software is... >>Well, if Broadcom has been squeezing their supply, we all hear the scuttlebutt. If they're squeezing, they can use VMware to justify the prices. Maybe use that hostage, and that installed base. That's kind of my conspiracy. >>I think they've told us what they're gonna do. <laugh> I do. >>Maybe it's not like CA. What's your conspiracy theory, like Symantec? But what >>Do you think? Well, I mean, there's still... so, VMware, there's really nobody that can do all the things that VMware does, say. So it's really impossible for an enterprise to just rip 'em out. But obviously you can sour people's taste, and you can very much influence the direction they head in with their collection of providers. One interesting thing here is that 37% of VMware's revenue is sold through Dell. So there are lots of dependencies. It's not as simple as... I think, John, you're right, you can't just pull the CA playbook out and rerun it here. This is a lot more complex. It's a lot more volume of distribution, but a fair amount of VMware's install >>Base, Dell's influence is still there, basically >>Is in the mid-market. It's not something that they're gonna touch directly. >>You think about what VMware did. They kept adding new businesses, buying new businesses. Is the security business gonna stay? >>Networking, security, I think, are interesting. >>Same >>Customers >>Over and over. They haven't done anything; VMware has the same customers. What new >>Customers? So imagine simplifying VMware. It becomes a different equation. It's really interesting. And to your point, I mean, I think Broadcom is... I mean, Tom Krause knows how to run a business. >>Yeah, he knows how to run a business. I think it's gonna be an efficient business, it's gonna be a well-run business, but I think it's a pivot point for >>Broadcom. It's amazing to me: Broadcom sells to HPE, they sell to Dell, and they've got a market cap that's 10x, you know? All right, we gotta go, guys. Awesome. Great conversation, guys. >>Thanks a lot. Thanks for having us on. >>Okay. Listen, day two is a wrap. We'll be here tomorrow, all day. Dave Vellante, John Furrier, Lisa Martin. Lisa, hope you're feeling okay. We'll see you tomorrow. Thanks for watching the Cube, your leader in enterprise tech live coverage.
Day One Wrap | HPE Discover 2022
>>The Cube presents HPE Discover 2022, brought to you by HPE. >>Hey everyone. Welcome back to the Cube's day one coverage of HPE Discover 22, live from the Venetian in Las Vegas. I've got a power panel here: Lisa Martin with Dave Vellante, John Furrier, and Holger Mueller also joins us. We are gonna wrap this like you've never seen a wrap before, guys. Lots of momentum today, lots of excitement, about 8,000 or so customers, partners, and HPE leaders here. Holger, let's go ahead and start with you. What are some of the things that you heard, felt, saw, observed today on day one? >>Yeah, it's great to be back in person, right? 8,000-people events are rare. I'm not sure, have you been to more than 8,000? <laugh> >>Yeah, yeah. Okay. This year? This year. I mean, historically, yes, but... >>Snowflake was 10. Yeah. >>So, oh, wow. Okay. So 8,000 was my... >>Cisco was, they said, 15. >>But 8,000 is my record; one let us down with 7,000, kind of, but that's in the Florida swamp, it's not nice like this. And there's >>Usually, what, Sapphire, there's usually >>20, 30, 40, 50. I remember 50 in the nineties, right? That was a different time. But yeah, it's interesting what people do, and it depends how much time there is to come and know that it happens, right? But yeah, no, I think it's interesting. We had a good two-analyst track today. Interesting, like, HPE is kind of back, not being your grandfather's HPE, to a certain point. One of the key stats, I know Dave always goes for the stats, right, is what I found really interesting: that over two-thirds of GreenLake revenue is software and services. Now, I'd love to know how much of that is services and how much of that is software. I mean, I provoked some of the HPE executives in one-on-ones, saying, hey, you're a hardware company, right? And they didn't even come back at it. Antonio said, no, two-thirds is software and services, right? That's interesting. They passed the one exabyte being managed as a hallmark. I was surprised it was only 120,000 users, if I remember the number right, so that doesn't seem a terribly high number of users, but that's promising. >>So what software is in there? Cuz it's gotta be mostly services. >>Right? Well, it's the 70-plus cloud services, right, that everybody's talking about, where they added eight of them. Shockingly, backup and recovery; I thought that was done at launch, right? >>Still, who >>Keeps recycling storage and backup? But now it's real. Yeah. >>But for the company who knows the enterprise, right, HPE, what have customers been doing before with no backup and recovery in GreenLake? So it was kind of like, okay, we really want to do this now, and then nearly say, oh, by the way, we've been doing this all the time. Yeah. >>What's your take on the installed base of HP? We had that conversation at the kickoff around who's their target, what does the target audience and environment look like. It certainly is changing, right? If it's software and services, GreenLake is resonating, the ecosystem's responding. Who are their customers? Cuz managed services are up too, Kubernetes, all the managed services. What does their IT transformation base look like? >>Much of it is of course the install base, right? The trusted 20-, 30-plus-year HP customer who keeps doing stuff with HP, right, and now calls it GreenLake. They've been through so many name changes, it doesn't really matter.
And it's kind of nice that you get the consumption model, pay only for what you consume, right? You get the cloud brought to you. Then there's the general market, of course: people who still need to run stuff on premises. And there are three reasons for doing this. Performance, right, because we know the speed of light is relative; if you're in the Southern hemisphere and your email server is in the Northern hemisphere, it takes a moment for your email to arrive, and it's a very different user experience. Local legislation for data residency and privacy. And then, I mean, Charles Phillips, who we all know, right, formerly of Infor, nicely always said, hey, if the CIO is over 50, I don't have to sell cloud. Right? So there's the not-invented-here: I'm not gonna do cloud here. And now I've kind of got cloud with something like HPE GreenLake. That's the customers. And then of course procurement is a big friend, right? Because when you do a hardware refresh, you have to have two or three competitors, and who are the two or three competitors left? There's Dell, and then maybe Lenovo. Right? So... >>A little bit of the channel strength, the procurement position of strength, of course the install base question. Do you think they have a Microsoft opportunity, what 365 was? Microsoft had Office before 365, but they brought in the cloud and then everything changed. Does HP have that same opportunity with kind of the GreenLake model with their existing stuff? >>It has a GreenLake opportunity, but there's not much software left. It's a very different situation than Microsoft, right? There isn't much which HP could bring along to say, now run it with us better in the cloud, because they've sold off much of it, most of their software portfolio, which they bought as HP in the past. So I don't see that happening so much, but GreenLake as a platform itself is of course interesting, because enterprises need a modern container-based platform. >>I want to double-click on this a little bit, because the way I see it is HP is going to its installed base and, I think you guys are right on, saying this is how we're doing business now, come on along. But my sense is some customers don't want to do the consumption model. There are actually some customers that say, hey, I don't have a cash flow problem, I wanna pay for it up front, and leave me alone. >>I've been doing this for 50 years, why change it now? <laugh> >>They know money, they want to own it. And I don't wanna rent, because rental's more expensive, and blah, blah, blah. So do you see that in the customer base, that some are pushing back? >>Of course. Look, I have a German accent, right? So I go there regularly, and the Germans are, like, worried about doing anything in the cloud. And if you go to a board in Germany and say, hey, we can pay our usual hardware refresh, CapEx as usual, or should we go consumption, and then they might know what we are running? <laugh> So, no offense against the Germans, the German part is there, but many of them will say, hey... but this has changed with COVID, right? Which is super interesting. The traditional, non-technical boards have been hearing about this cloud variable cost, OPEX versus CapEx, and all of a sudden there's so much CapEx, right? Office buildings which are not being used, truck fleets.
So there's a whole new sensitivity by traditional non-technical boards towards CapEx, where now the light bulb went on and they say, oh, that's what the cloud thing is about also. So we have to find a way to get our cost structure to ramp up and ramp down as our business might be ramping up and down, through COVID, through now inflation fears, recession fears, and so on. >>So, okay. HPE's made the statement that anything you can do in the cloud you can do in GreenLake. Yes. And I've said you can't run Snowflake, you can't run Mongo Atlas, you can't run Databricks, but that's okay. That's fine. I think they're talking about >>A short list of things. I think they're talking about their >>Stuff, their... >>The operating experience. So we've got single sign-on through a URL, right? You've got some level of consistency in terms of policy, it's unclear exactly what that is. You've got storage, backup, DR, some other services, seven other services. If you had to take your best guess as to where HPE is now and peg it toward where Amazon was, in which year? >>2014. 2014. >>Yeah. Where they had their first conference, or the second re:Invent, here with 3,000 people, and they were thinking, hey, we're big. >>Yeah. And I think GreenLake is the building blocks. So that's the >>Building. Right? I mean, similar. >>Okay. Well, I mean, they had EC2 and S3 and SQS, right, that was the core, and then the rest of those services, I mean, Beanstalk was one of the first that came in behind. >>And in fairness, the industry has advanced since then, Kubernetes is further along, and so HPE can take advantage of that. But in terms of just the basic platform, I would agree. I think it's >>Well, I think the software question's a big one I wanna bring up, because the question is, software is eating the world; hardware... really, software scales everything: data, the edge story. I love their story. I think HP's story is wonderful: Aruba, hybrid cloud, good story, edge. But if you look under the covers, it's weak, right? It's not software, they don't have enough software juice. But the ecosystem opportunity to me is where you plug and play, and HP knows that game. If you look historically over the past 25 years, HP, now HPE, they understand plug-and-play interoperability. So the question is, can they thread the needle >>Right. >>Between filling the gaps on the software? Yeah. With partners. >>Can they get the partners? Right. And they have been, for a long, long time. For a long time, HP has been the number one platform under SAP, right? Same thing, you get certified for running this. I know from my own history, I joined Oracle last century, and the big thing was, let's get your E-Business Suite certified on HP, right? As if somebody would buy HP if Oracle didn't work on it. This is 20 years ago, on the server side. >>The original Exadata was HP. Oracle. >>Exactly. Exactly. So there's this thinking that's there. But I think the key thing is, we know that all modern... forget about the hardware, the form factors, the platforms, right? All modern software has to move to containers, and Snowflake runs in containers, you mentioned that, right? If customers force Snowflake and HPE to the table, there will be a way to make it work, and that will help HPE to be the platform; the partners will bring the software. >>I think that's an opportunity, because that changes the game in agility and speed.
If HP plays their differentiation right... which we asked on their opening segment, what's their differentiation? They've got size, scale, channel >>To the enterprise. And then the big benefit is this workload portability thing, right? You understand what is running in the public cloud, I need to run it locally, for whatever reason: performance, local residency of data. I can move that. That's the big benefit to the ISVs, the software vendors, as well. >>But they have to have a stronger data platform story, in my opinion. That's right. I mean, you can run Oracle on HPE, but there's no reason they shouldn't be able to do a deal with Snowflake. We saw it with Dell, we saw it with Pure. And if I were HPE, I'd be saying, hey, because of the way the Snowflake deal worked, you probably know this, you're reading data into the cloud, the compute actually occurs in the cloud. Why isn't HPE going to Snowflake and saying, we can separate compute and storage, and we have GreenLake, we have on-demand? Why don't we run the compute on-prem and make it a first-class citizen for all of our customers' data? That would be really innovative. And I think Mongo would be another; they've got on-prem. >>And the question is, how many Snowflake customers are telling Snowflake, can I run you on premises? And will that land on deaf ears or open ears? Right? This is >>Why would they do the Dell deal? >>They did do that deal, though. >>I think they did that deal because the customer came to them and said, do exactly that deal, we're gonna spend the... >>Snowflake >>Customers... crazy things happen, right? Even putting an Oracle database in a Microsoft Azure data center, right? Who would've thought that was >>Possible? Snowflake, >>Oracle, and so on. The >>Snowflakes of the world have to make a decision, Dave, on: is it all Snowflake all the time? Because the reality is, and again, this comes back down to the track that HP could go up or down on, it's gonna be about software. Open source is now the software industry. There's no such thing as proprietary software, in my opinion, relatively speaking; cloud-scale and integration software is proprietary, the workflows are proprietary. So if they can get that right with the partners, I would focus on that. I think they can tap open source. Look at Amazon with open source: they sucked it up and they integrated it in. So integration is the deal, not >>Software first. But Snowflake's made the call. You were there, Lisa. They're basically saying you have to be in Snowflake in order to get the governance and the scalability and all that other wonderful stuff. Oh, but we'll do Apache Iceberg, we'll open it up, we'll do Python. >>But you can't do a data clean room unless you are in Snowflake. Exactly. Snowflake on Snowflake. >>Exactly. >>But, got it, isn't that what you heard from AWS all the time, till they came out with Outposts, right? I mean, Snowflake is a market leader for what they're doing, so why would they want to change their platform? Kudos to them, they don't need to change the platform. They will be the last to change their platform to anything on premises. But I think the trend already shows that it's going that way. >>Well, if you look at Outposts as a signal, Dave: the success of Outposts. It launched, what, four years ago they announced it? >>What? >>EKS is beating what Outposts is doing. Outposts is there.
There's not a lot of buzz, and if you talk to the insiders and the open source community, EKS and containers, to your point, are moving faster on, I won't say commodity hardware, but white box, or HP, Dell, whatever it's gonna be. That scale differentiation and the edge story is a good one. And I think with what we're seeing in the market now, it's the industrial edge. Gen one of cloud was the back office: back office, data center. Now it's hybrid. The focus will be the industrial edge, machine learning and AI, and they have it here. And there are some early conversations; I heard it this morning, you guys interviewed John Schultz, right, with the World Economic Forum's Kay Firth-Butterfield. She was amazing. And then you had Justin Hotard bring up quantum. Yes. That is a differentiator. >>HP. >>Yes. Yeah. They have the computing chops. They have the R&D. Can they bring it to the table >>As HPC, right, like what they showed with the Frontier system. Right. So, very impressed. >>So the ecosystem is the key for them, because that's how they're gonna fill the gaps. They can't only, >>They could play the HPC edge piece. I wouldn't count 'em out of that game yet. If you co-locate a box, I'll use the word box, particularly at a telco tower, that's a data center. Yep. Right. If done properly. Yep. So, you know, what Outposts was supposed to do actually is a hybrid opportunity. Aruba >>Gives them a unique, >>But the key thing is, right, it's a yin and yang, right? It's the ecosystem, it's partners, to bring those software workloads. Absolutely. Right. But HPE has to keep the platform attractive enough. Right. And the key thing there is that you have this workload capability where you can bring things which you've built yourself. I mean, look at the telcos, right? Network function virtualization, thousands of man-years into these projects. Right. So if I can't bring it to your edge box, no, I'm not going to your edge box. Right. >>Holger, I gotta ask you, and Dave too, since you guys are both here, and Lisa: you know, I said on the opening, they have serious customers and those customers have serious problems, cyber security, ransomware. So yeah, IT transformation, industrial transformation, machine learning, check, check, check. Oh, sounds good. But at the end of the day, their customers have some serious problems. Right? Cyber, this is high-stakes poker. Yeah. What do you think HPE's position is in security? You mentioned containers, you got all this stuff, you got open source, supply chain, you have shift-left supply chain issues. What is their position with security? Cuz that's the big one. >>I think they have to have the mature attitude that customers expect from HPE. Right? I don't have to educate HPE on security. So they have to have the partner offerings; again, we're back at the ecosystem, to support what you probably already have. So bring your own security, apart from what they have to have out of the box to do business with them. This is why the shocker this morning was backup and recovery coming. <laugh> It's kind of like important for that. Right? Well >>That's more ransomware and the >>More skeletons in the closet there, which customers should check, of course. But I think the expectation is HPE understands that and brings it along, either from a partner or natively. >>I think it's services.
I think point next is the point of integration for their security. That's why two thirds is software and services. A lot of that is services, right? You know, you need security, we'll help you get there. We people trust HP >>Here, but we have nothing against point next or any professional service. They're all hardworking. But if I will have to rely on humans for my cyber security strategy on a daily level, I'm getting gray hair and I little gray hair >>Red. Okay. I that's, >>But >>I think, but I do think that's the camera strategy. I mean, I'm sure there's a lot of that stuff that's beginning to be designed in, but I, my guess is a lot of it is services. >>Well, you got the Aruba. Part of the booth was packed. Aruba's there. You mentioned that earlier. Is that good enough? Because the word zero trust is kicked around a lot. On one hand, on the other hand, other conversations, it's all about trust. So supply chain and software is trusting trust, trust and verified. So you got this whole mentality of perimeter gone mentality. It's zero trust. And if you've got software trust, interesting thoughts there, how do you reconcile zero trust? And then I need trust. What's what's you? What are you seeing older on that? Because I ask people all the time, they're like, uh, I'm zero trust or is it trust? >>Yeah. The middle ground. Right? Trusted. The meantime people are man manipulating what's happening in your runtime containers. Right? So, uh, drift control is a new password there that you check what's in your runtime containers, which supposedly impenetrable, but people finding ways to hack them. So we'll see this cat and mouse game going on all the time. Yeah. Yeah. There's always gonna be the need for being in a secure, good environment from that perspective. Absolutely. But the key is edge has to be more than Aruba, right? If yeah. HV goes away and says, oh yeah, we can manage your edge with our Aruba devices. That's not enough. It's the virtual probability. And you said the important thing before it's about the data, right? Because the dirty secret of containers is yeah, I move the code, but what enterprise code works without data, right? You can't say as enterprise, okay, we're done for the day check tomorrow. We didn't persist your data, auditor customer. We don't have your data anymore. So filling a way to transport the data. And there just one last thought, right? They have a super interesting asset. They want break lands for the venerable map R right. Which wrote their own storage drivers and gives you the chance to potentially do something in that area, which I'm personally excited about. But we'll see what happens. >>I mean, I think the holy grail is can I, can I put my data into a cloud who's ever, you know, call it a super cloud and can I, is it secure? Is it governed? Can I share it and be confident that it's discoverable and that the, the person I give it to has the right to use it. Yeah. And, and it's the correct data. There's not like a zillion copies running. That's the holy grail. And I, I think the answer today is no, you can, you can do that maybe inside of AWS or maybe inside of Azure, look maybe certainly inside of snowflake, can you do that inside a GreenLake? Well, you probably can inside a GreenLake, but then when you put it into the cloud, is it cross cloud? Is it really out to the edge? And that's where it starts to break down, but that's where the work is to be done. That's >>The one Exide is in there already. Right. So men being men. Yeah. >>But okay. 
But it it's in there. Yeah. Okay. What do you do with it? Can you share that data? What can you actually automate governance? Right? Uh, is that data discoverable? Are there multiple copies of that data? What's the, you know, master copy. Here's >>A question. You guys, here's a question for you guys analyst, what do you think the psychology is of the CIO or CSO when HP comes into town with GreenLake, uh, and they say, what's your relationship with the hyperscalers? Cause I'm a CIO. I got my environment. I might be CapEx centric or Hey, I'm open model. Open-minded to an operating model. Every one of these enterprises has a cloud relationship. Yeah. Yeah. What's the dynamic. What do you think the psychology is of the CIO when they're rationalizing their, their trajectory, their architecture, cloud, native scale integration with HPE GreenLake or >>HP service. I think she or he hears defensiveness from HPE. I think she hears HPE or he hears HPE coming in and saying, you don't need to go to the cloud. You know, you could keep it right here. I, I don't think that's the right posture. I think it should be. We are your cloud. And we can manage whether it's OnPrem hybrid in AWS, Azure, Google, across those clouds. And we have an edge story that should be the vision that they put forth. That's the super cloud vision, but I don't hear it >>From these guys. What do you think psycho, do you agree with that? >>I'm totally to make, sorry to be boring, but I totally agree with, uh, Dave on that. Right? So the, the, the multi-cloud capability from a trusted large company has worked for anybody up and down the stack. Right? You can look historically for, uh, past layers with cloud Foundry, right? It's history vulnerable. You can look for DevOps of Hashi coop. You can look for database with MongoDB right now. So if HPE provides that data access, right, with all the problems of data gravity and egres cost and the workability, they will be doing really, really well, but we need to hear it more, right. We didn't hear much software today in the keynote. Right. >>Do they have a competitive offering vis-a-vis or Azure? >>The question is, will it be an HPE offering or will, or the software platform, one of the offerings and you as customer can plug and play, right. Will software be a differentiator for HP, right. And will be close, proprietary to the point to again, be open enough for it, or will they get that R and D format that, or will they just say, okay, ES MES here on the side, your choice, and you can use OpenShift or whatever, we don't matter. That's >>The, that's the key question. That's the key question. Is it because it is a competitive strategy? Is it highly differentiated? Oracle is a highly differentiated strategy, right? Is Dell highly differentiated? Eh, Dell differentiates based on its breadth. What? >>Right. Well, let's try for the control plane too. Dell wants to be an, >>Their, their vision is differentiated. Okay. But their execution today is not >>High. All right. Let me throw, let me throw this out at you then. I'm I'm, I'm sorry. I'm I'm HPE. I wanna be the glue layer. Is that, does that fly? >>What >>Do you mean? The group glue layer? I'll I wanna be, you can do Amazon, but I wanna be the glue layer between the clouds and our GreenLake will. >>What's the, what's the incremental value that, that glue provides, >>Provides comfort and reliability and control for the single pane of glass for AWS >>And comes back to the data. In my opinion. Yeah. 
>>There, there there's glue levels on the data level. Yeah. And there's glue levels on API level. Right. And there's different vendors in the different spaces. Right. Um, I think HPE will want to play on the data side. We heard lots of data stuff. We >>Hear that, >>But we have to see it. Exactly. >>Yeah. But it's, it's lacking today. And so, Hey, you know, you guys know better than I APIs can be fragile and they can be, there's a lot of diversity in terms of the quality of APIs and the documentation, how they work, how mature they are, what, how, what kind of performance they can provide and recoverability. And so just saying, oh wow. We are living the API economy. You know, the it's gonna take time to brew, chime in here. Hi. >><laugh> oh, so guys, you've all been covering HPE for a long time. You know, when Antonio stood up on stage three years ago and said by 2022, and here we are, we're gonna be delivering everything as a service. He's saying we've, we've done it, but, and we're a new company. Do you guys agree with that? >>Definitely. >>I, yes. Yes. With the caveat, I think, yes. The COVID pandemic slowed them down a lot because, um, that gave a tailwind to the hyperscalers, um, because of the, the force of massive O under forecasting working at home. I mean, everyone I talked to was like, no one forecasted a hundred percent work at home, the, um, the CapEx investments. So I think that was an opportunity that they'd be much farther along if there's no COVID people >>Thought it wasn't impossible. Yeah. But so we had the old work from home thing right. Where people trying to get people fired at IBM and Yahoo. Right. So I would've this question covering the HR side and my other hat on. Right. And I would ask CHS let's assume, because I didn't know about COVID shame on me. Right. I said, big California, earthquake breaks. Right. Nobody gets hurt, but all the buildings have to be retrofitted and checked for seism logic down. So everybody's working from home, ask CHS, what kind of productivity gap hit would you get by forcing everybody working from home with the office unsafe? So one, one gentleman, I won't know him, his name, he said 20% and the other one's going ha you're smoking. It's 40 50%. We need to be in the office. We need to meet it first night. And now we went for this exercise. Luckily not with the California. Right. Well, through the price of COVID and we've seen what it can do to, to productivity well, >>The productivity, but also the impact. So like with all the, um, stories we've done over two years, the people that want came out ahead were the ones that had good cloud action. They were already in the cloud. So I, I think they're definitely in different company in the sense of they, I give 'em a pass. I think they're definitely a new company and I'm not gonna judge 'em on. I think they're doing great. But I think pandemic definitely slowed 'em down that about >>It. So I have a different take on this. I think. So we've go back a little history. I mean, you' said this, I steal your line. Meg Whitman took one for the Silicon valley team. Right. She came in. I don't think she ever was excited that I, that you said, you said that, and I think you wrote >>Up, get tape on that one. She >>Had to figure out how do I deal with this mess? I have EDS. I got PC. >>She never should have spun off the PC, but >>Okay. But >>Me, >>Yeah, you can, you certainly could listen. Maybe, maybe Gerstner never should have gone all in on services and IBM would dominate something other than mainframes. 
They had think pads even for a while, but, but, but so she had that mess to deal with. She dealt with it and however, they dealt with it, Antonio came in, he, he, and he said, all right, we're gonna focus the company. And we're gonna focus the mission on not the machine. Remember those yeah. Presentations, but you just make your eyes glaze over. We're going all in on Azure service >>And edge. He was all on. >>We're gonna build our own cloud. We acquired Aruba. He made some acquisitions in HPC to help differentiate. Yep. And they are definitely a much more focused company now. And unfortunately I wish Antonio would CEO in 2015, cuz that's really when this should have started. >>Yeah. And then, and if you remember back then, Dave, we were interviewing Docker with DevOps teams. They had composability, they were on hybrid really early. I think they might have even coined the term hybrid before VMware tri-state credit for it. But they were first on hybrid. They had DevOps, they had infrastructure risk code. >>HPE had an HP had an awesome cloud team. Yeah. But, and then, and then they tried to go public cloud. Yeah. You know, and then, you know, just made them, I mean, it was just a mess. The focus >>Is there. I give them huge props. And I think, I think the GreenLake to me is exciting here because it's much better than it was two years ago. When, when we talked to, when we started, it's >>Starting to get real. >>It's, it's a real thing. And I think the, the tell will be partners. If they make that right, can pull their different >>Ecosystem, >>Their scale and their customers and fill the software gas with partners mm-hmm <affirmative> and then create that integration opportunity. It's gonna be a home run if they don't do that, they're gonna miss the operating, >>But they have to have their own to your point. They have to have their own software innovation. >>They have to good infrastructure ways to build applications. I don't wanna build with somebody else. I don't wanna take a Microsoft stack on open source stack. I'm not sure if it's gonna work with HP. So they have to have an app dev answer. I absolutely agree with that. And the, the big thing for the partners is, which is a good thing, right? Yep. HPE will not move into applications. Right? You don't have to have the fear of where Microsoft is with their vocal large. Right. If AWS kind of like comes up with APIs and manufacturing, right. Google the same thing with their vertical push. Right. So HPE will not have the CapEx, but >>Application, >>As I SV making them, the partner, the bonus of being able to on premise is an attractive >>Part. That's a great point. >>Hold. So that's an inflection point for next 12 months to watch what we see absolutely running on GreenLake. >>Yeah. And I think one of the things that came out of the, the last couple events this past year, and I'll bring this up, we'll table it and we'll watch it. And it's early in this, I think this is like even, not even the first inning, the machine learning AI impact to the industrial piece. I think we're gonna see a, a brand new era of accelerated digital transformation on the industrial physical world, back office, cloud data center, accounting, all the stuff. That's applications, the app, the real world from space to like robotics. I think that HP edge opportunity is gonna be visible and different. >>So guys, Antonio Neri is on tomorrow. This is only day one. If you can imagine this power panel on day one, can you imagine tomorrow? 
What is your last question for each of you? What is your, what, what question would you want to ask him tomorrow? Hold start with you. >>How is HPE winning in the long run? Because we know their on premise market will shrink, right? And they can out execute Dell. They can out execute Lenovo. They can out Cisco and get a bigger share of the shrinking market. But that's the long term strategy, right? So why should I buy HPE stock now and have a good return put in the, in the safe and forget about it and have a great return 20 years from now? What's the really long term strategy might be unfair because they, they ran in survival mode to a certain point out of the mass post equipment situation. But what is really the long term strategy? Is it more on the hardware side? Is it gonna go on the HPE, the frontier side? It's gonna be a DNA question, which I would ask Antonio. >>John, >>I would ask him what relative to the macro conditions relative to their customer base, I'd say, cuz the customers are the scoreboard. Can they create a value proposition with their, I use the Microsoft 365 example how they kind of went to the cloud. So my question would be Antonio, what is your core value proposition to CIOs out there who want to transform and take a step function, increase for value with HPE? Tell me that story. I wanna hear. And I don't want to hear, oh, we got a portfolio and no, what value are you enabling your customers to do? >>What and what should that value be? >>I think it's gonna be what we were kind of riffing on, which is you have to provide either what their product market fit needs are, which is, are you solving a problem? Is it a pain point is a growth driver. Uh, and what's the, what's that tailwind. And it's obviously we know at cloud we know edge. The story is great, but what's the value proposition. But by going with HPE, you get X, Y, and Z. If they can explain that clearly with real, so qualitative and quantitative data it's home >>Run. He had a great line of the analyst summit today where somebody asking questions, I'm just listening to the customer. So be ready for this Steve jobs photo, listening to the customer. You can't build something great listening to the customer. You'll be good for the next quarter. The next exponential >>Say, what are the customers saying? <laugh> >>So I would make an observation. And my question would, so my observation would be cloud is growing collectively at 35%. It's, you know, it's approaching 200 billion with a big, big four. If you include Alibaba, IBM has actually said, Hey, we're gonna gr they've promised 6% growth. Uh, Cisco I think is at eight or 9% growth. Dow's growing in double digits. Antonio and HPE have promised three to 4% growth. So what do you have to do to actually accelerate growth? Because three to 4%, my view, not enough to answer Holger's question is why should I buy HPE stock? Well, >>If they have product, if they have customer and there's demand and traction to me, that's going to drive the growth numbers. And I think the weak side of the forecast means that they don't have that fit yet. >>Yeah. So what has to happen for them to get above five, 6% growth? >>That's what we're gonna analyze. I mean, I, I mean, I don't have an answer for that. I wish I had a better answer. I'd tell them <laugh> but I feel, it feels, it feels like, you know, HP has an opportunity to say here's the new HPE. Yeah. Okay. And this is what we stand for. 
And here's the one thing that we're going to do that consistently drives value for you, the customer. And that's gonna have to come into some, either architectural cloud shift or a data thing, or we are your store for blank. >>All of the above. >>I guess the other question is, would, would you know, he won't answer a rude question, would suspending things like dividends and stock buybacks and putting it into R and D. I would definitely, if you have confidence in the market and you know what to do, why wouldn't you just accelerate R and D and put the money there? IBM, since 2007, IBM spent is the last stat. And I'm looking go in 2007, IBM way, outspent, Google, and Amazon and R and D and, and CapEx two, by the way. Yep. Subsequent to that, they've spent, I believe it's the numbers close to 200 billion on stock buyback and dividends. They could have owned cloud. And so look at this business, the technology business by and large is driven by innovation. Yeah. And so how do you innovate if >>You have I'm buying, I'm buying HP because they're reliable high quality and they have the outcomes that I want. Oh, >>Buy their products and services. I'm not sure I'd buy the stock. Yeah. >>Yeah. But she has to answer ultimately, because a public company. Right. So >>Right. It's this job. Yeah. >>Never a dull moment with the three of you around <laugh> guys. Thank you so much for sharing your insights, your, an analysis from day one. I can't imagine what day two is gonna bring tomorrow. Debut and I are gonna be anchoring here. We've got a jam packed day, lots going on, hearing from the ecosystem from leadership. As we mentioned, Antonio is gonna be Tony >>Alma Russo. I'm dying. Dr. >>EDMA as well as on the CTO gonna be another action pack day. I'm excited for it, guys. Thanks so much for sharing your insights and for letting me join this power panel. >>Great. Great to be here. >>Power panel plus me. All right. For Holger, John and Dave, I'm Lisa, you're watching the cube our day one coverage of HPE discover wraps right now. Don't go anywhere, cuz we'll see you tomorrow for day two, live from Vegas, have a good night.
Justin Hotard, HPE | HPE Discover 2022
>>Announcer: theCUBE presents HPE Discover 2022, brought to you by HPE. >>Hey everyone. Welcome back to theCUBE's coverage of HPE Discover '22, live from the Sands Expo Center in Las Vegas. Lisa Martin here with Dave Vellante. We've got an alumnus back joining us to talk about high performance computing and AI: Justin Hotard, EVP and general manager of HPC and AI at HPE. That's a mouthful. Welcome back. >>It is. No, it's great to be back, and wow, it's great to be back in person as well. >>It's life changing to be back in person. The keynote this morning was great. Dave was saying the energy that he's seen is probably the most of any Discover that he's been at, and we've been feeling that, and it's only day one. >>Yeah, I agree. And I think it's a testament to the places in the market where we're leading and the innovation we're driving. I mean, obviously the leadership in HPE GreenLake and enabling as a service for every customer, not just those in the public cloud, providing that capability. And then obviously what we're doing in HPC and AI, breaking records and advancing the industry. So >>I just saw the Q2 numbers, nice revenue growth there for HPC and AI. Talk to us about the lay of the land. What's going on, what are customers excited about? >>Yeah. You know, it's a really fascinating time in this business, because we just delivered the first, the world's first exascale system. Right. And that's a huge milestone for our industry, a breakthrough. You know, 13 years ago we did the first petascale system. Now we're doing the first exascale system, a huge advance forward. But what's exciting too is these systems are enabling new applications, new workloads, breakthroughs in AI, the beginning of being able to do proper quantum simulations, which will lead us to a much brighter future with quantum, and then actually better and more granular models, which have the ability to really change the world. >>I was telling Lisa that during the pandemic we did Exascale Day; it was like this co-produced event. And we weren't quite at exascale yet, but we could see it coming. And so it was great to see, with Frontier in the keynote, you guys broke through that. Is that a natural evolution of HPC, or are we entering a new era? >>Yeah, I think it's a new era, and I think it's a new era for a few reasons, because that breakthrough really starts to enable a different class of use cases. And it's combined with the fact that, you know, you look at where the rest of the enterprise's data has gone, right? We've got a lot more data, a lot more visibility into data, but we don't know how to use it. And now with this computing power, we can start to create new insights and new applications. And so I think this is gonna be a path to making HPC more broadly available. And of course it introduces AI models at scale. And that's really critical, cuz AI is a buzzword. I mean, lots of people say they're doing AI, but to build true intelligence, not effectively, you know, a machine that learns data and then can only handle that data, but to build true intelligence where you've got something that can continue to learn and understand and grow and evolve, you need this class of system. And so I think we're at the forefront of a lot of exciting innovation.
H how, >>In terms of innovation, how important is it that you're able to combine as a service and HPC? Uh, what does that mean for, for customers for experimentation and innovation? >>You know, a couple things I've been, I've actually been talking to customers about that over the last day and a half. And, you know, one is, um, you think about these, these systems are, they're very large and, and they're, they're pretty, you know, pretty big bets if you're a customer. So getting early access to them right, is, is really key, making sure that they're, they can migrate their software, their applications, again, in our space, most of our applications are custom built, whether you're a, you know, a government or a private sector company, that's using these systems, you're, you're doing these are pretty specialized. So getting that early access is important. And then actually what we're seeing is, uh, with the growth and explosion of insight that we can enable. And some of the diversity of, you know, new, um, accelerator partners and new processors that are on the market is actually the attraction of diversity. And so making things available where customers can use multimodal systems. And we've seen that in this era, like our customer Lumi and Finland number, the number three fastest system in the world actually has two sides to their system. So there's a compute side, dense compute side and a dense accelerator side. >>So Oak Ridge national labs was on stage with Antonio this morning, the, the talking about frontier, the frontier system, I thought what a great name, very apropo, but it was also just named the number one to the super computing, top 500. That's a pretty big accomplishment. Talk about the impact of what that really means. >>Yeah. I, I think a couple things, first of all, uh, anytime you have this breakthrough of number one, you see a massive acceleration of applications. And if you really, if you look at the applications that were built, because when the us department of energy funded these Exoscale products or platforms, they also funded app a set of applications. And so it's the ability to get more accurate earth models for long term climate science. It's the ability to model the electrical grid and understand better how to build resiliency into that grid. His ability is, um, Dr. Te Rossi talked about a progressing, you know, cancer research and cancer breakthroughs. I mean, there's so many benefits to the world that we can bring with these systems. That's one element. The other big part of this breakthrough is actually a list, a lesser known list from the top 500 called the green 500. >>And that's where we measure performance over power consumption. And what's a huge breakthrough in this system. Is that not only to frontier debut at number one on the top 500, it's actually got the top two spots, uh, because it's got a small test system that also is up there, but it's got the top two spots on the green 500 and that's actually a real huge breakthrough because now we're doing a ton more computation at far lesser power. And that's really important cuz you think about these systems, ultimately you can, you can't, you know, continue to consume power linearly with scaling up performance. There's I mean, there's a huge issue on our impact on our environment, but it's the impact to the power grid. It's the impact to heat dissipation. There's a lot of complexities. 
So this breakthrough with Frontier also enables us, no pun intended, to really accelerate, you know, the capacity and scale of these systems and what we can deliver. >>It feels like we're entering a new renaissance of HPC. I mean, I'm old enough to remember; it wasn't until recently, well, maybe five, six years ago, that my wife threw out my green Thinking Machines T-shirt that Danny Hillis gave me. You guys are probably both too young to remember, but you had Thinking Machines, Kendall Square Research; Convex tried to build mini-supercomputer HPC. Okay. And there was a lot of innovation going on around that time, and then it just became too expensive, and other things, x86, happened. But it feels like now we're entering a new era of HPC. Is that valid? What does that mean for HPC as an industry and for industry? >>Yeah, I think it's a breadth. It's a market that's opening and getting much broader in the number of applications you can run. You know, we've traditionally had scientific applications; obviously there's a ton in energy and physics and some of the traditional areas that the Department of Energy sponsors. But, you know, we saw this even with the COVID pandemic, right? Our supercomputers were used to identify the spike protein, to help validate and test these vaccines and bring them to market in record time. We saw some of the benefits of these breakthroughs. And I think it's this combination: we actually have the data, you know, it's digital, it's captured, we're capturing it at the edge, we're capturing it and storing it obviously more broadly. So we have access to the data, and now we have the compute power to run it. And the other big thing is the techniques around artificial intelligence. I mean, what we're able to do with neural networks, computer vision, large language models, natural language processing. These are breakthroughs that, one, require these large systems, but two, as you give them large systems, you can really accelerate how sophisticated these applications can get. >>Let's talk about the impact of the convergence of HPC and AI. What are some of the things that you're seeing now, and what are some of the things that we're gonna see? >>Yeah. So one thing I like to talk about is, it's really not a convergence; I think sometimes it gets a little bit oversimplified. It's actually traditional modeling and simulation leveraging machine learning to refine the simulation. And this is one of the things we talk about a lot in AI, right? It's using machine learning to actually create code in real time, rather than humans doing it, that ability to refine the model as you're running. So we have an example. We actually launched an open source solution called SmartSim, and the first application of that was climate science. What it's doing is actually learning the data from the model as the simulation is running, to provide more accurate climate prediction. But you think about that; that could be run for anything that has a complex model. You could run that for financial modeling, you can use AI. And so we're seeing things like that. And I think we'll continue to see that. The other side of that is using modeling and simulation to actually represent what you see in AI.
So we were talking about the grid. This is one of the Exoscale compute projects you could actually use once you actually get, get the data and you can start modeling the behavior of every electrical endpoint in a city. You know, the, the meter in your house, the substation, the, the transformers, you can start measuring the FX of that. You can then build equations. Well, once you build those equations, you can then take a model, cuz you've learned what actually happens in the real world, build the equation. And then you can provide that to someone who doesn't need a extra scale supercomputer to run it, but that, you know, your local energy company can better understand what's happening and they'll know, oh, there's a problem here. We need to shift the grid or respond more, more dynamically. And hopefully that avoids brownouts or, you know, some of the catastrophic outages we've >>Seen so they can deploy that model, which, which inherently has that intelligence on sort of more cost effective systems and then apply it to a much broader range. Do any of those, um, smart simulations on, on climate suggest that it's, it's all a hoax. You don't have to answer that question. <laugh> um, what, uh, >>The temperature outside Dave might, might give you a little bit of an argument to that. >>Tell us about quantum, what's your point of view there? Is it becoming more stable? What's H HPE doing there? >>Yeah. So, so look, I think there's, there's two things to understand with quantum there's quantum hardware, right? Fundamentally, um, how, um, how that runs very differently than, than how we run traditional computers. And then there's the applications. And ultimately a quantum application on quantum hardware will be far more efficient in the future than, than anything else. We, we see the opportunity for, uh, much like we see with, you know, with HPC and AI, we just talked about for quantum to be complimentary. It runs really well with certain applications that fabricate themselves as quantum problems and some great examples are, you know, the, the life sciences, obviously quantum chemistry, uh, you see some, actually some opportunities in, in, uh, in AI and in other areas where, uh, quantum has a very, very, it, it just lends itself more naturally to the behavior of the problem. And what we believe is that in the short term, we can actually model quantum effectively on these, on these super computers, because there's not a perfect quantum hardware replacement over time. You know, we would anticipate that will evolve and we'll see quantum accelerators much. Like we see, you know, AI accelerators today in this space. So we think it's gonna be a natural evolution in progression, but there's certain applications that are just gonna be solved better by quantum. And that's the, that's the future we'll we'll run into. And >>You're suggesting if I understood it correctly, you can start building those applications and, and at least modeling what those applications look like today with today's technology. That's interesting because I mean, I, I think it's something rudimentary compared to quantum as flash storage, right? When you got rid of the spinning disc, it changed the way in which people thought about writing applications. So if I understand it, new applications that can take advantage of quantum are gonna change the way in which developers write, not one or a zero it's one and virtually infinite <laugh> combinations. >>Yeah. 
And I actually, I think that's, what's compelling about the opportunity is that you can, if you think about a lot of traditional the traditional computing industry, you always had to kind of wait for the hardware to be there, to really write, write, and test the application. And we, you know, we even see that with our customers and HPC and, and AI, right? They, they build a model and then they, they actually have to optimize it across the hardware once they deploy it at scale. And with quantum what's interesting is you can actually, uh, you can actually model and, and, and make progress on the software. And then, and then as the hardware becomes available, optimize it. And that's, you know, that's why we see this. We talk about this concept of quantum accelerators as, as really interesting, >>What are the customer conversations these days as there's been so much evolution in HPC and AI and the technology so much change in the world in the last couple of years, is it elevating up the CS stack in terms of your conversations with customers wanting to become familiar with Exoscale computing? For example? >>Yeah. I, I think two things, uh, one, one is we see a real rise in digital sovereignty and Exoscale and HPC as a core fund, you know, fundamental foundation. So you see what, um, you know, what Europe is doing with the, the, the Euro HPC initiative, as one example, you know, we see the same kind of leadership coming out of the UK with the system. We deployed with them in Archer two, you know, we've got many customers across the globe deploying next generation weather forecasting systems, but everybody feels, they, they understand the foundation of having a strong supercomputing and HPC capability and competence and not just the hardware, the software development, the scientific research, the, the computational scientists to enable them to remain competitive economically. It's important for defense purposes. It's important for, you know, for helping their citizens, right. And providing, you know, providing services and, and betterment. >>So that's one, I'd say that's one big theme. The other one is something Dave touched on before around, you know, as a service and why we think HP GreenLake will be, uh, a beautiful marriage with our, with our HPC and AI systems over time, which is customers also, um, are going to scale up and build really complex models. And then they'll simplify them and deploy them in other places. And so there's a number of examples. We see them, you know, we see them in places like oil and gas. We see them in manufacturing where I've gotta build a really complex model, figure out what it looks like. Then I can reduce it to a, you know, to a, uh, certain equation or application that I can then deploy. So I understand what's happening and running because you, of course, as much as I would love it, you're not gonna have, uh, every enterprise around the world or every endpoint have an exit scale system. Right. So, so that ability to, to, to really provide an as a service element with HP GreenLake, we think is really compelling. >>HP's move into HPC, the acquisitions you've made it really have become a differentiator for the company. Hasn't it? >>Yeah. And I, and I think what's unique about us today. If you look at the landscape is we're, we're really the only system provider globally. Yeah. You know, there are, there are local players that we compete with. Um, but we are the one true global system provider. 
And we're also the only, I would say the only holistic innovator at the system level to, to, you know, to credit my team on the work they're doing. But, you know, we're, we're also very committed to open standards. We're investing in, um, you know, in a number of places where we contribute the dev the software assets to open source, we're doing work with standards bodies to progress and accelerate the industry and enable the ecosystem. And, uh, and I think that, you know, ultimately the, the, the last thing I'd say is we, we are so connected in, um, with, through our, through the legacy or the, the legend of H Hewlett Packard labs, which now also reports into me that we have these really tight ties into advanced research and that some of that advanced research, which isn't just, um, around kind of core processing Silicon is really critical to enabling better applications, better use cases and accelerating the outcomes we see in these systems going forward. >>Can >>You double click on that? I, I, I wasn't aware that kind of reported into your group. Yeah. So, you know, the roots of HP are invent, right? Yeah. HP labs are, are renowned. It kinda lost that formula for a while. And now it's sounds like it's coming back. What, what, what are some of the cool things that you guys are working on? Well, >>You know, let me, let me start with a little bit of recent history. So we just talked about the exo scale program. I mean, that was a, that's a great example of where we had a public private partnership with the department of energy and it, and it wasn't just that we, um, you know, we built a system and delivered it, but if you go back a decade ago, or five years ago, there were, there were innovations that were built, you know, to accelerate that system. One is our Slingshot fabric as an example, which is a core enable of, of acceler, you know, of, of this accelerated computing environment, but others in software applications and services that allowed us to, you know, to really deliver a, a complete solution into the market. Um, today we're looking at things around trustworthy and ethical AI, so trustworthy AI in the sense that, you know, the models are accurate, you know, and that's, that's a challenge on two dimensions, cuz one is the, model's only as good as the data it's studying. >>So you need to validate that the data's accurate and then you need to really study how, you know, how do I make sure that even if the data is accurate, I've got a model that then, you know, is gonna predict the right things and not call a, a dog, a cat, or a, you know, a, a cat, a mouse or whatever that is. But so that's important. And, uh, so that's one area. The other is future system architectures because, um, as we've talked about before, Dave, you have this constant tension between the fabric, uh, you know, the interconnect, the compute and the, and the storage and, you know, constant, constantly balancing it. And so we're really looking at that, how do we do more, you know, shared memory access? How do we, you know, how do we do more direct rights? Like, you know, looking at some future system architectures and thinking about that. And we, you know, we think that's really, really critical in this part of the business because these heterogeneous systems, and not saying I'm gonna have one monolithic application, but I'm gonna have applications that need to take advantage of different code, different technologies at different times. 
And being able to move that seamlessly across the architecture, uh, we think is gonna be the, you know, a part of the, the hallmark of the Exoscale era, including >>Edge, which is a completely different animal. I think that's where some disruption is gonna gonna bubble up here in the next decade. >>So, yeah know, and, and that's, you know, that's the last thing I'd say is, is we look at AI at scale, which is another core part of the business that can run on these large clusters. That means getting all the way down to the edge and doing inference at scale, right. And, and inference at scale is, you know, I, I was, um, about a month ago, I was at the world economic forum. We were talking about the space economy and it's a great, you know, to me, it's the perfect example of inference, because if you get a set of data that you know, is, is out at Mars, it doesn't matter whether, you know, whether you wanna push all that data back to, uh, to earth for processing or not. You don't really have a choice, cuz it's just gonna take too long. >>Don't have that time. Justin, thank you so much for spending some of your time with Dave and me talking about what's going on with HBC and AI. The frontier just seems endless and very exciting. We appreciate your time on your insights. >>Great. Thanks so much. Thanks. >>Yes. And don't call a dog, a cat that I thought I learned from you. A dog at no, Nope. <laugh> Nope. <laugh> for Justin and Dave ante. I'm Lisa Martin. You're watching the Cube's coverage of day one from HPE. Discover 22. The cube is, guess what? The leader, the leader in live tech coverage will be right back with our next guest.
Keith White, HPE | HPE Discover 2022
>> Announcer: theCube presents HPE Discover 2022, brought to you by HPE. >> Hey, everyone. Welcome back to Las Vegas. This is Lisa Martin with Dave Vellante live at HPE Discover '22. Dave, it's great to be here. This is the first Discover in three years and we're here with about 7,000 of our closest friends. >> Yeah. You know, I tweeted out this, I think I've been to 14 Discovers between the U.S. and Europe, and I've never seen a Discover with so much energy. People are not only psyched to get back together, that's for sure, but I think HPE's got a little spring in its step and it's feeling more confident than maybe some of the past Discovers that I've been to. >> I think so, too. I think there's definitely a spring in the step and we're going to be unpacking some of that spring next with one of our alumni who joins us, Keith White's here, the executive vice president and general manager of GreenLake Cloud Services. Welcome back. >> Great. You all thanks for having me. It's fantastic that you're here and you're right, the energy is crazy at this show. It's been a lot of pent up demand, but I think what you heard from Antonio today is our strategy's changing dramatically and it's really embracing our customers and our partners. So it's great. >> Embracing the customers and the partners, the ecosystem expansion is so critical, especially the last couple of years with the acceleration of digital transformation. So much challenge in every industry, but lots of momentum on the GreenLake side, I was looking at the Q2 numbers, triple digit growth in orders, 65,000 customers over 70 services, eight new services announced just this morning. Talk to us about the momentum of GreenLake. >> The momentum's been fantastic. I mean, I'll tell you, the fact that customers are really now reaccelerating their digital transformation, you probably heard a lot, but there was a delay as we went through the pandemic. So now it's reaccelerating, but everyone's going to a hybrid, multi-cloud environment. Data is the new currency. And obviously, everyone's trying to push out to the Edge and GreenLake is that edge to cloud platform. So we're just seeing tons of momentum, not just from the customers, but partners, we've enabled the platform so partners can plug into it and offer their solutions to our customers as well. So it's exciting and it's been fun to see the momentum from an order standpoint, but one of the big numbers that you may not be aware of is we have over a 96% retention rate. So once a customer's on GreenLake, they stay on it because they're seeing the value, which has been fantastic. >> The value is absolutely critically important. We saw three great big name customers. The Home Depot was on stage this morning, Oak Ridge National Laboratory was as well, Evil Geniuses. So the momentum in the enterprise is clearly present. >> Yeah. It is. And we're hearing it from a lot of customers. And I think you guys talk a lot about, hey, there's the cloud, data and Edge, these big mega trends that are happening out there. And you look at a company like Barclays, they're actually reinventing their entire private cloud infrastructure, running over a hundred thousand workloads on HPE GreenLake. Or you look at a company like Zenseact, who's basically they do autonomous driving software. So they're doing massive parallel computing capabilities. They're pulling in hundreds of petabytes of data to then make driving safer and so you're seeing it on the data front. 
And then on the Edge, you look at anyone like a Patrick Terminal, for example. They run a whole terminal shipyard. They're getting data in from exporters, importers, regulators, the works and they have to real-time, analyze that data and say, where should this thing go? Especially with today's supply chain challenges, they have to be so efficient, that it's just fantastic. >> It was interesting to hear Fidelma, Keith, this morning on stage. It was the first time I'd really seen real clarity on the platform itself and that it's obviously her job is, okay, here's the platform, now, you guys got to go build on top of it. Both inside of HPE, but also externally, so your ecosystem partners. So, you mentioned the financial services companies like Barclays. We see those companies moving into the digital world by offering some of their services in building their own clouds. >> Keith: That's right. >> What's your vision for GreenLake in terms of being that platform, to assist them in doing that and the data component there? >> I think that was one of the most exciting things about not just showcasing the platform, but also the announcement of our private cloud enterprise, Cloud Service. Because in essence, what you're doing is you're creating that framework for what most companies are doing, which is they're becoming cloud service providers for their internal business units. And they're having to do showback type scenarios, chargeback type scenarios, deliver cloud services and solutions inside the organization so that open platform, you're spot on. For our ecosystem, it's fantastic, but for our customers, they get to leverage it as well for their own internal IT work that's happening. >> So you talk about hybrid cloud, you talk about private cloud, what's your vision? You know, we use this term Supercloud. This in a layer that goes across clouds. What's your thought about that? Because you have an advantage at the Edge with Aruba. Everybody talks about the Edge, but they talk about it more in the context of near Edge. >> That's right. >> We talked to Verizon and they're going far Edge, you guys are participating in that, as well as some of your partners in Red Hat and others. What's your vision for that? What I call Supercloud, is that part of the strategy? Is that more longer term or you think that's pipe dream by Dave? >> No, I think it's really thoughtful, Dave, 'cause it has to be part of the strategy. What I hear, so for example, Ford's a great example. They run Azure, AWS, and then they made a big deal with Google cloud for their internal cars and they run HPE GreenLake. So they're saying, hey, we got four clouds. How do we sort of disaggregate the usage of that? And Chris Lund, who is the VP of information technology at Liberty Mutual Insurance, he talked about it today, where he said, hey, I can deliver these services to my business unit. And they don't know, am I running on the public cloud? Am I running on our HPE GreenLake cloud? Like it doesn't matter to the end user, we've simplified that so much. So I think your Supercloud idea is super thoughtful, not to use the super term too much, that I'm super excited about because it's really clear of what our customers are trying to accomplish, which it's not about the cloud, it's about the solution and the business outcome that gets to work. >> Well, and I think it is different. 
I mean, it's not like the last 10 years where it was like, hey, I got my stuff to work on the different clouds and I'm replicating as much as I can, the cloud experience on-prem. I think you guys are there now and then to us, the next layer is that ecosystem enablement. So how do you see the ecosystem evolving and what role does Green Lake play there? >> Yeah. This has been really exciting. We had Tarkan Maner who runs Nutanix and Karl Strohmeyer from Equinix on stage with us as well. And what's happening with the ecosystem is, I used to say, one plus one has to equal three for our customers. So when you bring these together, it has to be that scenario, but we are joking that one plus one plus one equals five now because everything has a partner component to it. It's not about the platform, it's not about the specific cloud service, it's actually about the solution that gets delivered. And that's done with an ISV, it's done with a Colo, it's done even with the Hyperscalers. We have Azure Stack HCI as a fully integrated solution. It happens with managed service providers, delivering managed services out to their folks as well. So that platform being fully partner enabled and that ecosystem being able to take advantage of that, and so we have to jointly go to market to our customers for their business needs, their business outcomes. >> Some of the expansion of the ecosystem. we just had Red Hat on in the last hour talking about- >> We're so excited to partner with them. >> Right, what's going on there with OpenShift and Ansible and Rel, but talk about the customer influence in terms of the expansion of the ecosystem. We know we've got to meet customers where they are, they're driving it, but we know that HPE has a big presence in the enterprise and some pretty big customer names. How are they from a demand perspective? >> Well, this is where I think the uniqueness of GreenLake has really changed HPE's approach with our customers. Like in all fairness, we used to be a vendor that provided hardware components for, and we talked a lot about hardware costs and blah, blah, blah. Now, we're actually a partner with those customers. What's the business outcome you're requiring? What's the SLA that we offer you for what you're trying to accomplish? And to do that, we have to have it done with partners. And so even on the storage front, Qumulo or Cohesity. On the backup and recovery disaster recovery, yes, we have our own products, but we also partner with great companies like Veeam because it's customer choice, it's an open platform. And the Red Hat announcement is just fantastic. Because, hey, from a container platform standpoint, OpenShift provides 5,000 plus customers, 90% of the fortune 500 that they engage with, with that opportunity to take GreenLake with OpenShift and implement that container capabilities on-prem. So it's fantastic. >> We were talking after the keynote, Keith Townsend came on, myself and Lisa. And he was like, okay, what about startups? 'Cause that's kind of a hallmark of cloud. And we felt like, okay, startups are not the ideal customer profile necessarily for HPE. Although we saw Evil Geniuses up on stage, but I threw out and I'd love to get your thoughts on this that within companies, incumbents, you have entrepreneurs, they're trying to build their own clouds or Superclouds as I use the term, is that really the target for the developer audience? 
We've talked a lot about OpenShift with their other platforms, who says as a partner- >> We just announced another extension with Rancher and- >> Yeah. I saw that. And you have to have optionality for developers. Is that the way we should think about the target audience from a developer standpoint? >> I think it will be as we go forward. And so what Fidelma presented on stage was the new developer platform, because we have come to realize, we have to engage with the developers. They're the ones building the apps. They're the ones that are delivering the solutions for the most part. So yeah, I think at the enterprise space, we have a really strong capability. I think when you get into the sort of mid-market SMB standpoint, what we're doing is we're going directly to the managed service and cloud service providers and directly to our Disty and VARS to have them build solutions on top of GreenLake, powered by GreenLake, to then deliver to their customers because that's what the customer wants. I think on the developer side of the house, we have to speak their language, we have to provide their capabilities because they're going to start articulating apps that are going to use both the public cloud and our on-prem capabilities with GreenLake. And so that's got to work very well. And so you've heard us talk about API based and all of that sort of scenario. So it's an exciting time for us, again, moving HPE strategy into something very different than where we were before. >> Well, Keith, that speaks to ecosystem. So I don't know if you were at Microsoft, when the sweaty Steve Ballmer was working with the developers, developers. That's about ecosystem, ecosystem, ecosystem. I don't expect we're going to see Antonio replicating that. But that really is the sort of what you just described is the ecosystem developing on top of GreenLake. That's critical. >> Yeah. And this is one of the things I learned. So, being at Microsoft for as long as I was and leading the Azure business from a commercial standpoint, it was all about the partner and I mean, in all fairness, almost every solution that gets delivered has some sort of partner component to it. Might be an ISV app, might be a managed service, might be in a Colo, might be with our hybrid cloud, with our Hyperscalers, but everything has a partner component to it. And so one of the things I learned with Azure is, you have to sell through and with your ecosystem and go to that customer with a joint solution. And that's where it becomes so impactful and so powerful for what our customers are trying to accomplish. >> When we think about the data gravity and the value of data that put massive potential that it has, even Antonio talked about it this morning, being data rich but insights poor for a long time. >> Yeah. >> Every company in today's day and age has to be a data company to be competitive, there's no more option for that. How does GreenLake empower companies? GreenLake and its ecosystem empower companies to really live being data companies so that they can meet their customers where they are. >> I think it's a really great point because like we said, data's the new currency. Data's the new gold that's out there and people have to get their arms around their data estate. So then they can make these business decisions, these business insights and garner that. And Dave, you mentioned earlier, the Edge is bringing a ton of new data in, and my Zenseact example is a good one. 
But with GreenLake, you now have a platform that can do data and data management and really sort of establish and secure the data for you. There's no data latency, there's no data egress charges. And which is what we typically run into with the public cloud. But we also support a wide range of databases, open source, as well as the commercial ones, the sequels and those types of scenarios. But what really comes to life is when you have to do analytics on that and you're doing AI and machine learning. And this is one of the benefits I think that people don't realize with HPE is, the investments we've made with Cray, for example, we have and you saw on stage today, the largest supercomputer in the world. That depth that we have as a company, that then comes down into AI and analytics for what we can do with high performance compute, data simulations, data modeling, analytics, like that is something that we, as a company, have really deep, deep capabilities on. So it's exciting to see what we can bring to customers all for that spectrum of data. >> I was excited to see Frontier, they actually achieve, we hosted an event, co-produced event with HPE during the pandemic, Exascale day. >> Yeah. >> But we weren't quite at Exascale, we were like right on the cusp. So to see it actually break through was awesome. So HPC is clearly a differentiator for Hewlett Packard Enterprise. And you talk about the egress. What are some of the other differentiators? Why should people choose GreenLake? >> Well, I think the biggest thing is, that it's truly is a edge to cloud platform. And so you talk about Aruba and our capabilities with a network attached and network as a service capabilities, like that's fairly unique. You don't see that with the other companies. You mentioned earlier to me that compute capabilities that we've had as a company and the storage capabilities. But what's interesting now is that we're sort of taking all of that expertise and we're actually starting to deliver these cloud services that you saw on stage, private cloud, AI and machine learning, high performance computing, VDI, SAP. And now we're actually getting into these industry solutions. So we talked last year about electronic medical records, this year, we've talked about 5g. Now, we're talking about customer loyalty applications. So we're really trying to move from these sort of baseline capabilities and yes, containers and VMs and bare metal, all that stuff is important, but what's really important is the services that you run on top of that, 'cause that's the outcomes that our customers are looking at. >> Should we expect you to be accelerating? I mean, look at what you did with Azure. You look at what AWS does in terms of the feature acceleration. Should we expect HPE to replicate? Maybe not to that scale, but in a similar cadence, we're starting to see that. Should we expect that actually to go faster? >> I think you couched it really well because it's not as much about the quantity, but the quality and the uses. And so what we've been trying to do is say, hey, what is our swim lane? What is our sweet spot? Where do we have a superpower? And where are the areas that we have that superpower and how can we bring those solutions to our customers? 'Cause I think, sometimes, you get over your skis a bit, trying to do too much, or people get caught up in the big numbers, versus the, hey, what's the real meat behind it. What's the tangible outcome that we can deliver to customers? And we see just a massive TAM. 
I want to say my last analysis was around $42 billion in the next three years, TAM in the as-a-service on-prem space. And so we think that there's nothing but upside with the core set of workloads, the core set of solutions and the cloud services that we bring. So yeah, we'll continue to innovate, absolutely, amen, but we're not in a, hey we got to get to 250 this and 300 that, we want to keep it as focused as we can. >> Well, the vast majority of the revenue in the public cloud is still compute. I mean, notwithstanding, Microsoft obviously does a lot in SaaS, but I'm talking about infrastructure as a service. Still, well, I would say over 50%. And so there's a lot of the services that don't make any revenue and there's that long tail, if I hear your strategy, you're not necessarily going after that. You're focusing on the quality of those high value services and let the ecosystem sort of bring in the rest. >> This is where I think the, I mean, I love that you guys are asking me about the ecosystem because this is where their sweet spot is. They're the experts on hyper-converged, or databases as a service, or VDI, or even with SAP, like they're the experts on that piece of it. So we're enabling that together to our customers. And so I don't want to give you the impression that we're not going to innovate. Amen. We absolutely are, but we want to keep it within that, that again, our swim lane, where we can really add true value based on our expertise and our capabilities so that we can confidently go to customers and say, hey, this is a solution that's going to deliver this business value or this capability for you. >> The partners might be more comfortable with that than in the public cloud, where you sleep with one eye open, like, okay, which value of mine are they going to grab next? >> You're spot on. And again, this is where I think, the power of what an Edge to cloud platform like HPE GreenLake can do for our customers, because it is that sort of, I mentioned it, one plus one equals three kind of scenario for our customers so. >> So let's close on your customers, last question, Keith. I know we're only on day one of the main summit, the partner growth summit was yesterday. What's the feedback been from the customers and the ecosystem in terms of validating the direction that HPE is going? >> Well, I think the fantastic thing has been to hear from our customers. So I mentioned in my keynote recently, we had Liberty Mutual and we had Texas Children's Hospital, and they're implementing HPE GreenLake in a variety of different ways, from a private cloud standpoint to a data center consolidation. They're seeing sustainability goals happen on top of that. They're seeing us take on management for them so they can take their limited resources and go focus them on innovation and value added scenarios. So the flexibility and cost that we're providing, and it's just fantastic to hear this come to life in a real customer scenario because what Texas Children is trying to do is improve patient care for women and children, like, who can argue with that? >> Nobody. >> So, yeah. It's great. >> Awesome. Keith, thank you so much for joining Dave and me on the program, talking about all of the momentum with HPE Greenlake. >> Always. >> You can't walk in here without feeling the momentum. We appreciate your insights and your time. >> Always. Thank you for the time. Yeah. Great to see you as well. >> Likewise. >> Thanks. >> For Keith White and Dave Vellante, I'm Lisa Martin.
You're watching theCube live, day one coverage from the show floor at HPE Discover '22. We'll be right back with our next guest. (gentle music)
SUMMARY :
brought to you by HPE. This is the first Discover in three years I think I've been to 14 Discovers a spring in the step and the energy is crazy at this show. and the partners, and GreenLake is that So the momentum in the And I think you guys talk a lot about, on the platform itself and and solutions inside the organization at the Edge with Aruba. that part of the strategy? and the business outcome I mean, it's not like the last and so we have to jointly go Some of the expansion of the ecosystem. to partner with them. in terms of the expansion What's the SLA that we offer you that really the target Is that the way we should and all of that sort of scenario. But that really is the sort and leading the Azure business gravity and the value of data so that they can meet their and secure the data for you. with HPE during the What are some of the and the storage capabilities. in terms of the feature acceleration. and the cloud services that we bring. and let the ecosystem I love that you guys are the power of what an and the ecosystem in terms So the flexibility and It's great. about all of the momentum We appreciate your insights and your time. Great to see you as well. from the show floor at HPE Discover '22.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Steve Ballmer | PERSON | 0.99+ |
Chris Lund | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Barclays | ORGANIZATION | 0.99+ |
Keith White | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
GreenLake | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Karl Strohmeyer | PERSON | 0.99+ |
Zenseact | ORGANIZATION | 0.99+ |
Liberty Mutual Insurance | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
GreenLake Cloud Services | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Tarkan Maner | PERSON | 0.99+ |
65,000 customers | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
this year | DATE | 0.99+ |
Evil Geniuses | TITLE | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
Texas Children's Hospital | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Liberty Mutual | ORGANIZATION | 0.99+ |
around $42 billion | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
Aruba | ORGANIZATION | 0.99+ |
eight new services | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Texas Children | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Home Depot | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.98+ |
Equinix | ORGANIZATION | 0.98+ |
Fidelma | PERSON | 0.98+ |
Both | QUANTITY | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
TAM | ORGANIZATION | 0.98+ |
U.S. | LOCATION | 0.97+ |
both | QUANTITY | 0.97+ |
over 50% | QUANTITY | 0.97+ |
5,000 plus customers | QUANTITY | 0.97+ |
Antonio | PERSON | 0.97+ |
hundreds of petabytes | QUANTITY | 0.97+ |
14 Discovers | QUANTITY | 0.97+ |
Edge | ORGANIZATION | 0.97+ |
Disty | ORGANIZATION | 0.97+ |
Red Hat | ORGANIZATION | 0.96+ |
Rancher | ORGANIZATION | 0.96+ |
Marcel Hild, Red Hat & Kenneth Hoste, Ghent University | Kubecon + Cloudnativecon Europe 2022
(upbeat music) >> Announcer: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, at KubeCon CloudNativeCon Europe 2022. I'm your host Keith Townsend, along with Paul Gillon. And we're going to talk to some amazing folks. But first, Paul, do you remember your college days? >> Vaguely. (Keith laughing) A lot of them are lost. >> I think a lot of mine are lost as well. Well, not really, I got my degree as an adult, so they're not that far past. I can remember 'cause I have the student debt to prove it. (both laughing) Along with us today is Kenneth Hoste, systems administrator at Ghent University, and Marcel Hild, senior manager software engineering at Red Hat. You're working in the office of the CTO? >> That's absolutely correct, yes. >> So first off, I'm going to start off with you Kenneth. Tell us a little bit about the research that the university does. Like what's the end result? >> Oh, wow, that's a good question. So the research we do at the university, again, is very broad. We have bioinformaticians, physicists, people looking at financial data, all kinds of stuff. And the end result can be very varied as well. Very often it's research papers, or spinoffs from the university. Yeah, depending on the domain, I would say, it depends a lot on that. >> So that sounds like the perfect environment for cloud native. Like the infrastructure that's completely flexible, that researchers can come and have a standard way of interacting, each team just uses its resources as they would, the nirvana for cloud native. >> Yeah. >> But somehow, I'm going to guess HPC isn't quite there yet. >> Yeah, not really, no. So, HPC is a bit, let's say, slow in adopting new technologies. And we're definitely seeing some impact from cloud, especially things like containers and Kubernetes; we're starting to hear these things in the HPC community as well. But I haven't seen a lot of HPC clusters who are really fully cloud native. Not yet at least. Maybe this is coming. And if I'm walking around here at KubeCon, I can definitely, I'm being convinced that it's coming. So whether we like it or not, we're probably going to have to start worrying about stuff like this. But still, let's say, the most prominent technologies are things like MPI, which has been there for 20, 30 years. The Fortran programming language is still the main language, if you're looking at compute time being spent on supercomputers, over 1/2 of the time spent is in Fortran code essentially. >> Keith: Wow. >> So either the application itself where the simulations are being done is implemented in Fortran, or the libraries that we are talking to from Python for example, for doing heavy duty computations, that backend library is implemented in Fortran. So if you take all of that into account, easily over 1/2 of the time is spent in Fortran code. >> So is this because the libraries don't migrate easily to a distributed environment? >> Well, it's multiple things. So first of all, Fortran is very well suited for implementing these types of things. >> Paul: Right. >> We haven't really seen a better alternative maybe. And also it'll be a huge effort to re-implement that same functionality in a newer language. So, the use case has to be very convincing, there has to be a very good reason why you would move away from Fortran. And, at least the HPC community hasn't seen that reason yet.
>> So in theory, and right now we're talking about the theory and then what it takes to get to the future. In theory, I can take that Fortran code, put it in a compiler that runs in a container? >> Yeah, of course, yeah. >> Why isn't it that simple? >> I guess because traditionally HPC is very slow at adopting new stuff. So, I'm not saying there isn't a reason that we should start looking at these things. Flexibility is a very important one. For a lot of researchers, their compute needs are very peaky. So they're doing research, they have an idea, they want to run lots of simulations, get the results, but then they're silent for a long time writing the paper, or thinking about how to, what they can learn from the results. So there's lots of peaks, and that's a very good fit for a cloud environment. I guess at the scale of a university you have enough diversity of end users that all those peaks never fall at the same time. So if you have your big own infrastructure you can still fill it up quite easily and keep your users happy. But this bursty thing, I guess we're seeing that more and more. >> So Marcel, talk to us about Red Hat needing to service these types of end users. It can be on both ends, I'd imagine: you have some people still writing in Fortran, you have some people asking you for object-based storage. Where's Fortran, I'm sorry, not Fortran, but where is Red Hat in providing the underlay and the capabilities for the HPC and AI community? >> Yeah. So, I think if you look at the user base that we're looking at, it's on this spectrum from development to production. So putting AI workloads into production, it's an interesting challenge but it's easier to solve, and it has been solved to some extent, than the development cycle. So what we're looking at in Kenneth's domain, it's more like the end user, the data scientist, developing code, and doing these experiments. Putting them into production, that's where containers live and thrive. You can containerize your model, you containerize your workload, you deploy it into your OpenShift Kubernetes cluster, done, you monitor it, done. So the software development and the SRE, the ops part, done, but how do I get the data scientist into this cloud native age where he's not developing on his laptop or on a machine that he SSHes into and then does some stuff there. And then some system admin comes and needs to tweak it because it's running out of memory or whatnot. But how do we take him and make him, well, and provide him an environment that is good enough to work in, in the browser, and then with an IDE, where the workload of doing the computation and the experimentation is repeatable, so that the environment is always the same, it's reliable, so it's always up and running. It doesn't consume resources, although it's up and running. Where it's, where the supply chain and the configuration of... And the, well, the modules that are brought into the system are also reliable. So all these problems that we solved in the traditional software development world, now have to transition into the data science and HPC world, where the problems are similar, but yeah, it's different sets. It's more or less, also a huge educational problem and transitioning the tools over into that is something... >> Well, is this mostly a technical issue or is this a cultural issue? I mean, are HPC workloads that different from more conventional OLTP workloads that they would not adapt well to a distributed containerized environment?
>> I think it's both. So, on one hand it's the cultural issue because you have two different communities, everybody is reinventing the wheel, everybody is sort of siloed. So they think, okay, what we've done for 30 years now, there's no need to change it. And so it's, that's what thrives, and here at KubeCon, where you have different communities coming together, okay, this is how you solved the problem, maybe this applies also to our problem. But it's also the, well, the tooling, which is bound to a machine, which is bound to an HPC computer, which is architecturally different than a distributed environment where you would treat your containers as cattle, and as something that you can replace, right? And the HPC community usually builds up huge machines, and these are like the Cray machines. So it's also a technical bit of moving it to this age. >> So the massively parallel nature of HPC workloads, you're saying Kubernetes has not yet been adapted to that? >> Well, I think that parallelism works great. It's just a matter of moving that out from an HPC computer into the scale-out factor of a Kubernetes cloud that elastically scales out. Whereas the traditional HPC computer, I think, and Kenneth can correct me here, is more like, I have this massive computer with 1 million cores or whatnot, and now use it. And I can use my time slice, and book my time slice there. Whereas in this Kubernetes example, the concept is more like, I have 1000 cores and I declare something into it and scale it up and down based on the needs. >> So, Kenneth, this is where you talked about the culture part of the changes that need to be happening. And quite frankly, the computer is a tool, it's a tool to get to the answer. And if that tool is working, if I have 1000 cores on a single HPC thing, and you're telling me, well, I can't get to a system with 2000 cores. And if you containerize your process and move it over, then maybe I'll get to the answer 50% faster, maybe I'm not that... Someone has to make that decision. How important is it to get people involved in these types of communities from a researcher? 'Cause research is a very tight-knit community, to have these conversations and help that move happen. >> I think it's very important that those communities, let's say, the cloud community and the HPC research community, should be talking a lot more, there should be way more cross-pollination than there is today. I'm actually, I'm happy that I've seen HPC mentioned at booths and talks quite often here at KubeCon, I wasn't really expecting that. And I'm not sure, it's my first KubeCon, so I don't know, but I think that's kind of new, it's pretty recent. If you go to the HPC community conferences, containers have been there for a couple of years now, something like Kubernetes is still a bit new. But just this morning there was a keynote by a guy from CERN, who was explaining, they're basically slowly moving towards Kubernetes even for their HPC clusters as well. And he's seeing that as the future because of all the flexibility it gives you and you can basically hide all that from the end user, from the researcher. They don't really have to know that they're running on top of Kubernetes. They shouldn't care. Like you said, to them it's just a tool, and they care about if the tool works, they can get their answers and that's what they want to do. How that's actually being done in the background they don't really care.
>> So talk to me about the AI side of the equation, because when I talk to people doing AI, they're on the other end of the spectrum. What are some of the benefits they're seeing from containerization? >> I think it's the reproducibility of experiments. So, and data scientists are, they're data scientists and they do research. So they care about their experiment. And maybe they also care about putting the model into production. But, I think from a geeky perspective they are more interested in finding the next model, finding the next solution. So they do an experiment, and they're done with it, and then maybe it's going to production. So how do I repeat that experiment in a year from now, so that I can build on top of it? And a container I think is the best solution to wrap something with its dependency, like freeze it, maybe even with the data, store it away, and then come back to it later and redo the experiment or share the experiment with some of my fellow researchers, so that they don't have to go through the process of setting up an equivalent environment on their machines, be it their laptop or their cloud environment. So you go to the internet, download something, it doesn't work; the container works. >> Well, you said something that really intrigues me. You know, in concept, I can have a, let's say, a one terabyte data set, have an experiment associated with that. Take a snapshot of that somehow, I don't know how, take a snapshot of that and then share it with the rest of the community and then continue my work. >> Marcel: Yeah. >> And then we can stop back and compare notes. Where are we at on a maturity scale? Like, what are some of the pitfalls or challenges customers should be looking out for? >> I think you actually said it right there, how do I snapshot a terabyte of data? It's, that's... >> It's a terabyte of data. (both conversing) >> It's a bit of a challenge. And if you snapshot it, you have two terabytes of data, or you just snapshot it, like the git way of doing, okay, this is currently where we're at. So that's why the technology is evolving. How do we do source control management for data? How do we license data? How do we make sure that the data is unbiased, et cetera? So that's going more into the AI side of things. But dealing with data in a declarative way, in a containerized way, I think that's where currently a lot of innovation is happening. >> What do you mean by dealing with data in a declarative way? >> If I'm saying I run this experiment based on this data set and I'm running this other experiment based on this other data set, and I as the researcher don't care where the data is stored, I care that the data is accessible. And so I might declare, this is the process that I put on my data, like a data processing pipeline. These are the steps that it's going through. And eventually it will have gone through this process and I can work with my data. Pretty much like applying the concept of pipelines through data. Like you have these data pipelines and then now you have Kubeflow pipelines as one solution to apply the pipeline concept to, well, managing your data. >> Given the stateless nature of containers, is that an impediment to HPC adoption because of the very large data sets that are typically involved? >> I think it is if you have terabytes of data. Just, you have to get it to the place where the computation will happen, right? And just uploading that into the cloud is already a challenge.
If you have the data sitting there on a supercomputer and maybe it was sitting there for two years, you probably don't care. And typically, at a lot of universities, the researchers don't necessarily pay for the compute time they use. Like, this is also... At least in Ghent that's the case, it's centrally funded, which means the researchers don't have to worry about the cost, they just get access to the supercomputer. If they need two terabytes of data, they get that space and they can park it on the system for years, no problem. If they need 200 terabytes of data, that's absolutely fine. >> But the university cares about the cost? >> The university cares about the cost, but they want to enable the researchers to do the research that they want to do. >> Right. >> And we always tell researchers, don't feel constrained about things like compute power, storage space. If you're doing smaller research because you're feeling constrained, you have to tell us, and we will just expand our storage system and buy a new cluster. >> Paul: Wonderful. >> So you, to enable your research. >> It's a nice environment to be in. I think this might be a Jevons paradox problem, you give researchers this capability and you're going to see some amazing things. Well, now the people are snapshotting, one, two, three, four, five, different versions of one terabyte of data. It's a good problem to have, and I hope to have you back on theCUBE, talking about how Red Hat and Ghent have solved those problems. Thank you so much for joining theCUBE. From Valencia, Spain, I'm Keith Townsend along with Paul Gillon. And you're watching theCUBE, the leader in high tech coverage. (upbeat music)
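A rough sketch of the declarative, containerized pipeline idea Marcel describes above (the Kubeflow pipelines he mentions). This is a minimal illustration, assuming the Kubeflow Pipelines v1 Python SDK (`kfp`); the step names, base image, and data paths are hypothetical, and submitting the run assumes a reachable Kubeflow Pipelines endpoint, so treat it as a sketch rather than Red Hat's or Ghent's actual setup.

```python
# Minimal sketch: declare *what* happens to the data as containerized steps,
# so the experiment can be re-run or shared later (assumes the kfp v1 SDK).
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def preprocess(raw_path: str) -> str:
    """Illustrative step: clean the raw data and return the processed location."""
    processed_path = raw_path + ".processed"
    # ... real cleaning / feature extraction would go here ...
    return processed_path


def train(processed_path: str) -> str:
    """Illustrative step: train a model on the processed data, return a model URI."""
    # ... real training would go here ...
    return "models/experiment-v1"  # hypothetical output location


# Each function becomes a containerized step with a pinned base image,
# which is what makes the experiment repeatable a year from now.
preprocess_op = create_component_from_func(preprocess, base_image="python:3.9")
train_op = create_component_from_func(train, base_image="python:3.9")


@dsl.pipeline(name="reproducible-experiment")
def experiment_pipeline(raw_path: str = "/data/raw/dataset-2022"):
    prep = preprocess_op(raw_path)   # step 1: preprocess the declared input
    train_op(prep.output)            # step 2: train on the declared output of step 1


if __name__ == "__main__":
    # Submitting requires a reachable Kubeflow Pipelines endpoint (an assumption here).
    client = kfp.Client()
    client.create_run_from_pipeline_func(
        experiment_pipeline, arguments={"raw_path": "/data/raw/dataset-2022"}
    )
```

Used this way, the pipeline definition itself becomes the shareable artifact: a colleague can rerun the same declared steps, in the same pinned containers, against their own copy of the data.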
SUMMARY :
brought to you by Red Hat, do you remember your college days? A lot of them are lost. the student debt to prove it. that the university does. So the research we do at university Like the infrastructure I'm going to guess HPC is still the main language, So either the application itself So first of all, So, the use case has talking about the theory I guess at the scale of university and the capabilities for and the experimentation is repeatable, And the HPC community usually down based on the needs. And quite frankly, the computer is a tool, And he's seeing that as the future What are some of the and redo the experiment the rest of the community And then we can stop I think you actually It's a terabyte of data. the AI side of things. I care that the data is accessible. for the compute time they use. to do the research that they want to do. and we will just expand our storage system and I hope to have you back on theCUBE,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gillon | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Kenneth | PERSON | 0.99+ |
Kenneth Hoste | PERSON | 0.99+ |
Marcel Hild | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
Keith | PERSON | 0.99+ |
Marcel | PERSON | 0.99+ |
1 million cores | QUANTITY | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
Fortran | TITLE | 0.99+ |
1000 cores | QUANTITY | 0.99+ |
30 years | QUANTITY | 0.99+ |
two terabytes | QUANTITY | 0.99+ |
CERN | ORGANIZATION | 0.99+ |
2000 cores | QUANTITY | 0.99+ |
Ghent | LOCATION | 0.99+ |
Valencia, Spain | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
Ghent | ORGANIZATION | 0.99+ |
one terabytes | QUANTITY | 0.99+ |
each team | QUANTITY | 0.99+ |
one solution | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
today | DATE | 0.99+ |
one terabyte | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
Ghent University | ORGANIZATION | 0.99+ |
Kubernetes | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
HPC | ORGANIZATION | 0.98+ |
two different communities | QUANTITY | 0.96+ |
terabytes of data | QUANTITY | 0.96+ |
both ends | QUANTITY | 0.96+ |
over 1/2 | QUANTITY | 0.93+ |
two | QUANTITY | 0.93+ |
Cloudnativecon | ORGANIZATION | 0.93+ |
CloudNativeCon Europe 2022 | EVENT | 0.92+ |
this morning | DATE | 0.92+ |
a year | QUANTITY | 0.91+ |
five | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.89+ |
Fortran | ORGANIZATION | 0.88+ |
KubeCon | ORGANIZATION | 0.87+ |
two terabytes of data | QUANTITY | 0.86+ |
KubeCon CloudNativeCon Europe 2022 | EVENT | 0.86+ |
Europe | LOCATION | 0.85+ |
years | QUANTITY | 0.81+ |
a terabyte of data | QUANTITY | 0.8+ |
Navana | ORGANIZATION | 0.8+ |
200 terabytes of | QUANTITY | 0.79+ |
Kubecon + | ORGANIZATION | 0.77+ |
Rajesh Pohani and Dan Stanzione | CUBE Conversation, February 2022
(contemplative upbeat music) >> Hello and welcome to this CUBE Conversation. I'm John Furrier, your host of theCUBE, here in Palo Alto, California. Got a great topic on expanding capabilities for urgent computing. Dan Stanzione, he's Executive Director of TACC, the Texas Advanced Computing Center, and Rajesh Pohani, VP of PowerEdge, HPC Core Compute at Dell Technologies. Gentlemen, welcome to this CUBE Conversation. >> Thanks, John. >> Thanks, John, good to be here. >> Rajesh, you got a lot of computing in PowerEdge, HPC, Core Computing. I mean, I get a sense that you love compute, so we'll jump right into it. And of course, I got to love TACC, Texas Advanced Computing Center. I can imagine a lot of stuff going on there. Let's start with TACC. What is the Texas Advanced Computing Center? Tell us a little bit about that. >> Yeah, we're part of the University of Texas at Austin here, and we build large-scale supercomputers, data systems, AI systems, to support open science research. And we're mainly funded by the National Science Foundation, so we support research projects in all fields of science, all around the country and around the world. Actually, several thousand projects at the moment. >> But tied to the university, got a lot of gear, got a lot of compute, got a lot of cool stuff going on. What's the coolest thing you got going on right now? >> Well, for me, it's always the next machine, but I think science-wise, it's the machines we have. We just finished deploying Lonestar6, which is our latest supercomputer, in conjunction with Dell. A little over 600 nodes of those PowerEdge servers that Rajesh builds for us. Which makes more than 20,000 that we've had here over the years, of those boxes. But that one just went into production. We're designing new systems for a few years from now, where we'll be even larger. Our Frontera system was top five in the world two years ago, just fell out of the top 10. So we've got to fix that and build the new top-10 system sometime soon. We always have a ton going on in large-scale computing. >> Well, I want to get to the Lonestar6 in a minute, on the next talk track, but... What are some of the areas that you guys are working on that are making an impact? Take us through, and we talked before we came on camera about, obviously, the academic affiliation, but also there's a real societal impact of the work you're doing. What are some of the key areas that the TACC is making an impact? >> So there's really a huge range from new microprocessors, new materials design, photovoltaics, climate modeling, basic science and astrophysics, and quantum mechanics, and things like that. But I think the nearest-term impacts that people see are what we call urgent computing, which is one of the drivers around Lonestar and some other recent expansions that we've done. And that's things like, there's a hurricane coming, exactly where is it going to land? Can we refine the area where there's going to be either high winds or storm surge? Can we assess the damage from digital imagery afterwards? Can we direct first responders in the optimal routes? Similarly for earthquakes, and a lot recently, as you might imagine, around COVID. In 2020, we moved almost a third of our resources to doing COVID work, full-time. >> Rajesh, I want to get your thoughts on this, because Dave Vellante and I have been talking about this on theCUBE recently, a lot. Obviously, people see what cloud's, going on with the cloud technology, but compute and on-premises, private cloud's been growing. 
If you look at the hyperscale on-premises and the edge, if you include that in, you're seeing a lot more user consumption on-premises, and now, with 5G, you got edge, you mentioned first responders, Dan. This is now pointing to a new architectural shift. As the VP of PowerEdge and HPC and Core Compute, you got to look at this and go, "Hmm." If Compute's going to be everywhere, and in locations, you got to have that compute. How does that all work together? And how do you do advanced computing, when you have these urgent needs, as well as real-time in a new architecture? >> Yeah, John, I mean, it's a pretty interesting time when you think about some of the changing dynamics and how customers are utilizing Compute in the compute needs in the industry. Seeing a couple of big trends. One, the distribution of Compute outside of the data center, 5G is really accelerating that, and then you're generating so much data, whether what you do with it, the insights that come out of it, that we're seeing more and more push to AI, ML, inside the data center. Dan mentioned what he's doing at TACC with computational analysis and some of the work that they're doing. So what you're seeing is, now, this push that data in the data center and what you do with it, while data is being created out at the edge. And it's actually this interesting dichotomy that we're beginning to see. Dan mentioned some of the work that they're doing in medical and on COVID research. Even at Dell, we're making cycles available for COVID research using our Zenith cluster, that's located in our HPC and AI Innovation Lab. And we continue to partner with organizations like TACC and others on research activities to continue to learn about the virus, how it mutates, and then how you treat it. So if you think about all the things, and data that's getting created, you're seeing that distribution and it's really leading to some really cool innovations going forward. >> Yeah, I want to get to that COVID research, but first, you mentioned a few words I want to get out there. You mentioned Lonestar6. Okay, so first, what is Lonestar6, then we'll get into the system aspect of it. Take us through what that definition is, what is Lonestar6? >> Well, as Dan mentioned, Lonestar6 is a Dell technology system that we developed with TACC, it's located at the University of Texas at Austin. It consists of more than 800 Dell PowerEdge 6525 servers that are powered with 3rd Generation AMD EPYC processors. And just to give you an example of the scale of this cluster, it could perform roughly three quadrillion operations per second. That's three petaFLOPS, and to match what Lonestar6 can compute in one second, a person would have to do one calculation every second for a hundred million years. So it's quite a good-size system, and quite a powerful one as well. >> Dan, what's the role that the system plays, you've got petaFLOPS, what, three petaFLOPS, you mentioned? That's a lot of FLOPS! So obviously urgent computing, what's cranking through the system there? Take us through, what's it like? >> Sure, well, there there's a mix of workloads on it, and on all our systems. So there's the urgent computing work, right? Fast turnaround, near real-time, whether it's COVID research, or doing... Project now where we bring in MRI data and are doing sort of patient-specific dosing for radiation treatments and chemotherapy, tailored to your tumor, instead of just the sort of general for people your size. That all requires sort of real-time turnaround. 
There's a lot of AI research going on now, we're incorporating AI in traditional science and engineering research. And that uses an awful lot of data, but also consumes a huge amount of cycles in training those models. And then there's all of our traditional, simulation-based workloads in materials and digital twins for aircraft and aircraft design, and more efficient combustion, and more efficient photovoltaic materials, or photovoltaic materials without using as much lead, and things like that. And I'm sure I'm missing dozens of other topics, 'cause, like I said, that one really runs every field of science. We've really focused the Lonestar line of systems, and this is obviously the sixth one we built, around our sort of Texas-centric users. It's the UT Austin users, and then with contributions from Texas A&M, and Texas Tech and the University of Texas system, MD Anderson Healthcare Center, the University of North Texas. So users all around the state, and every research problem that you might imagine, they're into. We're just ramping up a project in disaster information systems, that's looking at the probabilities of flooding in coastal Texas and doing... Can we make building code changes to mitigate impact? Do we have to change the standard foundation heights for new construction, to mitigate the increasing storm surges from these sorts of slow storms that sit there and rain, like hurricanes didn't used to, but seem to be doing more and more. All those problems will run on Lonestar, and on all the systems to come, yeah. >> It's interesting, you mentioned urgent computing, I love that term because it could be an event, it could be some slow kind of brewing event like that rain example you mentioned. It could also be, obviously, with the healthcare, and you mentioned COVID earlier. These are urgent, societal challenges, and having that available, the processing capability, the compute, the data. You mentioned digital twins. I can imagine all this new goodness coming from that. Compare that, where we were 10 years ago. I mean, just from a mind-blowing standpoint, you have come so far, take us through, try to give a context to the level of where we are now, to do this kind of work, and where we were years ago. Can you give us a feel for that? >> Sure, there's a lot of ways to look at that, and how the technology's changed, how we operate around those things, and then sort of what our capabilities are. I think one of the big, first, urgent computing things for us, where we sort of realized we had to adapt to this model of computing was about 15 years ago with the big BP Gulf Oil spill. And suddenly, we were dumping thousands of processors of load to figure out where that oil spill was going to go, and how to do mitigation, and what the potential impacts were, and where you need to put your containment, and things like that. And it was, well, at that point we thought of it as sort of a rare event. There was another one, that I think was the first real urgent computing one, where the space shuttle was in orbit, and they knew something had hit it during takeoff. And we were modeling, along with NASA and a bunch of supercomputers around the world, the heat shield and could they make reentry safely? You have until they come back to get that problem done, you don't have months or years to really investigate that.
And so, what we've sort of learned through some of those, the Japanese tsunami was another one, there have been so many over the years, is that one, these sort of disasters are all the time, right? One thing or another, right? If we're not doing hurricanes, we're doing wildfires and drought threat, if it's not COVID. We got good and ready for COVID through SARS and through the swine flu and through HIV work, and things like that. So it's that we can do the computing very fast, but you need to know how to do the work, right? So we've spent a lot of time, not only being able to deliver the computing quickly, but having the data in place, and having the code in place, and having people who know the methods who know how to use big computers, right? That's been a lot of what the COVID Consortium, the White House COVID Consortium, has been about over the last few years. And we're actually trying to modify that nationally into a strategic computing reserve, where we're ready to go after these problems, where we've run drills, right? And if there's a, there's a train that derails, and there's a chemical spill, and it's near a major city, we have the tools and the data in place to do wind modeling, and we have the terrain ready to go. And all those sorts of things that you need to have to be ready. So we've really sort of changed our sort of preparedness and operational model around urgent computing in the last 10 years. Also, just the way we scheduled the system, the ability to sort of segregate between these long-running workflows for things that are really important, like we displaced a lot of cancer research to do COVID research. And cancer's still important, but it's less likely that we're going to make an impact in the next two months, right? So we have to shuffle how we operate things and then just, having all that additional capacity. And I think one of the things that's really changed in the models is our ability to use AI, to sort of adroitly steer our simulations, or prune the space when we're searching parameters for simulations. So we have the operational changes, the system changes, and then things like adding AI on the scientific side, since we have the capacity to do that kind of things now, all feed into our sort of preparedness for this kind of stuff. >> Dan, you got me sold, I want to come work with you. Come on, can I join the team over there? It sounds exciting. >> Come on down! We always need good folks around here, so. (laughs) >> Rajesh, when I- >> Almost 200 now, and we're always growing. >> Rajesh, when I hear the stories about kind of the evolution, kind of where the state of the art is, you almost see the innovation trajectory, right? The growth and the learning, adding machine learning only extends out more capabilities. But also, Dan's kind of pointing out this kind of response, rapid compute engine, that they could actually deploy with learnings, and then software, so is this a model where anyone can call up and get some cycles to, say, power an autonomous vehicle, or, hey, I want to point the machinery and the cycles at something? Is the service, do you guys see this going that direction, or... Because this sounds really, really good. >> Yeah, I mean, one thing that Dan talked about was, it's not just the compute, it's also having the right algorithms, the software, the code, right? The ability to learn. So I think when those are set up, yeah. 
I mean, the ability to digitally simulate in any number of industries and areas advances the pace of innovation, reduces the time to market of whatever a customer is trying to do or research, or even vaccines or other healthcare things. If you can reduce that time through the leverage of compute on doing digital simulations, it just makes things better for society, or for whatever it is that we're trying to do in a particular industry. >> I think the idea of instrumenting stuff is here forever, and also simulations, whether it's digital twins and doing these kinds of real-time models, isn't really much of a guess, so I think this is a huge, historic moment. But you guys are pushing the envelope here, at the University of Texas and at TACC. It's not just research, you guys have got real examples. So where do you guys see this going next? I see space, big compute areas that might need some data to be cranked out. You got cybersecurity, you got healthcare, you mentioned the oil spill, you got oil and gas, I mean, you got industry, you got climate change. I mean, there's so much to tackle. What's next? >> Absolutely, and I think the appetite for computing cycles isn't going anywhere, right? It's going to grow without bound, essentially. And AI, while in some ways it reduces the amount of computing we do, has also brought this whole new domain of modeling to a bunch of fields that weren't traditionally computational, right? We used to just do engineering, physics, chemistry; those were all super computational. But then we got into genome sequencers and imaging and a whole bunch of data, and that made biology computational. And with AI, now we're making things like the behavior of human society computational problems, right? So there's this sort of growing amount of workload that is, in one way or another, computational, and getting bigger and bigger. So that's going to keep on growing. I think the trick is not only going to be growing the computation, but growing the software and the people along with it, because we have amazing capabilities that we can bring to bear. We don't have enough people to hit all of them at once. And so, that's probably going to be the next frontier in growing out both our AI and simulation capability: the human element of it. >> It's interesting, when you think about society, right? If the things become too predictable, what does a democracy even look like? If you know the election's going to be over two years from now in the United States, or you look at these major, major waves >> Human companies don't know. >> of innovation, you say, "Hmm." So it's democracy, AI, maybe there's an algorithm for checking up on the AI, 'cause biases... So, again, there's so many use cases that just come out of this. It's incredible. >> Yeah, and bias in AI is something that we worry about and we work on, and on task forces where we're working on that particular problem, because the AI is going to take... Is based on... Especially when you look at a deep learning model, it's 100% a product of the data you show it, right? So if you show it a biased data set, it's going to have biased results. And it's not anything intrinsic about the computer or the personality of the AI, it's just data mining, right? In essence, right, it's learning from data. And if you show it all images of one particular outcome, it's going to assume that's always the outcome, right? It just has no choice but to see that. So how do we deal with bias, how do we deal with confirmation, right? 
I mean, in addition, you have to recognize, if it gets data it's never seen before, how do you know it's not wrong, right? So there's a lot about data quality and quality assurance and quality checking around AI. And that's where, especially in scientific research, we use what's starting to be called physics-informed or physics-constrained AI, where the neural net that you're using to design an aircraft still has to follow basic physical laws in its output, right? Or if you're doing some materials or astrophysics, you still have to obey conservation of mass, right? So I can't say, well, if you just apply negative mass on this other side and positive mass on this side, everything works out right for stable flight. 'Cause we can't do negative mass, right? So you have to constrain it to the real world. So this notion of how we bring in the laws of physics and constrain your AI to what's possible is also a big part of the sort of AI research going forward. >> You know, Dan, you just, to me, just encapsulated the science that's still out there, that's needed. Computer science, social science, material science, kind of all converging right now. >> Yeah, engineering, yeah, >> Engineering, science, >> slipstreams, >> physics, yeah, mmhmm. >> it's all there, >> it's not just code. And, Rajesh, data. You mentioned data: the more data you have, the better the AI. We have a world that's going from silos to open control planes. We have to get to that world. This is a cultural shift we're seeing, what are your thoughts? >> Well, it is, in that the ability to drive predictive analysis based on the data is going to drive different behaviors, right? Different social behaviors, with cultural impacts. But I think the point that Dan made about bias, right, it's only as good as the code that's written and the way that the data is actually brought into the system. So making sure that that is done in a way that generates the right kind of outcome, that allows you to use that in a predictive manner, becomes critically important. If it is biased, you're going to lose credibility in a lot of the analysis that comes out of it. So I think that becomes critically important. But overall, I mean, if you think about the way compute is, it's becoming pervasive. It's not just in selected industries, as Dan said, and it's now applying to everything that you do, right? Whether it is getting you more tailored recommendations for your purchasing, right? You have better options that way. You don't have to sift through a lot of different ideas as you scroll online. It's tailoring now to some of your habits and what you're looking for. So that becomes an incredible time-saver for people, to be able to get what they want in the way that they want it. And then you look at the way it impacts other industries and development innovation, and it just continues to scale and scale and scale. >> Well, I think the work that you guys are doing together is scratching the surface of the future, which is digital business. It's about data, it's about all these new things. It's about advanced computing meeting the right algorithms for the right purpose. And it's a really amazing operation you guys have got over there. Dan, great to hear the stories. It's very provocative, very enticing to just want to jump in and hang out. But I got to do theCUBE day job here, but congratulations on success. Rajesh, great to see you and thanks for coming on theCUBE. >> Thanks for having us, John. >> Okay. >> Thanks very much. 
>> Great conversation around urgent computing, as computing becomes so much more important, bigger problems and opportunities are around the corner. And this is theCUBE, we're documenting it all here. I'm John Furrier, your host. Thanks for watching. (contemplative music)
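For readers who want a concrete picture of the "physics-informed" or "physics-constrained" AI that Dan describes above, here is a minimal, hypothetical sketch in PyTorch. It is not TACC code; the toy 1D flow, the network shape, the penalty weight, and the made-up data are all assumptions chosen purely for illustration. What it shows is the idea Dan outlines: the training loss penalizes the network both for missing the observations and for violating a physical law, in this case conservation of mass.

```python
# Hypothetical physics-constrained training sketch (not TACC code).
# The "physics" here is the 1D continuity equation d(rho)/dt + d(rho*u)/dx = 0,
# i.e. conservation of mass for a toy flow. Everything below is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small network mapping (x, t) -> (rho, u): density and velocity.
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

def physics_residual(x, t):
    """Residual of the continuity equation; zero wherever mass is conserved."""
    x = x.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    rho, u = model(torch.stack([x, t], dim=-1)).unbind(dim=-1)
    drho_dt = torch.autograd.grad(rho.sum(), t, create_graph=True)[0]
    dflux_dx = torch.autograd.grad((rho * u).sum(), x, create_graph=True)[0]
    return drho_dt + dflux_dx

def loss_fn(x_obs, t_obs, rho_obs, u_obs, x_col, t_col, lam=1.0):
    """Ordinary data-fitting loss plus a penalty for violating the physical law."""
    pred = model(torch.stack([x_obs, t_obs], dim=-1))
    target = torch.stack([rho_obs, u_obs], dim=-1)
    data_loss = ((pred - target) ** 2).mean()
    physics_loss = (physics_residual(x_col, t_col) ** 2).mean()
    return data_loss + lam * physics_loss

# Made-up observations and collocation points, just to show the training loop.
x_obs, t_obs = torch.rand(32), torch.rand(32)
rho_obs, u_obs = torch.rand(32), torch.rand(32)
x_col, t_col = torch.rand(256), torch.rand(256)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(x_obs, t_obs, rho_obs, u_obs, x_col, t_col)
    loss.backward()
    opt.step()
```

In a real application the physics term would encode the actual governing equations of the problem, evaluated at many collocation points, which is what keeps the model from proposing outputs, such as negative mass, that the real world cannot produce.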
Sanzio Bassini, Cineca | CUBE Conversation, October 2021
(upbeat music) >> Welcome to this Cube Conversation. I'm Lisa Martin. I'm talking next with Sanzio Bassini, the head of High Performance Computing at Cineca, a Dell Technologies customer. Sanzio, welcome to theCUBE. >> Thank you, it's a pleasure. >> Lisa Martin: Likewise. Nice to see you. So tell us a little bit about Cineca. This is a large computing center, but a very large Italian non-profit consortium. Tell us about it. >> Yes, Cineca was founded 50 years ago by the university system in Italy to support scientific discovery and industry innovation using high-performance computing and the correlated methodologies, like artificial intelligence, together with big data processing and simulation. We are a consortium, which means it is a private not-for-profit organization. Currently the members of the consortium include almost all the universities in Italy and also all the national agencies. >> Lisa Martin: And I also read that you are in the top 10 out of the top 500 of the world's fastest supercomputers. That's a pretty big accomplishment. >> Yes. That is part of our statutory vision. In the last 10 to 15 years, we have been, so to say, frequent buyers in the top 10. The idea is that we're enabling scientific discovery by means of providing the most advanced systems, and co-designing the most advanced HPC systems, to promote and support excellence in science, and being part of the European high-performance computing ecosystem. >> Now, talk to me about some of the challenges that Cineca is trying to solve, in particular the Human Brain Project. Talk to us a little bit about that and how you're leveraging high-performance computing to accelerate scientific discovery. >> The Human Brain Project is one of the flagship projects that has been co-funded by the European Commission and the participating member states. There are two different flagships right now, together with another that is just in progress, which is the quantum flagship, where we are participating indirectly together with the National Disaster Council. And we are core partners for the HPC part of the Human Brain Project. It is one billion euro of investment, co-funded by the participating states and the European Commission. It's a project that combines both the technology issues and the design of high-performance computing systems that meet the requirements of the community, and the big scientific challenges correlated to the physiological functions of the human brain, including different aspects related to the behavior of the human brain, from either the pathological point of view or the physiological point of view: in order to better understand the aging issues that would impact the public health systems, and others that correlate with the support for the physiological knowledge of the human brain. And finally, computational performance: the human brain is more than an exascale system, but with an average consumption which is very low. We are talking about some hundreds of watts of energy providing an extreme computational performance. So if we organize the high-performance computing technology, in terms of interconnections and neuromorphic computing systems, that would represent a tremendous step in facing the big challenges we are facing: energy, personalized medicine, climate change, food, all those kinds of big socio-economic challenges. 
Reading about Cineca, besides the Human Brain Project there are other projects going on, such as the ones you mentioned. I'd like to understand how Cineca is working with Dell Technologies. You have to translate, as you mentioned a minute ago, the scientific requirements for discovery into high-performance computing requirements. Talk to me about how you've been doing that with partners like Dell Technologies. >> In our computing architectures, we had the need to address the capability of facing the big data processing involved with respect to the Human Brain Project and, generally speaking, the science-driven requirements that would provide cloud access to the systems by means of container technologies, and also the capability to address what will be the creation of a federation of high-performance computing facilities in Europe. So in the end we managed a competitive dialogue procurement process that, in a certain sense, would share together with the different potential technology providers what would be the visions and also the constraints with respect to the decisions, including budget constraints. And at the end, Dell had shown the characteristics of the solution that would be, let's say, the most compliant, and at the same time flexible with respect to the combination of very different constraints and requirements. >> Dell Technologies sounds like it has been a pretty flexible partner, because you've got so many different needs and scientific needs to meet for different researchers. You mentioned that this is a multi-national effort. How does Cineca serve and work with teams not only in Italy, but in other countries and from other institutes? >> The Italian commitment, together with the European member states, is that by means of scientific merit and a peer review process, roughly speaking half of the production capacity would be shared at the European level. It's a commitment that has been shared together with France, Germany, Spain, and Switzerland, where also, of course, the Italian scientists can apply and participate, but in a sort of emulation, an advanced competition, for addressing what will be the excellence in science. The remaining 50% of our production capacity is for the national community, and somehow to support the Italian community to be competitive on the worldwide scenario. That setup would lead also to agreements at the international level, with respect to some of the actions that are promoted and in progress in the US and in Japan, which means sharing options with the US researchers or Japanese researchers in an open space. >> It sounds like the Human Brain Project, which HPC is powering, and which has been around since 2013, is really facilitating global collaboration. Talk to me about some of the results that the high-performance computing environment has helped the Human Brain Project to achieve so far. >> The main outcomes will be consolidated in the next phase, which will be led by EuroHPC, and which is called Fenix, which stands for a federation of high-performance computing systems in Europe. That provides open services based on two concepts. One is the sharing of identity at the European level. So it means opening the access to the Cineca system, to the system in France, to the Jülich system in Germany, to the CSCS system in Switzerland, and to the MareNostrum system in Spain, with federated ID management and so on. The other is related to what will be the federation of data access. 
The scientific community may share their data in a seamless mode. The actions being supported have to do with two specific targets. One is the elaboration of the data that are provided by the LENS laser laboratory facility in Florence, which is one of the core parts of gathering the data that come from the mouse brains. And the second part is for the meso-scale studies of the cortex of the brain. In some situations, the combination of the performance capability of the federated systems, for addressing what would be the simulations of the overall human brain, would take a lot of performance. Those are challenging simulations that would periodically happen by combining the HPC facilities at the European level. >> Right. So I was reading, there's a case study, by the way, on Cineca that Dell Technologies has published. And some of the results you talked about: the HPC is facilitating research and results on epilepsy, spinal cord injury, brain prostheses for the blind, as well as new insights into autism. So it's incredibly important work that you're doing here for the Human Brain Project. One last question for you. What advice would you give to your peers who might be in similar situations, that need to build and deploy and maintain high-performance computing environments? Where should they start? >> There is a continuous sharing of knowledge, experience, and best practices, where the situation is different in the sense that what matters is the integration of the high-performance computing technology into their production workflow. That is the sharing of the experience, in order to spread and amplify the opportunity for supporting innovation. That is part of our social mission in Italy, but it's also the objective that is supported by the European Commission. >> Excellent. That sharing and that knowledge transfer and collaboration seem to be absolutely fundamental, and the environment that you've built facilitates that. Sanzio, thank you so much for sharing with us what Cineca is doing and the great research that's going on there, across a lot of disciplines. We appreciate you joining the program today. Thank you. >> Thank you. Thank you very much. >> Likewise. For Sanzio Bassini, I'm Lisa Martin. You're watching this Cube Conversation. (upbeat music)