
Search Results for Dallas:

Johnny Dallas, Zeet | AWS Summit SF 2022


 

>>Hello, and welcome back to theCUBE's live coverage here in San Francisco, California. Two days, day two of AWS Summit 2022, with AWS Summit New York City coming up in the summer. We'll be there as well. Events are back. I'm the host, John Furrier. theCUBE's got a great guest here, Johnny Dallas with Zeet. Um, he's here on theCUBE. We're gonna talk about his background. Uh, little trivia here: he was the youngest engineer to ever work at Amazon, at the age of 17. Had to get escorted into re:Invent in Vegas because he was underage <laugh>, with security, all good stories. Now the CEO of a company called Zeet — no-DevOps kind of focus, managed service, a lot of cool stuff. Johnny, welcome to theCUBE. >>Thanks, John. Great. >>So tell the story. You were the youngest engineer at AWS. >>I was, yes. So I used to work at a company called Bebo. I got started very young. I started working when I was about 14, um, kind of as a software engineer. And when I, uh, was about 16, I graduated out of high school early. Um, worked at this company, Bebo, running all of the DevOps at that company. Um, I went to re:Invent in about 2018 to give a talk about some of the DevOps software I wrote at that company. Um, but you know, as many of you are probably familiar, re:Invent happens in a casino, and I was 16, so I was not able to actually go into the casino on my own <laugh>, um, so I'd have <inaudible> security as well as casino security escort me in to give my talk. >>Did Andy Jassy, was he aware of this? >>Um, you know, that's a great question. I don't know. <laugh> >>I'll ask him. Great story. So obviously you started at a young age. I mean, it's so cool to see you jump right in. I mean, you never grew up with the old school that I grew up in — loading packaged software, loading it onto the server, deploying it, plugging the cables in. I mean, you're just rocking and rolling with DevOps. As you look back now, what's the big generational shift? Because now you've got Gen Z coming in, millennials are in the workforce, it's changing. Like, no one's putting packaged software on servers. >>Yeah, no, I mean the tools keep getting better, right? We, we keep creating more abstractions that make it easier and easier. When I, when I started doing DevOps, I could go straight into EC2 APIs. I had APIs from the get-go, and you know, my background was, I was a software engineer. I never went through, like, the sysadmin stack. I, I never had to, like you said, rack servers myself. I was immediately able to scale — I, I was managing, I think, 2,500 concurrent servers across every AWS region through software. It was a fundamental shift. >>Did you know what an SRE was at that time? Uh, you were kind of an SRE early on. >>Yeah, I was basically our first SRE. Um, familiar with the, with the phrasing, but really thought of myself as a software engineer who knows cloud APIs, not an SRE. >>All right. So let's talk about what's, what's going on now. As you look at the landscape today, what's the coolest thing that's going on, in your mind, in cloud? >>Yeah, I think the, I think the coolest thing is, you know, we're seeing the next layer of those abstraction tools exist, and that's what we're doing with Zeet, is we've basically gone and we've, we're building an app platform that deploys onto your cloud. So if you're familiar with something like Heroku, um, where you just click a GitHub repo, uh, we actually make it that easy. You click a GitHub repo and it'll deploy on AWS using all AWS tools. >>So, right. So this is Zeet. This is the company. Yes. 
How old's the company? >>About a year and a half old now. >>Right. So explain what it does. >>Yeah. So we make it really easy for any software engineer to deploy on AWS. Um, these are not SREs. These are the actual application engineers doing the business logic. Mm-hmm <affirmative>. They don't really want to think about YAML. They don't really want to configure everything super deeply. Um, they want to say, run this API on AWS in the best way possible. We've encoded all the best practices into software and we set it up for you. >>Yeah. So I think the problem you're solving is, is that there's a lot of want-to-be DevOps engineers, and then they realize, oh shit, I don't wanna do this. Yeah. And the people who want to do it, they love it under the hood. Right. People love that infrastructure, but the average developer needs to actually be agile at scale. So that seems to be the problem you solve. Right? Yeah. >>We, we, we give way more productivity to each individual engineer, you know? >>All right. So let me ask you a question. So let me just say, I'm a developer. Cool. I built this new app. It's a streaming app or whatever — I'm making it up here — but let's just say I deploy it. I need your service. But what happens when my customers say, hey, what's your SLA? The CDN went down, this thing's flaky. Does Amazon have that? So how do you handle all that SLA reporting that Amazon provides? Because they do a good job with SOC reports all through the console. But as you start getting into DevOps and selling your app, mm-hmm <affirmative>, you have customer issues. How do you view that? Yeah, >>Well, I, I think you make a great point: AWS has all this stuff already. AWS has SLAs. AWS has contracts. AWS has a lot of the tools that are expected. Um, so we don't have to reinvent the wheel here. What we do is we help people get to those SLAs more easily. So, hey, this is the AWS SLA as a default. Um, hey, we'll configure your services, this is what you can expect here. Um, but we can really leverage AWS reliability — you don't have to trust us. You have to trust AWS and trust that the setup is good there. >>Do you handle all the recovery or mitigation between, uh, identification, say downtime for instance — oh, the server's not hitting 99% uptime, uh, it went down for an hour, say something's going on? And is there a service dashboard? How does it get — what's the remedy? How does all that work? >>Yeah, so we have some built-in remediation. You know, we, we basically say we're gonna do as much as we can to keep your endpoint up 24/7, mm-hmm <affirmative>. If it's something in our control, we'll do it. If it's a disk failure, that's on us. If you push bad code, we won't put out that new version until it's working. Um, so we do a lot to make sure that your endpoint stays up, um, and then alert you if there's a problem that we can't fix. So, cool — hey, AWS has some downtime, this thing's going on, you need to do this action. Um, we'll let you know. >>All right. So what do you do for fun? >>Yeah, so, uh, for, for fun, um, a lot of side projects. <laugh>, uh, >>What's your side hustle right now? You got going on? >>The, uh, a lot of cool stuff, playing >>With serverless. >>Yeah. Playing with a lot of serverless stuff. Um, I think there's a lot of really cool Lambda stuff as well going on right now. Um, I love tools is, is the truest answer — I love building something that I can give to somebody else, and they're suddenly twice as productive because of it. Um, >>That's a good feeling, isn't it? Oh >>Yeah. 
There's nothing >>Like that. Tools versus platforms. Mm-hmm, <affirmative>, you know the expression, too many tools in the toolshed becomes, you know, tool sprawl. And then ultimately tools become platforms. What's your view on that? Because if a good tool works and starts to get traction, you need to either add more tools or start building a platform — platform versus tool. What's your, what's your view on, or reaction to, that kind of concept debate? >>Yeah, it's a good question. Uh, we, we've basically started as like a, a platform. First off, we've really focused on these, uh, developers who don't wanna get deep into the DevOps. And so we've done all of the pieces of the stack. We do CI/CD management, we do container orchestration, we do monitoring. Um, and now we're splitting those up into individual tools so they can be used in conjunction more. Awesome. >>Right. So what are some of the use cases that you see for your service? It's DevOps, basically managed-service DevOps for people without a DevOps team. Do clients have a DevOps person, and then one person, two people — what are the requirements to run >>Zeet? Yeah. So we, we've got teams, um, from no DevOps, which is kind of when they start, and then we've had teams grow up to about, uh, five, ten-man DevOps teams. Mm-hmm <affirmative>. Um, so, you know, as more structured people come in, because we're in your cloud, you're able to go in and configure it on top — we can't block you. Uh, you wanna use some new AWS service? You're welcome to use that alongside the stack that we deploy for >>You. How many customers do you have now? >>So we've got about 40 companies that are using us for all of their infrastructure, um, kind of across the board, um, as well as >>What's the pricing model? >>Uh, so our pricing model is, we, we charge basically similar to an engineer salary. So we charge, uh, a monthly rate. We have plans at 300 bucks a month, a thousand bucks a month, and then an enterprise plan based >>On the requirements, scale. Yeah. You know, so back into the people cost. You must offer discounts — it's not a fully loaded thing, is it? >>Yeah, there's discounts kind of at scale. >>Then you pass through the Amazon bill. >>Yeah. So our customers actually pay the Amazon bill themselves. Oh. So >>They have their own >>Account. There's no margin on top. You're linking your AWS account in, um, which is huge, because we can — we are now able to help our customers get better deals with Amazon. Um, got it. We're incentivized, as part of their team, to drive their cost down. >>And what's your main unit of economics — software scale? >>Yeah. Um, yeah, so we, we think of things as projects. How many services do you have to deploy, as that scales up? Um, awesome. >>All right. You're 20 years old now — you can't even drink legally. <laugh> What are you gonna do when you're 30? We're gonna be there. >>Well, we're, uh, we're making it better. And >>Better than the old guy on theCUBE here. >><laugh> I think, uh, I think we're seeing a big shift of, um, you know, we've got these major clouds. AWS is obviously the biggest cloud, um, and it's constantly coming out with new services. Yeah. But we're starting to see other clouds have built many of the common services. So Kubernetes is a great example — it exists across all the clouds. Um, and we're starting to see new platforms come up on top that allow you to leverage tools from multiple clouds at the same time. 
Many of our customers actually have AWS as their primary cloud, and they'll have secondary clouds, or they'll pull features from other clouds into AWS, um, through our software. I think that — I'm very excited by that. And I, uh, expect to be working on that when I'm 30. Awesome. >>Well, you're gonna have a good future. I gotta ask you this question, cuz uh, you know, I've always — I was a computer science undergraduate in the, in the eighties, and um, computer science back then was hardcore, mostly systems, OS stuff, uh, databases, compilers. Um, now there's so much more, right? So, mm-hmm <affirmative>, how do you look at the high school and college curriculum experience, slash folks who are nerding out on computer science? It's not one or two things anymore. You've got a lot of, a lot of things. I mean, look at Python, data engineering emerging as a huge skill. What's it, what's it like for college kids now and high school kids? What, what do you think they should be doing, if you had to give advice to your 16-year-old self, back a few years ago, now in college? Um, I mean, Python's not a great language, but it's super effective for coding, and the data's really relevant, but you've got other language opportunities, you got tools to build. So you got a whole culture of young builders out there. What should, what should people gravitate to, in your opinion, or stay away from? Yeah. Or >>Stay away from — that's a good question. I, I think that, first of all, you're very right that the, the amount of developers is increasing so quickly. Um, and so we see more specialization. That's why we also see, you know, these SREs that are different than typical application engineering. You get more specialization in job roles. Um, I think, what I'd say to my 16-year-old self is: do projects. Um, I learned most of what I've learned just on the job or online, trying things, playing with different technologies, actually getting stuff out into the world — um, way more useful than what you'll learn in kind of a college classroom. I think classrooms are great to, uh, get a basis, but you need to go out and experiment, actually try things. >>You know, I think that's great advice. In fact, I would just say, from my experience of doing all the hard stuff, the cloud is so great for just saying, okay, I'm done, I'm abandoning the project, move on. Yeah. Because, you know, if it's not gonna work — in the old days, you had to build this data center, I bought all this stuff, you know — people hang on to the old, you know, project and try to force it out there. >>You can launch a project, >>See the gratification — it ain't working <laugh> — or, this isn't it, shut it down and then move on to something new. >>Yeah, exactly. Instantly — you should be able to do that much more quickly. Right. >>So you're saying get those projects out and don't be afraid to shut it down. Mm-hmm <affirmative>. Do you agree with that? >>Yeah. I think it's, uh, experiment. Um, you're probably not gonna hit it rich on the first one. It's probably not gonna be that one genius idea. So don't be afraid to get rid of things and just try over and over again. It's, it's the number of reps that's the win. >>I was commenting online — Elon Musk was gonna buy Twitter, that whole Twitter thing. And, and, and someone said, hey, you know, what's the — I go, look, the product group at Twitter's been so messed up because they actually did get it right the first time <laugh>, and, and it became such a great product they could never change it, because people would freak out, and the utility of Twitter. 
I mean, they gotta add some things — the edit button, and we all know what they need to add — but the product, it was just like this internal dysfunction: the product team, what are we gonna work on? Don't change the product. So you kind of have — there's opportunities out there where you might get the lucky strike right outta the gate. Yeah. Right. You don't know. >>It's almost a curse too. You're not gonna — Twitter's not gonna hit it rich a second time either. So yeah. >><laugh> Johnny Dallas, thanks for coming on theCUBE. Really appreciate it. Give a plug for your company. Um, take a minute to explain what you're working on, what you're looking for — you're hiring, funding, customers. Just give a plug, uh, last minute, and have the last word. >>Yeah. So, um, Johnny Dallas from Zeet. If you, uh, need any help with your DevOps, if you're an early startup, you don't have a DevOps team, um, or you're trying to deploy across clouds, check us out at Zeet. Um, we are actively hiring. So if you are a software engineer excited about tools and cloud, or you're interested in helping get this message out there, hit me up. Um, find us at Zeet. >>Yeah. LinkedIn, Twitter handle, GitHub handle? >>Yeah. I'm the only Johnny Dallas on LinkedIn and GitHub, and underscore Johnny Dallas underscore on Twitter. Right? Um, >>Johnny Dallas, the youngest engineer to work at Amazon, um, now 20, working on a great new project here. The cube builders are all young. They're growing into the business. They've got cloud at their back — it's, uh, a tailwind. I wish I was 20 again. This is theCUBE. I'm John Furrier, your host. Thanks for watching. >>Thanks.
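
The deployment flow Dallas describes above — connect a GitHub repo and the platform provisions and runs the service in your own AWS account — boils down to automation over standard AWS APIs. As a rough sketch of the kind of calls such a platform encodes (a generic illustration, not Zeet's actual implementation; the cluster, service, role, and image names below are made up), here is a minimal Python/boto3 rollout of a new container image to an existing ECS Fargate service:

```python
# Minimal sketch: roll a freshly built container image out to an existing ECS Fargate service.
# Assumes the cluster, service, and IAM execution role already exist (all names are placeholders).
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

def deploy(image_uri: str, cluster: str = "demo-cluster", service: str = "demo-api") -> str:
    # Register a new task definition revision that points at the new image.
    task_def_arn = ecs.register_task_definition(
        family="demo-api",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::123456789012:role/demo-exec-role",  # placeholder ARN
        containerDefinitions=[{
            "name": "app",
            "image": image_uri,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }],
    )["taskDefinition"]["taskDefinitionArn"]

    # Point the service at the new revision; ECS then performs a rolling deployment.
    ecs.update_service(cluster=cluster, service=service, taskDefinition=task_def_arn)
    return task_def_arn

if __name__ == "__main__":
    print(deploy("123456789012.dkr.ecr.us-west-2.amazonaws.com/demo-api:latest"))
```

A platform of the kind described would layer CI (building the image from the repo), health checks, and rollback on top of calls like these.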

Published Date : Apr 21 2022


Bhavani Thuraisingham, UT Dallas | WiDS 2018


 

>> Announcer: Live, from Stanford University in Palo Alto, California, it's theCUBE covering Women in Data Science Conference 2018, brought to you by Stanford. (light techno music) >> Welcome back to theCUBE's continuing coverage of the Women in Data Science event, WiDS 2018. We are live at Stanford University. You can hear some great buzz around us. A lot of these exciting ladies in data science are here around us. I'm pleased to be joined by my next guest, Bhavani Thuraisingham, who is one of the speakers this afternoon, as well as a distinguished professor of computer science and the executive director of Cyber Security Institute at the University of Texas at Dallas. Bhavani, thank you so much for joining us. >> Thank you very much for having me in your program. >> You have an incredible career, but before we get into that I'd love to understand your thoughts on WiDS. In it's third year alone, they're expecting to reach over 100,000 people today, both here at Stanford, as well as more than 150 regional events in over 50 countries. When you were early in your career you didn't have a mentor. What does an event like WiDS mean to you? What are some of the things that excite you about giving your time to this exciting event? >> This is such an amazing event and just in three years it has just grown and I'm just so motivated myself and it's just, words cannot express to see so many women working in data science or wanting to work in data science, and not just in U.S. and in Stanford, it's around the world. I was reading some information about WiDS and I'm finding that there are WiDS ambassadors in Africa, South America, Asia, Australia, Europe, of course U.S., Central America, all over the world. And data science is exploding so rapidly because data is everywhere, right? And so you really need to collect the data, stow the data, analyze the data, disseminate the data, and for that you need data scientists. And what I'm so encouraged is that when I started getting into this field back in 1985, and that was 32 plus years ago in the fall, I worked 50% in cyber security, what used to be called computer security, and 50% in data science, what used to be called data management at the time. And there were so few women and we did not have, as I said, women role models, and so I had to sort of work really hard, the commercial industry and then the MITRE Corporation and the U.S. Government, but slowly I started building a network and my strongest supporters have been women. And so that was sort of in the early 90's when I really got started to build this network and today I have a strong support group of women and we support each other and we also mentor so many of the junior women and so that, you know, they don't go through, have to learn the hard way like I have and so I'm very encouraged to see the enthusiasm, the motivation, both the part of the mentors as well as the mentees, so that's very encouraging but we really have to do so much more. >> We do, you're right. It's really kind of the tip of the iceberg, but I think this scale at which WiDS has grown so quickly shines a massive spotlight on there's clearly such a demand for it. I'd love to get a feel now for the female undergrads in the courses that you teach at UT Dallas. What are some of the things that you are seeing in terms of their beliefs in themselves, their interests in data science, computer science, cyber security. Tell me about that dynamic. 
>> Right, so I have been teaching for 13 plus years full-time now, after a career in industry and federal research lab and government and I find that we have women, but still not enough. But just over the last 13 years I'm seeing so much more women getting so involved and wanting to further their careers, coming and talking to me. When I first joined in 2004 fall, there weren't many women, but now with programs like WiDS and I also belong to another conference and actually I shared that in 2016, called WiCyS, Women in Cyber Security. So, through these programs, we've been able to recruit more women, but I would still have to say that most of the women, especially in our graduate programs are from South Asia and East Asia. We hardly find women from the U.S., right, U.S. born women pursuing careers in areas like cyber security and to some extent I would also say data science. And so we really need to do a lot more and events like WiDS and WiCys, and we've also started a Grace Lecture Series. >> Grace Hopper. >> We call it Grace Lecture at our university. Of course there's Grace Hopper, we go to Grace Hopper as well. So through these events I think that, you know women are getting more encouraged and taking leadership roles so that's very encouraging. But I still think that we are really behind, right, when you compare men and women. >> Yes and if you look at the statistics. So you have a speaking session this afternoon. Share with our audience some of the things that you're going to be sharing with the audience and some of the things that you think you'll be able to impart, in terms of wisdom, on the women here today. >> Okay, so, what I'm going to do is that, first start off with some general background, how I got here so I've already mentioned some of it to you, because it's not just going to be a U.S. event, you know, it's going to be in Forbes reports that around 100,000 people are going to watch this event from all over the world so I'm going to sort of speak to this global audience as to how I got here, to motivate these women from India, from Nigeria, from New Zealand, right? And then I'm going to talk about the work I've done. So over the last 32 years I've said about 50% of my time has been in cyber security, 50% in data science, roughly. Sometimes it's more in cyber, sometimes more in data. So my work has been integrating the two areas, okay? So my talk, first I'm going to wear my data science hat, and as a data scientist I'm developing data science techniques, which is integration of statistical reasoning, machine learning, and data management. So applying data science techniques for cyber security applications. What are these applications? Intrusion detection, insider threat detection, email spam filtering, website fingerprinting, malware analysis, so that's going to be my first part of the talk, a couple of charts. But then I'm going to wear my cyber security hat. What does that mean? These data science techniques could be hacked. That's happening now, there are some attacks that have been published where the data science, the models are being thwarted by the attackers. So you can do all the wonderful data science in the world but if your models are thwarted and they go and do something completely different, it's going to be of no use. So I'm going to wear my cyber security hat and I'm going to talk about how we are taking the attackers into consideration in designing our data science models. It's not easy, it's extremely challenging. 
We are getting some encouraging results but it doesn't mean that we have solved the problem. Maybe we will never solve the problem but we want to get close to it. So this area called Adversarial Machine Learning, it started probably around five years ago, in fact our team has been doing some really good work for the Army, Army research office, on Adversarial Machine Learning. And when we started, I believe it was in 2012, almost six years ago, there weren't many people doing this work, but now, there are more and more. So practically every cyber security conference has got tracks in data science machine learning. And so their point of view, I mean, their focus is not, sort of, designing machine learning techniques. That's the area of data scientists. Their focus is going to be coming up with appropriate models that are going to take the attackers into consideration. Because remember, attackers are always trying to thwart your learning process. >> Right, we were just at Fortinet Accelerate last week, theCUBE was, and cyber security and data science are such interesting and pervasive topics, right, cyber security things when Equifax happened, right, it suddenly translates to everyone, male, female, et cetera. And the same thing with data science in terms of the social impact. I'd love your thoughts on how cyber security and data science, how you can educate the next generation and maybe even reinvigorate the women that are currently in STEM fields to go look at how much more open and many more opportunities there are for women to make massive impact socially. >> There are, I would say at this time, unlimited opportunities in both areas. Now, in data science it's really exploding because every company wants to do data science because data gives them the edge. But what's the point in having raw data when you cannot analyze? That's why data science is just exploding. And in fact, most of our graduate students, especially international students, want to focus in data science. So that's one thing. Cyber security is also exploding because every technology that is being developed, anything that has a microprocessor could be hacked. So, we can do all the great data science in the world but an attacker can thwart everything, right? And so cyber security is really crucial because you have to try and stop the attacker, or at least detect what the attacker is doing. So every step that you move forward you're going to be attacked. That doesn't mean you want to give up technology. One could say, okay, let's just forget about Facebook, and Google, and Amazon, and the whole lot and let's just focus on cyber security but we cannot. I mean we have to make progress in technology. Whenever we make for progress in technology, driver-less cars or pacemakers, these technologies could be attacked. And with cyber security there is such a shortage with the U.S. Government. And so we have substantial funding from the National Science Foundation to educate U.S. citizen students in cyber security. And especially recruit more women in cyber security. So that's why we're also focusing, we are a permanent coach here for the women in cyber security event. >> What have some of the things along that front, and I love that, that you think are key to successfully recruiting U.S. females into cyber security? What do you think speaks to them? >> So, I think what speaks to them, and we have been successful in recent years, this program started in 2010 for us, so it's about eight years. 
The first phase we did not have women, so 2000 to 2014, because we were trying to get this education program going, giving out the scholarships, then we got our second round of funding, but our program director said, look, you guys have done a phenomenal job in having students, educating them, and placing them with U.S. Government, but you have not recruited female students. So what we did then is to get some of our senior lecturers, a superb lady called Dr. Janelle Stratch, she can really speak to these women, so we started the Grace Lecture. And so with those events, and we started the women in cyber security center as part of my cyber security institute. Through these events we were able to recruit more women. We are, women are still under-represented in our cyber security program but still, instead of zero women, I believe now we have about five women, and that's, five, by the time we will have finished a second phase we will have total graduated about 50 plus students, 52 to 55 students, out of which, I would say about eight would be female. So from zero to go to eight is a good thing, but it's not great. >> We want to keep going, keep growing that. >> We want out of 50 we should get at least 25. But at least it's a start for us. But data science we don't have as much of a problem because we have lots of international students, remember you don't need U.S. citizenship to get jobs at Facebook or, but you need U.S. citizenships to get jobs as NSA or CIA. So we get many international students and we have more women and I would say we have, I don't have the exact numbers, but in my classes I would say about 30%, maybe just under 30%, female, which is encouraging but still it's not good. >> 30% now, right, you're right, it's encouraging. What was that 13 years ago when you started? >> When I started, before data science and everything it was more men, very few women. I would say maybe about 10%. >> So even getting to 30% now is a pretty big accomplishment. >> Exactly, in data science, but we need to get our cyber security numbers up. >> So last question for you as we have about a minute left, what are some of the things that excite you about having the opportunity, to not just mentor your students, but to reach such a massive audience as you're going to be able to reach through WiDS? >> I, it's as I said, words cannot express my honor and how pleased and touched, these are the words, touched I am to be able to talk to so many women, and I want to say why, because I'm of, I'm a tamil of Sri Lanka origin and so I had to make a journey, I got married and I'm going to talk about, at 20, in 1975 and my husband was finishing, I was just finishing my undergraduate in mathematics and physics, my husband was finishing his Ph.D. at University of Cambridge, England, and so soon after marriage, at 20 I moved to England, did my master's and Ph.D., so I joined University of Bristol and then we came here in 1980, and my husband got a position at New Mexico Petroleum Recovery Center and so New Mexico Tech offered me a tenure-track position but my son was a baby and so I turned it down. Once you do that, it's sort of hard to, so I took visiting faculty positions for three years in New Mexico then in Minneapolis, then I was a senior software developer at Control Data Corporation it was one of the big companies. Then I had a lucky break in 1985. So I wanted to get back into research because I liked development but I wanted to get back into research. '85 I became, I was becoming in the fall, a U.S. citizen. 
Honeywell got a contract to design and develop a research contract from United States Air Force, one of the early secure database systems and Honeywell had to interview me and they had to like me, hire me. All three things came together. That was a lucky break and since then my career has been just so thankful, so grateful. >> And you've turned that lucky break by a lot of hard work into what you're doing now. We thank you so much for stopping. >> Thank you so much for having me, yes. >> And sharing your story and we're excited to hear some of the things you're going to speak about later on. So have a wonderful rest of the conference. >> Thank you very much. >> We wanted to thank you for watching theCUBE. Again, we are live at Stanford University at the third annual Women in Data Science Conference, #WiDs2018, I am Lisa Martin. After this short break I'll be back with my next guest. Stick around. (light techno music)
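
The adversarial machine learning problem Thuraisingham describes — a model that performs well until an attacker deliberately perturbs its input — can be illustrated with the fast gradient sign method (FGSM). The sketch below is a generic PyTorch illustration (not code from her group's work): it nudges an input in the direction that increases the model's loss, the kind of perturbation that can flip a trained detector's prediction.

```python
# Generic FGSM sketch: perturb an input in the direction that increases the classifier's loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained detector (e.g., a spam or intrusion classifier over 20 features).
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # one input sample
y = torch.tensor([1])                       # its true label ("malicious")

# Forward and backward pass to obtain the gradient of the loss with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: move the input by epsilon in the sign of that gradient.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Against a trained model, perturbations like this often change the predicted class while remaining small; the defenses she alludes to typically fold such attacks back into training (adversarial training).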

Published Date : Mar 5 2018


Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud, and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward — I think Suresh and his team have done some phenomenal work around how to manage this complexity. But there are customers who have a less complex cloud, which is, they don't go to Alibaba, they don't have a footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but they still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exacerbated, because every cloud provider, be it AWS, GCP, or Azure, brings in hundreds of services, and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows it all, while technically people can claim that as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds, the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing, and Suresh's team has been experiencing, and we've been working together. But the common problems are around security, talent, and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures, all the fancy things that we want. That helps us go to market faster, be more competitive as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure, because all of what we are talking about is platform as a service, services on the cloud that we are consuming, right? In the same case with development, we've moved into a DevOps model. We kind of click a button, put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a SOC, a NOC, everything. But again, Suresh can talk about this more, because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner, about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peer. So the reason for that is I want Prem to have the same vested interest as a partner in my success, or JCI's success, and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that — it's really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket, it talks about the OpenBlue platform. This is what JCI is building — we are building this OpenBlue digital platform. And within that, my team, along with Prem's, or Hitachi's, we have built what we call Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale, I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only did they bring the talent, but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that are working for JCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how do you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us, and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering, and then even in the engineering spaces. And like that, they are my end-to-end partner right now. >> So you're really taking that holistic approach that you talked about, and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to — Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi Application Reliability Centers. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an evolution from an architecture standpoint to be more modern: microservices, server-less, reactive architecture, so on and so forth. There is an evolution in your development methodology, from Waterfall to agile, to DevOps, to lean agile, to whatever program, right? Extreme programming, so on and so forth. There is an evolution in the space of infrastructure, from a point where you were buying these huge, humongous servers and putting them in your data center, to a point where people don't even see servers anymore, right? You buy it by a click of a button, you don't know the size of it. All you know is, it's (indistinct) whatever that name means. Let's go provision it on the fly, get going, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my products has to be reliable, available and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value so I keep my cost low. So these three is what I would focus and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI. What you're doing together, we'll have to talk to you again to see where things go but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Mar 2 2023

SUMMARY :

In the next 15 minutes or so and pin points that you all the services we see. Talk to me Prem about some of the other in the episode as we move forward. that taming the complexity. and play in the market to our customers. that you talked about and it sounds Now the reason we thought about Harc was, and the inherent complexities But at the same time, we like a flywheel of innovation. What are the two things you want me especially in the Harc space, we pick for our end customers, and we are looking it sounds like, in the partner ecosystem. make sure that the customer's happy showing the audience how Thank you so much for watching.

SENTIMENT ANALYSIS :

ENTITIES

Entity                    Category          Confidence
Suresh                    PERSON            0.99+
Hitachi                   ORGANIZATION      0.99+
Lisa Martin               PERSON            0.99+
Suresh Mothikuru          PERSON            0.99+
Japan                     LOCATION          0.99+
Prem Balasubramanian      PERSON            0.99+
JCI                       ORGANIZATION      0.99+
Lisa                      PERSON            0.99+
Harc                      ORGANIZATION      0.99+
Johnson Controls          ORGANIZATION      0.99+
Dallas                    LOCATION          0.99+
India                     LOCATION          0.99+
Alibaba                   ORGANIZATION      0.99+
Hyderabad                 LOCATION          0.99+
Hitachi Vantara           ORGANIZATION      0.99+
Johnson Controls          ORGANIZATION      0.99+
Portugal                  LOCATION          0.99+
US                        LOCATION          0.99+
SCL                       ORGANIZATION      0.99+
Accenture                 ORGANIZATION      0.99+
both                      QUANTITY          0.99+
AWS                       ORGANIZATION      0.99+
two parts                 QUANTITY          0.99+
150 services              QUANTITY          0.99+
Second                    QUANTITY          0.99+
First                     QUANTITY          0.99+
next week                 DATE              0.99+
200 services              QUANTITY          0.99+
First question            QUANTITY          0.99+
Prem                      PERSON            0.99+
tomorrow                  DATE              0.99+
Polaris                   ORGANIZATION      0.99+
T&M                       ORGANIZATION      0.99+
hundreds of services      QUANTITY          0.99+
three things              QUANTITY          0.98+
three                     QUANTITY          0.98+
agile                     TITLE             0.98+

Deania Davidson, Dell Technologies & Dave Lincoln, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hey everyone and welcome back to Barcelona, Spain, it's theCUBE. We are live at MWC 23. This is day two of our coverage, we're giving you four days of coverage, but you already know that because you were here yesterday. Lisa Martin with Dave Nicholson. Dave this show is massive. I was walking in this morning and almost getting claustrophobic with the 80,000 people that are joining us. There is, seems to be at MWC 23 more interest in enterprise-class technology than we've ever seen before. What are some of the things that you've observed with that regard? >> Well I've observed a lot of people racing to the highest level messaging about how wonderful it is to have the kiss of a breeze on your cheek, and to feel the flowing wheat. (laughing) I want to hear about the actual things that make this stuff possible. >> Right. >> So I think we have a couple of guests here who can help us start to go down that path of actually understanding the real cool stuff that's behind the scenes. >> And absolutely we got some cool stuff. We've got two guests from Dell. Dave Lincoln is here, the VP of Networking and Emerging the Server Solutions, and Deania Davidson, Director Edge Server Product Planning and Management at Dell. So great to have you. >> Thank you. >> Two Daves, and a Davidson. >> (indistinct) >> Just me who stands alone here. (laughing) So guys talk about, Dave, we'll start with you the newest generation of PowerEdge servers. What's new? Why is it so exciting? What challenges for telecom operators is it solving? >> Yeah, well so this is actually Dell's largest server launch ever. It's the most expansive, which is notable because of, we have a pretty significant portfolio. We're very proud of our core mainstream portfolio. But really since the Supercompute in Dallas in November, that we started a rolling thunder of launches. MWC being part of that leading up to DTW here in May, where we're actually going to be announcing big investments in those parts of the market that are the growth segments of server. Specifically AIML, where we in, to address that. We're investing heavy in our XE series which we, as I said, we announced at Supercompute in November. And then we have to address the CSP segment, a big investment around the HS series which we just announced, and then lastly, the edge telecom segment which we're, we had the biggest investment, biggest announce in portfolio launch with XR series. >> Deania, lets dig into that. >> Yeah. >> Where we see the growth coming from you mentioned telecom CSPs with the edge. What are some of the growth opportunities there that organizations need Dell's help with to manage, so that they can deliver what they're demanding and user is wanting? >> The biggest areas being obviously, in addition the telecom has been the biggest one, but the other areas too we're seeing is in retail and manufacturing as well. And, so internally, I mean we're going to be focused on hardware, but we also have a solutions team who are working with us to build the solutions focused on retail, and edge and telecom as well on top of the servers that we'll talk about shortly. >> What are some of the biggest challenges that retailers and manufacturers are facing? And during the pandemic retailers, those that were successful pivoted very quickly to curbside delivery. >> Deania: Yeah. 
>> Those that didn't survive weren't able to do that digitally. >> Deania: Yeah. >> But we're seeing such demand. >> Yeah. >> At the retail edge. On the consumer side we want to get whatever we want right now. >> Yes. >> It has to be delivered, it has to be personalized. Talk a little bit more about some of the challenges there, within those two verticals and how Dell is helping to address those with the new server technologies. >> For retail, I think there's couple of things, the one is like in the fast food area. So obviously through COVID a lot of people got familiar and comfortable with driving through. >> Lisa: Yeah. >> And so there's probably a certain fast food restaurant everyone's pretty familiar with, they're pretty efficient in that, and so there are other customers who are trying to replicate that, and so how do we help them do that all, from a technology perspective. From a retail, it's one of the pickup and the online experience, but when you go into a store, I don't know about you but I go to Target, and I'm looking for something and I have kids who are kind of distracting you. Its like where is this one thing, and so I pull up the Target App for example, and it tells me where its at, right. And then obviously, stores want to make more money, so like hey, since you picked this thing, there are these things around you. So things like that is what we're having conversations with customers about. >> It's so interesting because the demand is there. >> Yeah, it is. >> And its not going to go anywhere. >> No. >> And it's certainly not going to be dialed down. We're not going to want less stuff, less often. >> Yeah (giggles) >> And as typical consumers, we don't necessarily make the association between what we're seeing in the palm of our hand on a mobile device. >> Deania: Right. >> And the infrastructure that's actually supporting all of it. >> Deania: Right. >> People hear the term Cloud and they think cloud-phone mystery. >> Yeah, magic just happens. >> Yeah. >> Yeah. >> But in fact, in order to support the things that we want to be able to do. >> Yeah. >> On the move, you have to optimize the server hardware. >> Deania: Yes. >> In certain ways. What does that mean exactly? When you say that its optimized, what are the sorts of decisions that you make when you're building? I think of this in the terms of Lego bricks. >> Yes, yeah >> Put together. What are some of the decisions that you make? >> So there were few key things that we really had to think about in terms of what was different from the Data center, which obviously supports the cloud environment, but it was all about how do we get closer to the customer right? How do we get things really fast and how do we compute that information really quickly. So for us, it's things like size. All right, so our server is going to weigh one of them is the size of a shoe box and (giggles), we have a picture with Dave. >> Dave: It's true. >> Took off his shoe. >> Its actually, its actually as big as a shoe. (crowd chuckles) >> It is. >> It is. >> To be fair, its a pretty big shoe. >> True, true. >> It is, but its small in relative to the old big servers that you see. >> I see what you're doing, you find a guy with a size 12, (crowd giggles) >> Yeah. >> Its the size of your shoe. >> Yeah. >> Okay. 
>> Its literally the size of a shoe, and that's our smallest server and its the smallest one in the portfolio, its the XR 4000, and so we've actually crammed a lot of technology in there going with the Intel ZRT processors for example to get into that compute power. The XR 8000 which you'll be hearing a lot more about shortly with our next guest is one I think from a telco perspective is our flagship product, and its size was a big thing there too. Ruggedization so its like (indistinct) certification, so it can actually operate continuously in negative 5 to 55 C, which for customers, or they need that range of temperature operation, flexibility was a big thing too. In meaning that, there are some customers who wanted to have one system in different areas of deployment. So can I take this one system and configure it one way, take that same system, configure another way and have it here. So flexibility was really key for us as well, and so we'll actually be seeing that in the next segment coming. >> I think one of, some of the common things you're hearing from this is our focus on innovation, purpose build servers, so yes our times, you know economic situation like in itself is tough yeah. But far from receding we've doubled down on investment and you've seen that with the products that we are launching here, and we will be launching in the years to come. >> I imagine there's a pretty sizeable day impact to the total adjustable market for PowerEdge based on the launch what you're doing, its going to be a tam, a good size tam expansion. >> Yeah, absolutely. Depending on how you look at it, its roughly we add about $30 Billion of adjustable tam between the three purposeful series that we've launched, XE, HS and XR. >> Can you comment on, I know Dell and customers are like this. Talk about, I'd love to get both of your perspective, I'm sure you have a favorite customer stories. But talk about the involvement of the customer in the generation, and the evolution of PowerEdge. Where are they in that process? What kind of feedback do they deliver? >> Well, I mean, just to start, one thing that is essential Cortana of Dell period, is it all is about the customer. All of it, everything that we do is about the customer, and so there is a big focus at our level, from on high to get out there and talk with customers, and actually we have a pretty good story around XR8000 which is call it our flagship of the XR line that we've just announced, and because of this deep customer intimacy, there was a last minute kind of architectural design change. >> Hm-mm. >> Which actually would have been, come to find out it would have been sort of a fatal flaw for deployment. So we corrected that because of this tight intimacy with our customers. This was in two Thanksgiving ago about and, so anyways it's super cool and the fact that we were able to make a change so late in development cycle, that's a testament to a lot of the speed and, speed of innovation that we're driving, so anyway that was that's one, just case of one example. >> Hm-mm. >> Let talk about AI, we can't go to any trade show without talking about AI, the big thing right now is ChatGPT. >> Yeah. >> I was using it the other day, it's so interesting. But, the growing demand for AI, talk about how its driving the evolution of the server so that more AI use cases can become more (indistinct). 
>> In the edge space primarily, we actually have another product, so I guess what you'll notice in the XR line itself because there are so many different use cases and technologies that support the different use cases. We actually have a range form factor, so we have really small, I guess I would say 350 ml the size of a shoe box, you know, Dave's shoe box. (crowd chuckles) And then we also have, at the other end a 472, so still small, but a little bit bigger, but we did recognize obviously AI was coming up, and so that is our XR 7620 platform and that does support 2 GPUs right, so, like for Edge infrencing, making sure that we have the capability to support customers in that too, but also in the small one, we do also have a GPU capability there, that also helps in those other use cases as well. So we've built the platforms even though they're small to be able to handle the GPU power for customers. >> So nice tight package, a lot of power there. >> Yes. >> Beside as we've all clearly demonstrated the size of Dave's shoe. (crowd chuckles) Dave, talk about Dell's long standing commitment to really helping to rapidly evolve the server market. >> Dave: Yeah. >> Its a pivotal payer there. >> Well, like I was saying, we see innovation, I mean, this is, to us its a race to the top. You talked about racing and messaging that sort of thing, when you opened up the show here, but we see this as a race to the top, having worked at other server companies where maybe its a little bit different, maybe more of a race to the bottom source of approach. That's what I love about being at Dell. This is very much, we understand that it's innovation is that is what's going to deliver the most value for our customers. So whether its some of the first to market, first of its kind sort of innovation that you find in the XR4000, or XR8000, or any of our XE line, we know that at the end of day, that is what going to propel Dell, do the best for our customers and thereby do the best for us. To be honest, its a little bit surprising walking by some of our competitors booths, there's been like a dearth of zero, like no, like it's almost like you wouldn't even know that there was a big launch here right? >> Yeah. >> Or is it just me? >> No. >> It was a while, we've been walking around and yet we've had, and its sort of maybe I should take this as a flattery, but a lot of our competitors have been coming by to our booth everyday actually. >> Deania: Yeah, everyday. >> They came by multiple times yesterday, they came by multiple times today, they're taking pictures of our stuff I kind of want to just send 'em a sample. >> Lisa: Or your shoe. >> Right? Or just maybe my shoe right? But anyway, so I suppose I should take it as an honor. >> Deania: Yeah. >> And conversely when we've walked over there we actually get in back (indistinct), maybe I need a high Dell (indistinct). (crowd chuckles) >> We just had that experience, yeah. >> Its kind of funny but. >> Its a good position to be in. >> Yeah. >> Yes. >> You talked about the involvement of the customers, talk a bit more about Dell's ecosystem is also massive, its part of what makes Dell, Dell. >> Wait did you say ego-system? (laughing) After David just. >> You caught that? Darn it! The talk about the influence or the part of the ecosystem and also some of the feedback from the partners as you've been rapidly evolving the server market and clearly your competitors are taking notice. >> Yeah, sorry. >> Deania: That's okay. >> Dave: you want to take that? 
>> I mean I would say generally, one of the things that Dell prides itself on is being able to deliver the worlds best innovation into the hands of our customers, faster and better that any other, the optimal solution. So whether its you know, working with our great partners like Intel, AMD Broadcom, these sorts of folks. That is, at the end of the day that is our core mantra, again its retractor on service, doing the best, you know, what's best for the customers. And we want to bring the world's best innovation from our technology partners, get it into the hands of our partners you know, faster and better than any other option out there. >> Its a satisfying business for all of us to be in, because to your point, I made a joke about the high level messaging. But really, that's what it comes down to. >> Lisa: Yeah. >> We do these things, we feel like sometimes we're toiling in obscurity, working with the hardware. But what it delivers. >> Deania: Hm-mm. >> The experiences. >> Dave: Absolutely. >> Deania: Yes. >> Are truly meaningful. So its a fun. >> Absolutely. >> Its a really fun thing to be a part of. >> It is. >> Absolutely. >> Yeah. Is there a favorite customer story that you have that really articulates the value of what Dell is doing, full PowerEdge, at the Edge? >> Its probably one I can't particularly name obviously but, it was, they have different environments, so, in one case there's like on flights or on sea vessels, and just being able to use the same box in those different environments is really cool. And they really appreciate having the small compact, where they can just take the server with them and go somewhere. That was really cool to me in terms of how they were using the products that we built for them. >> I have one that's kind of funny. It around XR8000. Again a customer I won't name but they're so proud of it, they almost kinds feel like they co defined it with us, they want to be on the patent with us so, anyways that's. >> Deania: (indistinct). >> That's what they went in for, yeah. >> So it shows the strength of the partnership that. >> Yeah, exactly. >> Of course, the ecosystem of partners, customers, CSVs, telecom Edge. Guys thank you so much for joining us today. >> Thank you. >> Thank you. >> Sharing what's new with the PowerEdge. We can't wait to, we're just, we're cracking open the box, we saw the shoe. (laughing) And we're going to be dealing a little bit more later. So thank you. >> We're going to be able to touch something soon? >> Yes, yes. >> Yeah. >> In couple of minutes? >> Next segment I think. >> All right! >> Thanks for setting the table for that guys. We really appreciate your time. >> Thank you for having us. >> Thank you. >> Alright, our pleasure. >> For our guests and for Dave Nicholson, I'm Lisa Martin . You're watching theCUBE. The leader in live tech coverage, LIVE in Barcelona, Spain, MWC 23. Don't go anywhere, we will be right back with our next guests. (gentle music)
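For readers curious what the edge inferencing Deania mentions looks like on a GPU-equipped node such as the XR 7620, here is a minimal, hedged sketch: it simply detects the CUDA GPUs PyTorch can see and runs a placeholder model on one of them. Nothing here is Dell-specific; the model and input batch are stand-in assumptions, and it presumes PyTorch with CUDA support is installed on the server.

```python
# Minimal GPU detection and inference-placement sketch for an edge node.
# Illustrative only: the model and inputs are placeholders, not a Dell workload.
import torch

def pick_device() -> torch.device:
    """Prefer a CUDA GPU when present, otherwise fall back to CPU."""
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
        return torch.device("cuda:0")
    return torch.device("cpu")

def run_inference(device: torch.device) -> torch.Tensor:
    # Placeholder model: a single linear layer standing in for a real edge model.
    model = torch.nn.Linear(128, 10).to(device).eval()
    batch = torch.randn(32, 128, device=device)  # stand-in for sensor/camera features
    with torch.no_grad():
        return model(batch)

if __name__ == "__main__":
    device = pick_device()
    output = run_inference(device)
    print(f"Ran a batch of {output.shape[0]} on {device}")
```

In a real deployment the placeholder model would be replaced by whatever vision or analytics workload the retailer or manufacturer is running at that site.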

Published Date : Feb 28 2023

SUMMARY :

At MWC 23 in Barcelona, Lisa Martin and Dave Nicholson talk with Dell's Dave Lincoln and Deania Davidson about Dell's largest-ever PowerEdge launch: the purpose-built XE, HS, and XR series, the shoebox-sized XR4000 and ruggedized XR8000 for telecom and edge deployments in retail and manufacturing, GPU support for edge AI inferencing, the role of customer and partner feedback in the designs, and roughly $30 billion of added addressable market.

SENTIMENT ANALYSIS :

ENTITIES

Entity                 Category           Confidence
Dave Nicholson         PERSON             0.99+
Dave                   PERSON             0.99+
Deania                 PERSON             0.99+
Lisa Martin            PERSON             0.99+
Lisa                   PERSON             0.99+
May                    DATE               0.99+
Dave Lincoln           PERSON             0.99+
David                  PERSON             0.99+
November               DATE               0.99+
Dell                   ORGANIZATION       0.99+
Cortana                TITLE              0.99+
350 ml                 QUANTITY           0.99+
Dallas                 LOCATION           0.99+
Target                 ORGANIZATION       0.99+
Dell Technologies      ORGANIZATION       0.99+
Intel                  ORGANIZATION       0.99+
Two                    QUANTITY           0.99+
XR 4000                COMMERCIAL_ITEM    0.99+
four days              QUANTITY           0.99+
80,000 people          QUANTITY           0.99+
two guests             QUANTITY           0.99+
XR 8000                COMMERCIAL_ITEM    0.99+
XR8000                 COMMERCIAL_ITEM    0.99+
55 C                   QUANTITY           0.99+
2 GPUs                 QUANTITY           0.99+
Deania Davidson        PERSON             0.99+
XR4000                 COMMERCIAL_ITEM    0.99+
yesterday              DATE               0.99+
today                  DATE               0.99+
two verticals          QUANTITY           0.99+
Barcelona, Spain       LOCATION           0.98+
both                   QUANTITY           0.98+
Lego                   ORGANIZATION       0.98+
one                    QUANTITY           0.98+
XR series              COMMERCIAL_ITEM    0.98+
one system             QUANTITY           0.98+
about $30 Billion      QUANTITY           0.97+
Supercompute           ORGANIZATION       0.97+
MWC                    EVENT              0.97+
zero                   QUANTITY           0.95+
5                      QUANTITY           0.95+
first                  QUANTITY           0.94+
MWC 23                 EVENT              0.94+
this morning           DATE               0.94+
telco                  ORGANIZATION       0.93+
one way                QUANTITY           0.93+
Davidson               ORGANIZATION       0.92+
couple                 QUANTITY           0.92+
two                    DATE               0.91+
Edge                   ORGANIZATION       0.91+

Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peers. So the reason for that is I want Prem to have the same vested interest as a partner in my success or JCI success and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket it talks about OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call as Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only they brought the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that's working for GCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then in even in the engineering spaces. And that like, they are my end-to-end partner right now? >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi application reliability sectors. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an illusion from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an illusion in your development methodology from Waterfall to agile, to DevOps to lean, agile to path program, whatever, right? Extreme program, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting it in your data center to a point where people don't even see servers anymore, right? You buy it, by a click of a button you don't know the size of it. All you know is a, it's (indistinct) whatever that name means. Let's go provision it on the fly, get go, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my products has to be reliable, available and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value so I keep my cost low. So these three is what I would focus and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI. What you're doing together, we'll have to talk to you again to see where things go but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)
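As a side note for readers, the cloud governance point Suresh keeps returning to (cost "will eat you up if you cannot manage it") can start with something as small as the sketch below: a month-to-date spend report by service, pulled from AWS Cost Explorer. This is an illustrative sketch only, not JCI or Hitachi Vantara code; the budget threshold is an assumed placeholder, it assumes AWS credentials with Cost Explorer access, and a multi-cloud estate like JCI's would need equivalent checks for Azure, GCP, and Alibaba.

```python
# Minimal month-to-date cost report, assuming AWS credentials with Cost Explorer access.
# Illustrative only -- the threshold and grouping are placeholders, not JCI/Hitachi settings.
import datetime
import boto3

ALERT_THRESHOLD_USD = 5000.0  # assumed per-service monthly budget

def month_to_date_costs():
    today = datetime.date.today()
    start = today.replace(day=1)  # note: Cost Explorer requires End > Start, so run after day 1
    ce = boto3.client("ce")  # AWS Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs = {}
    for result in resp["ResultsByTime"]:
        for group in result["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs

if __name__ == "__main__":
    for service, amount in sorted(month_to_date_costs().items(), key=lambda kv: -kv[1]):
        flag = "  <-- over assumed budget" if amount > ALERT_THRESHOLD_USD else ""
        print(f"{service:45s} ${amount:12,.2f}{flag}")
```

In practice a report like this is only the starting point; the governance process he describes would also cover tagging rules, budgets, and automated alerts across each cloud.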

Published Date : Feb 27 2023

SUMMARY :

Lisa Martin, Prem Balasubramanian of Hitachi Vantara, and Suresh Mothikuru of Johnson Controls discuss redefining cloud operations: taming multi-cloud complexity, cost, and talent gaps; how Harc (Hitachi Application Reliability Centers) and a future-focused partner ecosystem support JCI's OpenBlue platform; and what both companies ask of partners, namely simplification for end customers, reliability, security, and cost value.

SENTIMENT ANALYSIS :

ENTITIES

Entity                    Category          Confidence
Suresh                    PERSON            0.99+
Hitachi                   ORGANIZATION      0.99+
Lisa Martin               PERSON            0.99+
Suresh Mothikuru          PERSON            0.99+
Japan                     LOCATION          0.99+
Prem Balasubramanian      PERSON            0.99+
JCI                       ORGANIZATION      0.99+
Lisa                      PERSON            0.99+
Harc                      ORGANIZATION      0.99+
Johnson Controls          ORGANIZATION      0.99+
Dallas                    LOCATION          0.99+
India                     LOCATION          0.99+
Alibaba                   ORGANIZATION      0.99+
Hyderabad                 LOCATION          0.99+
Hitachi Vantara           ORGANIZATION      0.99+
Johnson Controls          ORGANIZATION      0.99+
Portugal                  LOCATION          0.99+
US                        LOCATION          0.99+
SCL                       ORGANIZATION      0.99+
Accenture                 ORGANIZATION      0.99+
both                      QUANTITY          0.99+
AWS                       ORGANIZATION      0.99+
two parts                 QUANTITY          0.99+
150 services              QUANTITY          0.99+
Second                    QUANTITY          0.99+
First                     QUANTITY          0.99+
next week                 DATE              0.99+
200 services              QUANTITY          0.99+
First question            QUANTITY          0.99+
Prem                      PERSON            0.99+
tomorrow                  DATE              0.99+
Polaris                   ORGANIZATION      0.99+
T&M                       ORGANIZATION      0.99+
hundreds of services      QUANTITY          0.99+
three things              QUANTITY          0.98+
three                     QUANTITY          0.98+
agile                     TITLE             0.98+

Prem Balasubramanian & Suresh Mothikuru


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peers. So the reason for that is I want Prem to have the same vested interest as a partner in my success or JCI success and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket it talks about OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call as Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only they brought the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that's working for GCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then in even in the engineering spaces. And that like, they are my end-to-end partner right now? >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi application reliability sectors. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an illusion from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an illusion in your development methodology from Waterfall to agile, to DevOps to lean, agile to path program, whatever, right? Extreme program, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting it in your data center to a point where people don't even see servers anymore, right? You buy it, by a click of a button you don't know the size of it. All you know is a, it's (indistinct) whatever that name means. Let's go provision it on the fly, get go, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my product has to be reliable, available and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my cost low. So those three are what I would focus on and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI. We'll have to talk to you again to see where what you're doing together goes, but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Feb 24 2023


Mohan Rokkam & Greg Gibby | 4th Gen AMD EPYC on Dell PowerEdge: Virtualization


 

(cheerful music) >> Welcome to theCUBE's continuing coverage of AMD's 4th Generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, senior product manager, data center products from AMD, and Mohan Rokkam, technical marketing engineer at Dell. Welcome, gentlemen. >> Mohan: Hello, hello. >> Greg: Thank you. Glad to be here. >> Good to see each of you. Just really quickly, I want to start out. Let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell exactly? >> So I'm a technical marketing engineer at Dell. I've been with Dell for around 15 years now and my goal is to really look at the Dell powered servers and see how do customers take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out. >> Greg, and what do you do at AMD? >> Yeah, so I manage our software-defined infrastructure solutions team, and really it's a cradle to grave where we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features that we're putting into our processors and make sure they're ready to go and enabled. And then we work with our valued partners like Dell on putting those into actual solutions that customers can buy and then we work with them to sell those solutions into the market. >> Before we get into the details on the 4th Generation EPYC launch and what that means and why people should care. Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works, and then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan. >> Absolutely. Dell and AMD have a long standing partnership, right? Especially now with EPYC series. We have had products since EPYC first generation. We have been doing solutions across the whole range of Dell ecosystem. We have integrated AMD quite thoroughly and effectively and we really love how performant these systems are. So, yeah. >> Dave: Greg, what are your thoughts? >> Yeah, I would say the other thing too is, is that we need to point out is that we both have really strong relationships across the entire ecosystem. So memory vendors, the software providers, et cetera, we have technical relationships. We're working with them to optimize solutions so that ultimately when the customer buys that, they get a great user experience right out of the box. >> So, Mohan, I know that you and your team do a lot of performance validation testing as time goes by. I suspect that you had early releases of the 4th Gen EPYC processor technology. What have you been seeing so far? What can you tell us? >> AMD has definitely knocked it out of the park. Time and again, in the past four generations, in the past five years alone, we have done some database work where in five years, we have seen five exit performance. And across the board, AMD is the leader in benchmarks. We have done virtualization where we would consolidate from five into one system. We have world records in AI, we have world records in databases, we have world records in virtualization. The AMD EPYC solutions has been absolutely performant. I'll leave you with one number here. When we went from top of Stack Milan to top of Stack Genoa, we saw a performance bump of 120%. And that number just blew my mind. >> So that prompts a question for Greg. Often we, in industry insiders, think in terms of performance gains over the last generation or the current generation. 
A lot of customers in the real world, however, are N - 2. They're a ways back, so I guess two points on that. First of all, the kinds of increases the average person is going to see when they move to this architecture, correct me if I'm wrong, but it's even more significant than a lot of the headline numbers because they're moving two generations, number one. Correct me if I'm wrong on that, but then the other thing is the question to you, Greg. I like very long complicated questions, as you can tell. The question is, is it okay for people to skip generations, or make the case for upgrades, I guess, is the question? >> Well, yeah, so a couple of thoughts on that. Mohan talked about that 5x improvement over the generations that we've seen. The other key point is that we've made significant process improvements along the way, moving from seven nanometer to now five nanometer, and that's really reducing the total amount of power and improving the performance per watt that customers can realize as well. And when we look at why a customer would want to upgrade, right? I want to rephrase that as, why aren't you? There is a real cost of not upgrading. So when you look at infrastructure, the average age of a server in the data center is over five years old. And if you look at the most popular processors that were sold in that timeframe, it's 8, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver the applications and meet your SLAs to your end users, and all those servers pull power. They require maintenance. They have the opportunity to go down, et cetera. You've got to pay licensing and service and support costs and all of those. And when you look at all the costs that roll up, even though the hardware is paid for, just to keep the lights on, and not even talking about the soft costs of unplanned downtime and "I'm not meeting your SLAs," et cetera, it's very expensive to keep those servers running. Now, if you refresh, and you have processors that have 32, 64, 96 cores, you can consolidate that infrastructure and reduce your total power bill. You can reduce your CapEx, you reduce your ongoing OpEx, you improve your performance, and you improve your security profile. So it really is more cost effective to refresh than not to refresh. >> So, Mohan, what has your experience been, double-clicking on this topic of consolidation? I know that we're going to talk about virtualization in some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines? >> Greg definitely hit the nail on the head, right? We are seeing tremendous savings, really, if you're consolidating from two generations old. We went from, as I said, five to one. You're going from five full servers, probably paid off, down to one single server. That by itself, if you look at licensing costs, which, with things like VMware, do get pretty expensive. If you move to a single system, yes, we are at 32, 64, 96 cores, but if you compare that to the licensing costs of 10 cores, two sockets, the savings are still pretty significant, right? That's one huge thing. Another thing which really drives this is security, and in today's environment, security becomes a major driving factor for upgrades.
Dell has its own cyber-resilient architecture, as we call it, and that really is integrated from the processor all the way up into the OS. And those are some of the features which customers really can take advantage of to help protect their ecosystems. >> So what kinds of virtualized environments did you test? >> We have done virtualization primarily with VMware, but also the Azure Stack, and we have looked at Nutanix. PowerFlex is another one within Dell. We have vSAN Ready Nodes. All of these, OpenShift, we have a broad variety of solutions from Dell, and AMD really fits into almost every one of them very well. >> So where does hyper-converged infrastructure fit into this puzzle? We can think of a server as something that contains not only AMD's latest architecture but also the latest PCIe bus technology and all of the faster memory, faster storage cards, faster NICs, all of that comes together. But how does that play out in Dell's hyper-converged infrastructure or HCI strategy? >> Dell is a leader in hyper-converged infrastructure. We have the very popular VxRail line, we have PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course, Azure Stack. With all of these, when you look at AMD, we have up to 96 cores coming in. We have PCIe Gen 5, which means you can now connect dual-port 100 and 200 gig NICs and get line rate on those, so you can connect to your ecosystem. And I don't know if you've seen the news, 200 and 400 gig routers and switches are selling out. That's not slowing down. The network infrastructure is booming. If you want to look at the AI/ML side of things, the VDI side of things, accelerator cards are becoming more and more powerful, more and more popular. And of course they need that higher-end data path that PCIe Gen 5 brings to the table. DDR5 is another huge improvement in terms of performance and latencies. So when we take all this together, you talk about hyper-converged: A, with hyper-converged you get ease of management, but B, just 'cause you have ease of management doesn't mean you need to compromise on anything. And the AMD servers effectively are a no-compromise offering that we at Dell are able to offer to our customers. >> So Greg, I've got a question a little bit from left field for you. We covered Supercompute Conference 2022. We were in Dallas a couple of weeks ago, and there was a lot of discussion of the current processor manufacturer battles, and a lot of buzz around 4th Gen EPYC being launched and what's coming over the next year. Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world? >> Yeah, yeah, yeah, yeah. It has the real potential to do that from just the building blocks. So we have our chiplet architecture, as we call it. You have an IO die and then you have your core complexes that go around that. And we integrate it all with our Infinity Fabric. That architecture allows us, if we wanted to, to replace some of those CCDs with specific accelerators. And so when we look two, three, four years down the road, that architecture and that capability are already built into what we're delivering and can easily be moved in.
We just need to make sure that when you look at doing that, that the power that's required to do that and the software, et cetera, and those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs. The other things that I would say too is if you look at emerging workloads. So data center modernization is one of the buzzwords in cloud native, right? And these container environments, well, AMD'S architecture really just screams support for those type of environments, right? Where when you get into these larger core accounts and the consolidation that Mohan talked about. Now when I'm in a container environment, that blast radius so a lot of customers have concerns around, "Hey, having a single point of failure and having more than X number of cores concerns me." If I'm in containers, that becomes less of a concern. And so when you look at cloud native, containerized applications, data center modernization, AMD's extremely well positioned to take advantage of those use cases as well. >> Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that yeah, we're talking about not only virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan. >> I did, and I think going back to the accelerator side of business, right? When we are looking at the current technology and looking at accelerators, AMD has done a fantastic job of adding in features like AVX-512, we have the bfloat16 and eight features. And some of what these do is they're effectively built-in accelerators for certain workloads especially in the AI and media spaces. And in some of these use cases we look at, for example, are inference. Traditionally we have used external accelerator cards, but for some of the entry level and mid-level use cases, CPU is going to work just fine especially with the newer CPUs that we are seeing this fantastic performance from. The accelerators just help get us to the point where if I'm at the edge, if I'm in certain use cases, I don't need to have an accelerator in there. I can run most of my inference workloads right on the CPU. >> Yeah, yeah. You know the game. It's an endless chase to find the bottleneck. And once we've solved the puzzle, we've created a bottleneck somewhere else. Back to the supercompute conversations we had, specifically about some of the AMD EPYC processor technology and the way that Dell is packaging it up and leveraging things like connectivity. That was one of the things that was also highlighted. This idea that increasingly connectivity is critically important, not just for supercomputing, but for high-performance computing that's finding its way out of the realms of Los Alamos and down to the enterprise level. Gentlemen, any more thoughts about the partnership or maybe a hint at what's coming in the future? I know that the original AMD announcement was announcing and previewing some things that are rolling out over the next several months. So let me just toss it to Greg. What are we going to see in 2023 in terms of rollouts that you can share with us? >> That I can share with you? Yeah, so I think look forward to see more advancements in the technology at the core level. I think we've already announced our product code name Bergamo, where we'll have up to 128 cores per socket. 
And then as we look in, how do we continually address this demand for data, this demand for, I need actionable insights immediately, look for us to continue to drive performance leadership in our products that are coming out and address specific workloads and accelerators where appropriate and where we see a growing market. >> Mohan, final thoughts. >> On the Dell side, of course, we have four very rich and configurable options with AMD EPYC servers. But beyond that, you'll see a lot more solutions. Some of what Greg has been talking about around the next generation of processors or the next updated processors, you'll start seeing some of those. and you'll definitely see more use cases from us and how customers can implement them and take advantage of the features that. It's just exciting stuff. >> Exciting stuff indeed. Gentlemen, we have a great year ahead of us. As we approach possibly the holiday seasons, I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's 4th Generation EPYC launch. Thanks for joining us. (cheerful music)
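The refresh economics described above reduce to simple arithmetic: fewer, denser servers cut power and per-socket licensing costs before any performance gain is counted. The sketch below illustrates that math under a 5:1 consolidation like the one Mohan mentions; every other input (wattages, electricity price, license cost, socket counts) is an assumed placeholder, not a figure from the interview or from Dell/AMD testing.

```python
# Hypothetical consolidation estimate -- all numeric inputs below are
# illustrative assumptions, not benchmark or pricing data.

def consolidation_summary(
    old_servers: int = 5,           # legacy hosts being retired (5:1 ratio from the discussion)
    new_servers: int = 1,
    old_watts: float = 500.0,       # assumed average draw per legacy host, watts
    new_watts: float = 800.0,       # assumed draw of one dense current-gen host
    kwh_price: float = 0.20,        # assumed electricity price, $/kWh
    license_per_socket: float = 4000.0,  # assumed per-socket virtualization license, $/year
    old_sockets_per_server: int = 2,
    new_sockets_per_server: int = 2,
    years: int = 3,
) -> dict:
    """Rough multi-year power and licensing comparison for a server refresh."""
    hours = 24 * 365 * years
    old_energy = old_servers * old_watts / 1000 * hours * kwh_price
    new_energy = new_servers * new_watts / 1000 * hours * kwh_price
    old_licenses = old_servers * old_sockets_per_server * license_per_socket * years
    new_licenses = new_servers * new_sockets_per_server * license_per_socket * years
    return {
        "energy_cost_old": round(old_energy, 2),
        "energy_cost_new": round(new_energy, 2),
        "license_cost_old": round(old_licenses, 2),
        "license_cost_new": round(new_licenses, 2),
        "total_savings": round((old_energy + old_licenses) - (new_energy + new_licenses), 2),
    }

if __name__ == "__main__":
    print(consolidation_summary())
```

Plugging a real fleet's inventory and local power rates into these assumptions is the quickest way to check whether the "more cost effective to refresh" claim holds for a given environment.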

Published Date : Dec 14 2022


Seamus Jones & Milind Damle


 

>>Welcome to the Cube's Continuing coverage of AMD's fourth generation Epic launch. I'm Dave Nicholson and I'm joining you here in our Palo Alto Studios. We have two very interesting guests to dive into some of the announcements that have been made and maybe take a look at this from an AI and ML perspective. Our first guest is Milland Doley. He's a senior director for software and solutions at amd, and we're also joined by Shamus Jones, who's a director of server engineering at Dell Technologies. Welcome gentlemen. How are you? >>Very good, thank >>You. Welcome to the Cube. So let's start out really quickly, Shamus, what, give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions and ensures that we can look at, you know, the performance metrics, benchmarks, and performance characteristics, so that way we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy Power Edge from Dell. >>Milland, how about you? What's, what's new at a M D? What do you do there? >>Great to be here. Thank you for having me at amd, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long winter way of saying we do a lot of benchmarks, improved performance and demonstrate with wonderful partners such as Shamus and Dell, the combined leverage that AMD four generation processes and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Shamus, talk about that relationship a little bit more. The relationship between a M D and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. So, you know, ever since AM MD reentered the server space, we've had a very close relationship. You know, it's one of those things where we are offering solutions that are out there to our customers no matter what generation A portfolio, if they're, if they're demanding either from their competitor or a m d, we offer a portfolio solutions that are out there. What we're finding is that within their generational improvements, they're just getting better and better and better. Really exciting things happening from a m D at the moment, and we're seeing that as we engineer those CPU stacks into our, our server portfolio, you know, we're really seeing unprecedented performance across the board. So excited about the, the history, you know, my team and Lin's team work very closely together, so much so that we were communicating almost on a daily basis around portfolio platforms and updates around the, the, the benchmarks testing and, and validation efforts. >>So Melind, are you happy with these PowerEdge boxes that Seamus is building to, to house, to house your baby? >>We are delighted, you know, it's hard to find stronger partners than Shamus and Dell with AMD's, second generation epic service CPUs. We already had undisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead with competition. We've got so many outstanding features at the platform, at the CPU level, everybody focuses on the high core counts, but there's also the DDR five, the memory, the io, and the storage subsystem. 
So we believe we have a fantastic performance and performance per dollar performance per what edge over competition, and we look to partners such as Dell to help us showcase that leadership. >>Well. So Shay Yeah, through Yeah, go ahead >>Dave. What, what I'd add, Dave, is that through the, the partnership that we've had, you know, we've been able to develop subsystems and platform features that historically we couldn't have really things around thermals power efficiency and, and efficiency within the platform. That means that customers can get the most out of their compute infrastructure. >>So this is gonna be a big question moving forward as next generation platforms are rolled out, there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a, in a physical enclosure versus 96 cores, and, and I guess the, the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Shamus, you wanna, you wanna hit that first or you guys are integrated? >>Absolutely, yeah, sorry. Absolutely. So we, I'll tell you what, at the moment, customers really can't afford not to upgrade at the moment, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five or seven year old infrastructure servers that are, that are drawing more power maybe are, are poorly utilized within the infrastructure and take more and more effort and time to manage, maintain and, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding right is that they can take a dynamic consolidation sometimes 5, 7, 8 to one consolidation depending on which platform they have as a historical and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Lin's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from things like TP C X AI as a framework, and I'm talking about here specifically the CPU U based performance. >>Even though in a lot of those AI frameworks, you would also expect to have GPUs, which all of the four platforms that we're offering on the AM MD portfolio today offer multiple G P U offerings. So we're seeing a balance between a huge amount of C P U gain and performance, as well as more and more GPU offerings within the platform. That was real, that was a real challenge for us because of the thermal challenges. I mean, you think GPUs are going up 300, 400 watt, these CPUs at 96 core are, are quite demanding thermally, but what we're able to do is through some, some unique smart cooling engineering within the, the PowerEdge portfolio, we can take a look at those platforms and make the most efficient use case by having things like telemetry within the platform so that way we can dynamically change fan speeds to get customers the best performance without throttling based on their need. >>Melin the cube was at the Supercomputing conference in Dallas this year, supercomputing conference 2022, and a lot of the discussion was around not only advances in microprocessor technology, but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party? 
It's kind of a potluck, you know, we, we, we, we mentioned P C I E Gen five or 5.0, whatever you want to call it, new DDR storage cards, Nicks, accelerators, all of those, all of those things. How do you keep that straight when those aren't things that you actually build? >>Well, excellent question, Dave. And you know, as we are developing the next platform, obviously the, the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket, we are looking at it from the memory subsystem from the IO subsystem, P c i lanes for storage is a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are, you know, more important at the higher end for some customers h HPC space, some of the AI applications. But on the lower end you have database applications or some other is s v applications that care a lot about those. So it's, I guess different things matter to different folks across verticals. >>So we partnered with Dell very early in the cycle, and it's really a joint co-engineering. Shamus talked about the focus on AI with TP C X xci, I, so we set five world records in that space just on that one benchmark with AD and Dell. So fantastic kick kick off to that across a multitude of scale factors. But PPP c Xci is not just the only thing we are focusing on. We are also collaborating with Dell and des e i on some of the transformer based natural language processing models that we worked on, for example. So it's not just a steep CPU story, it's CPU platform, es subsystem software and the whole thing delivering goodness across the board to solve end user problems in AI and and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective. So I know it's easy to get excited about world records and, and they're, they're fantastic. I know Shamus, you know, that, you know, end user customers might, might immediately have the reaction, well, I don't need a Ferrari in my data center, or, you know, what I need is to be able to do more with less. Well, aren't we delivering that also? And you know, you imagine you milland you mentioned natural, natural language processing. Shamus, are you thinking in 2023 that a lot more enterprises are gonna be able to afford to do things like that? I mean, what are you hearing from customers on this front? >>I mean, while the adoption of the top bin CPU stack is, is definitely the exception, not the rule today we are seeing marked performance, even when we look at the mid bin CPU offerings from from a m d, those are, you know, the most common sold SKUs. And when we look at customers implementations, really what we're seeing is the fact that they're trying to make the most, not just of dollar spend, but also the whole subsystem that Melin was talking about. You know, the fact that balanced memory configs can give you marked performance improvements, not just at the CPU level, but as actually all the way through to the, to the application performance. So it's, it's trying to find the correct balance between the application needs, your budget, power draw and infrastructure within the, the data center, right? 
Because not only could you, you could be purchasing and, and look to deploy the most powerful systems, but if you don't have an infrastructure that's, that's got the right power, right, that's a large challenge that's happening right now and the right cooling to deal with the thermal differences of the systems, might you wanna ensure that, that you can accommodate those for not just today but in the future, right? >>So it's, it's planning that balance. >>If I may just add onto that, right? So when we launched, not just the fourth generation, but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Shamus correctly said, it's not just that one core count opn, it's, it's the whole stack. And we believe with our four gen CPU processor stack, we've simplified things so much. We don't have, you know, dozens and dozens of offerings. We have a fairly simple skew stack, but we also have a very efficient skew stack. So even, even though at the top end we've got 96 scores, the thermal budget that we require is fairly reasonable. And look, with all the energy crisis going around, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per want. And so we believe with this generation, we really delivered not just on raw performance, but also on performance per dollar and performance per one. >>Yeah. And it's not just Europe, I'm, we're, we are here in Palo Alto right now, which is in California where we all know the cost of an individual kilowatt hour of electricity because it's quite, because it's quite high. So, so thermals, power cooling, all of that, all of that goes together and that, and that drives cost. So it's a question of how much can you get done per dollar shame as you made the point that you, you're not, you don't just have a one size fits all solution that it's, that it's fit for function. I, I'm, I'm curious to hear from you from the two of you what your thoughts are from a, from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public, some will write stories for you based on prom, some will create images for you. One of the more popular ones will create sort of a, your superhero alter ego for, I, I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things. They can think on their own in a certain way. W what do, what do you see the future of that looking like over the next year in terms of enterprises, what they're going to do for it with it >>Melan? Yeah, I can go first. Yeah, yeah, yeah, yeah, >>Sure. Yeah. Good. >>So the couple of examples, Dave, that you mentioned are, I, I guess it's a blend of novelty and curiosity. You know, people using AI to write stories or poems or, you know, even carve out little jokes, check grammar and spelling very useful, but still, you know, kind of in the realm of novelty in the mainstream, in the enterprise. Look, in my opinion, AI is not just gonna be a vertical, it's gonna be a horizontal capability. 
We are seeing AI deployed across the board once the models have been suitably trained for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets in manufacturing to things like image classification or object detection that you talked about in, in the sort of a core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch, but we really look at AI emerging as a horizontal capability and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. There's an, an AI as an outcome is really something that companies, I, I think of it in the fact that they're adopting that and the frameworks that you're now seeing as the novelty pieces that Melin was talking about is, is really indicative of the under the covers activity that's been happening within infrastructures and within enterprises for the past, let's say 5, 6, 7 years, right? The fact that you have object detection within manufacturing to be able to, to be able to do defect detection within manufacturing lines. Now that can be done on edge platforms all the way at the device. So you're no longer only having to have things be done, you know, in the data center, you can bring it right out to the edge and have that high performance, you know, inferencing training models. Now, not necessarily training at the edge, but the inferencing models especially, so that way you can, you know, have more and, and better use cases for some of these, these instances things like, you know, smart cities with, with video detection. >>So that way they can see, especially during covid, we saw a lot of hospitals and a lot of customers that were using using image and, and spatial detection within their, their video feeds to be able to determine who and what employees were at risk during covid. So there's a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting and I, I know my kids, my daughters love that, that portion of it, but really what's been happening has been exciting for quite a, quite a period of time in the enterprise space. We're just now starting to actually see those come to light in more of a, a consumer relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in because we do have more powerful compute at our fingertips. We do have the ability to talk more about the framework and infrastructure that's that's right out at the edge. You know, I know Dave in the past you've said things like the data center of, you know, 20 years ago is now in my hand as, as my cell phone. That's right. And, and that's, that's a fact and I'm, it's exciting to think where it's gonna be in the next 10 or 20 years. >>One terabyte baby. Yeah. One terabyte. Yeah. It's mind bo. Exactly. It's mind boggling. Yeah. And it makes me feel old. >>Yeah, >>Me too. And, and that and, and Shamus, that all sounded great. A all I want is a picture of me as a superhero though, so you guys are already way ahead of the curve, you know, with, with, with that on that note, Seamus wrap us up with, with a, with kind of a summary of the, the highlights of what we just went through in terms of the performance you're seeing out of this latest gen architecture from a md. >>Absolutely. 
So within the TPC xai frameworks that Melin and my team have worked together to do, you know, we're seeing unprecedented price performance. So the fact that you can get 220% uplift gen on gen for some of these benchmarks and, you know, you can have a five to one consolidation means that if you're looking to refresh platforms that are historically legacy, you can get a, a huge amount of benefit, both in reduction in the number of units that you need to deploy and the, the amount of performance that you can get per unit. You know, Melinda had mentioned earlier around CPU performance and performance per wat, specifically on the Tu socket two U platform using the fourth generation a m d Epic, you know, we're seeing a 55% higher C P U performance per wat that is that, you know, when for people who aren't necessarily looking at these statistics, every generation of servers, that that's, that is a huge jump leap forward. >>That combined with 121% higher spec scores, you know, as a benchmark, those are huge. Normally we see, let's say a 40 to 60% performance improvement on the spec benchmarks, we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing, you know, large percentile improvements across the mid bins as well, you know, things in the range of like 70 to 90% performance improvements in those standard bins. So it, it's a, it's a huge performance improvement, a power efficiency, which means customers are able to save energy, space and time based on, on their deployment size. >>Thanks for that Shamus, sadly, gentlemen, our time has expired. With that, I want to thank both of you. It's a very interesting conversation. Thanks for, thanks for being with us, both of you. Thanks for joining us here on the Cube for our coverage of AMD's fourth generation Epic launch. Additional information, including white papers and benchmarks plus editorial coverage can be found on does hardware matter.com.
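Seamus quotes a 55% higher CPU performance per watt and a 121% higher SPEC-style score generation over generation. Those two figures relate in a simple way: the absolute score and the efficiency ratio together imply the power envelope of the newer box. The snippet below walks through that relationship; the two percentages are the ones quoted in the conversation, while the baseline score and wattage are invented placeholders used only to make the arithmetic concrete.

```python
# Minimal sketch of how a score uplift and a perf-per-watt uplift combine.
# The 55% and 121% figures are quoted above; the baseline values are assumptions.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

baseline_score = 1000.0   # assumed SPEC-style score of a prior-gen 2U server
baseline_watts = 700.0    # assumed average power draw under load, watts

new_score = baseline_score * 2.21                               # "121% higher" score
new_ppw = perf_per_watt(baseline_score, baseline_watts) * 1.55  # "55% higher" perf/watt
new_watts = new_score / new_ppw                                 # implied draw of the newer system

print(f"old: {perf_per_watt(baseline_score, baseline_watts):.2f} score/W at {baseline_watts:.0f} W")
print(f"new: {new_ppw:.2f} score/W, implying roughly {new_watts:.0f} W for a {new_score:.0f} score")
```

The point of the exercise is that a large score uplift can coexist with a higher absolute draw as long as performance per watt rises, which is exactly the trade-off discussed above.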

Published Date : Dec 9 2022


Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers


 

(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess, what does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. And we do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so pretty much end-to-end work, doing research testing and writing, and diving into different technical topics. >> So you- in this case what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happened to be integrating in fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? You know, obviously it's pretty much ubiquitous in the industry, everybody works with virtualization in one way or another. So just getting optimal performance for virtualization was critical, or is critical for most businesses. So we just wanted to look a little deeper into, you know, how do companies evaluate that? What are they going to use to make the determination for virtualization performance as it relates to their workloads? So that led us to this study, where we looked at some benchmarks, and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of what about database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations and so forth, where it's common to roll out lots of virtual desktops, and performance is critical there as well. >> Okay, you alluded to, sort of, looking under the covers to see, you know, where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption and- >> Yeah, absolutely. >> What can you tell us? >> Well, you know, for companies evaluating, there's quite a bit to consider, obviously. So they're looking at not just raw performance but power performance. So that was part of it, and then what makes up that- those factors, right? So certainly CPU is critical to that, but then other things come into play, like the RAID controllers. So we looked a little bit there. And then networking, of course can be critical for configurations that are relying on good performance on their networks, both in terms of bandwidth and just reducing latency overall. So interconnects as well would be a big part of that. So with, with PCIe gen 5 or 5.0 pick your moniker. You know in this- in the infrastructure game, we're often playing a game of whack-a-mole, looking for the bottlenecks, you know, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan, have we reached a point where there are no bottlenecks? What did you see when you ran these tests? 
What, you know, what were you able to stress to a point where it was saturated, if anything? >> Yeah. Well, first of all, we didn't- these are particular tests were ones that we looked at industry benchmarks, and we were examining in particular to see where world records were set. And so we uncovered a few specific servers, PowerEdge servers that were pretty key there, or had a lot of- were leading in the category in a lot of areas. So that's what led us to then, okay, well why is that? What's in these servers, and what's responsible for that? So in a lot of cases they, we saw these results even with, you know, gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects and, and especially NVMe for RAID, you know, for supporting NVMe and SSDs. But all of that just leads you to the understanding that it means it can only get better, right? So going from gen 4 to- if you're seeing great results on gen 4, then gen 5 is probably going to be, you know, blow that away. >> And in this case, >> It'll be even better. >> In this case, gen 5 you're referencing PCIe >> PCIe right. Yeah, that's right. >> (indistinct) >> And then the same thing with EPYC actually holds true, some of the records, we saw records set for both 3rd and 4th gen, so- with EPYC, so the same thing there. Anywhere there's a record set on the 3rd gen, you know, makes us really- we're really looking forward to going back and seeing over the next few months, which of those records fall and are broken by newer generation versions of these servers, once they actually wrap to the newer generation processors. You know, based on, on what we're seeing for the- for what those processors can do, not only in. >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance, but as I mentioned before, the power performance, 'cause they're very efficient, and that's a really critical consideration, right? I don't think you can overstate that for companies who are looking at, you know, have to consider expenditures and power and cooling and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at, was that power performance, not just raw performance. >> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple benchmarks that seemed most important for real world performance results for virtualization. TPCx-V and VMmark 3.x. the TPCx-V, that's where we saw PowerEdge R7525, R7515. They both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, right? Running in virtualization settings. And then the VMmark 3.x was critical. We saw good, good results there for the 7525 and the R 7515 as well as the R 6525, in that one and that included, sorry, just checking notes to see what- >> Yeah, no, no, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier, that's where we could see that. So we kind of, we saw this in a range of servers that included both 3rd gen AMD EPYC and newer 4th gen as well as I mentioned. The RAID controllers were critical in the TPCx-V. I don't think that came into play in the VM mark test, but they were definitely part of the TPCx-V benchmarks. 
So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course can't speak to to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more BDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard like a three-year period or something and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just just to be clear with these Dell PowerEdge servers, you were able to validate world record performance. But this isn't, if you, if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC. Those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCI 4.0, So to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records. 
This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. You know, sometimes it's easy to get lost in the minutiae of the bits and bites and bobs of all the components we're studying, but they're powering something that that's going to effect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? 
>> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)
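Evan frames the buying decision as total cost of ownership over a standard window such as three years, weighed against how many VMs or VDI sessions each server can carry. A minimal way to model just the energy slice of that comparison is sketched below; the wattages, electricity price, PUE, and VM densities are illustrative assumptions, not numbers from the Prowess benchmark testing.

```python
# Back-of-the-envelope energy cost per VM over a three-year window.
# Every input value here is an assumed example, not a Prowess or Dell result.

def three_year_energy_cost_per_vm(
    avg_watts: float,      # assumed average server draw under load, watts
    kwh_price: float,      # local electricity price, $/kWh
    vms_per_server: int,   # VM or VDI session density achieved on the box
    pue: float = 1.5,      # assumed data-center power usage effectiveness
    years: int = 3,
) -> float:
    hours = 24 * 365 * years
    facility_kwh = (avg_watts / 1000) * pue * hours
    return facility_kwh * kwh_price / vms_per_server

# Example: an older host running 40 VMs vs. a denser refresh running 120 VMs.
old = three_year_energy_cost_per_vm(avg_watts=600, kwh_price=0.25, vms_per_server=40)
new = three_year_energy_cost_per_vm(avg_watts=850, kwh_price=0.25, vms_per_server=120)
print(f"old host: ${old:.2f} per VM over 3 years")
print(f"new host: ${new:.2f} per VM over 3 years")
```

Swapping in measured draw from the actual PowerEdge configurations and a local $/kWh rate turns this into a quick first-pass check on the power-performance argument made in the interview.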

Published Date : Dec 8 2022



Florian Berberich, PRACE AISBL | SuperComputing 22


 

>>We're back at Supercomputing 22 in Dallas, winding down day four of this conference. I'm Paul Gillin, with my co-host Dave Nicholson. We've been talking supercomputing all week, and you hear a lot about what's going on in the United States, what's going on in China and Japan. What we haven't talked a lot about is what's going on in Europe. Did you know that two of the top five supercomputers in the world are actually from European countries? Well, our guest has a lot to do with that. Florian Berberich, I hope I pronounce that correctly, my German is not my strength, is the operations director for PRACE AISBL. And let's start with that. What is PRACE? >>So, hello and thank you for the invitation. I'm Florian, and PRACE is the Partnership for Advanced Computing in Europe. It's a non-profit association with its seat in Brussels in Belgium. And we have 24 members. These are representatives from different European countries dealing with high performance computing at their place. So far, we provided the resources for our European research communities. But this changed in the last year with the EuroHPC Joint Undertaking, which put a lot of funding into high performance computing and co-funded five petascale and three pre-exascale systems. And two of the pre-exascale systems, you mentioned already, this is LUMI in Finland and Leonardo in Bologna in Italy, were in places three and four on the TOP500 at least. >>So why is it important that Europe be in the top list of supercomputer makers? >>I think Europe needs to keep pace with the rest of the world. And simulation science is a key technology for the society. And we saw this very recently with the pandemic, with COVID. We were able to help the research communities to find vaccines very quickly and to understand how the virus spread around the world. And all this knowledge is important to serve the society. Another example is climate change. With these new systems, we will be able to predict the changes in the future more precisely. The more compute power you have, the smaller the grid and the finer the resolution you can choose, and the lower the error will be for the future. So I think with these systems, the big challenges we face can be addressed: climate change, energy, food supply, security. >>Who are your members? Do they come from businesses? Do they come from research, from government? All of the >>Above. Yeah. Our members are public organizations, universities, research centers, compute sites and data centers, but public institutions. And we provide these services for free, via a peer review process with excellence as the most important criterion, to the research community. >>So 40 years ago, when the idea of an EU, and maybe I'm getting the dates a little bit wrong, when it was just an idea and the idea of a common currency. Yes. Reducing friction between borders to create a trading zone. Yes. There was a lot of focus there. Fast forward to today, would you say that these efforts in supercomputing, would they be possible if there were not an EU superstructure? >>No, I would say this would not be possible to this extent. European initiatives are needed, and the European Commission is supporting these initiatives very well. Before PRACE, for instance in 2008, there were research centers and data centers operating high performance computing systems, but they were not talking to each other.
So it was isolated. PRACE created a community of operating sites, and it facilitated the exchange between them and also enabled them to align investments and to get the most out of the available funding. And also at that time, and still today, for one single country in Europe it's very hard to provide all the different architectures needed for all the different kinds of research communities and applications. If you want to always offer the latest technologies, this is really hardly possible. So with this joint action and opening the resources to research groups from other countries, we were able to get access to the latest technology for different communities at any given time. >>So the fact that the two systems that you mentioned are physically located in Finland and in Italy, if you were to walk into one of those facilities and meet the people that are there, they're not just Finns in Finland and Italians in Italy. Yeah. This is very much a European effort. So the geography is sort of abstracted. And the issues of sovereignty that might take place in the private sector don't exist, or are there issues? What are the requirements for a researcher to have access to a system in Finland versus a system in Italy? If you've got an EU passport, are you good to go? >>I think you are good to go. But EU passport, now it becomes complicated and political. If we talk about the recent systems, well, first let me start with PRACE. PRACE was inclusive and there were no constraints; we even had users from the US and Australia. We wanted just to support excellence in science, and we did not look at the nationality of the organization, of the PI and so on. There were quotas, but these quotas were very generously interpreted. Now, with the EuroHPC Joint Undertaking, it's a question of which European funds these systems were procured from, and if a country is associated to this funding, its researchers also have access to these systems. And this addresses basically the UK and Switzerland, which are not in the European Union but were associated to the Horizon 2020 research framework. And so they can access the systems now available, LUMI and Leonardo and the petascale systems as well. How this will develop in the future, I don't know. It depends on which research framework they will be associated with or not. >>What are the outputs of your work at PRACE? Are they reference designs? Is it actual semiconductor hardware? Is it the research? What do you produce? >>So the applications we run, or the simulations we run, cover all the different scientific domains. So it's science, but we also have industry-led projects with more application-oriented targets. Aerodynamics, for instance, for cars or planes or something like this. But also fundamental science, like elementary particle physics for instance, or climate change, biology, drug design, protein folding, all these >>Things. Can businesses be involved in what you do? Can they purchase your research? Do they contribute to it? I'm sure there are many technology firms in Europe that would like to be involved. >>So for involving industry, our calls are open, and if they want to do open R&D, they are invited to submit proposals as well.
They will be evaluated, and if they qualify, they will get access and they can do their jobs and simulations. It's a little bit more tricky if it's in production, if they use these resources for their business and do not publish the results. There are some sites, well, probably more sites, who are able to deal with these requests. Some are more dominant than others, but this is on a smaller scale, definitely. >>What does the future hold? Are there other countries who will be joining the effort, other institutions? Do you plan to expand your scope? >>Well, I think the EuroHPC Joint Undertaking with 36 member states already covers even more than Europe. And clearly, if there are other states interested in joining, there is no limitation, although the focus lies on the European area and on the Union. >>When you interact with colleagues from North America, do you feel that there is a sort of European flavor to supercomputing that is different, or are we so globally entwined? No. >>So research is not national, it's not European, it's international. This is very clear. We have a longstanding collaboration with our US colleagues, and also with Japan and South Africa and Canada. And when COVID hit the world, we were able within two weeks to establish regular seminars inviting US and European colleagues to talk to each other and exchange results and find new collaborations and boost the research activities. And I have other examples as well. We already did joint calls with XSEDE in the US and PRACE in Europe, and it was a very interesting experience. We received applications from different communities, and we decided that we would review them on our side in Europe, with European experts, and the US did it in the US with their experts. And you can guess what the result was: at the meeting when we compared our results, it was matching one by one. It was exactly the same. >>It's refreshing to hear a story of global collaboration, where people are getting along and making meaningful progress. >>I have to point out, you did not mention China as a country you were collaborating with. Is that intentional? >>Well, with China, definitely we have fewer links, but collaboration is also existing. There was an initiative to look at the development of the technologies, and the group meets on a regular basis, and there are also Chinese colleagues involved. It's on a lower level, >>Yes, but the conversations are occurring. We're out of time. Florian Berberich, operations director of PRACE, the European supercomputing collaborative. Thank you so much for being with us. I'm always impressed when people come on theCUBE and submit to an interview in a language that is not their first language. >>Absolutely. >>Brave to do that. Yeah. Thank you. You're welcome. Thank you. We'll be right back after this break from Supercomputing 22 in Dallas.
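A quick footnote to Florian's point earlier in this interview about grid resolution: the link between compute power, grid size, and error can be made concrete with a standard back-of-the-envelope relation (a generic illustration, not something discussed on air). For an explicit three-dimensional simulation whose time step is tied to the grid spacing by a CFL-type stability condition, the cost of simulating a domain of size $L$ for a time $T$ scales roughly as

\[
\text{cost} \;\propto\; \left(\frac{L}{\Delta x}\right)^{3} \cdot \frac{T}{\Delta t},
\qquad \Delta t \propto \Delta x
\;\;\Rightarrow\;\;
\text{cost} \;\propto\; \Delta x^{-4},
\]

so halving the grid spacing costs roughly sixteen times as much compute. That is why each meaningful jump in resolution, and the reduction in discretization error that comes with it, tends to require a new generation of machines rather than an incremental upgrade.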

Published Date : Nov 18 2022



Jay Boisseau, Dell Technologies | SuperComputing 22


 

>>We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host Paul Gillin with my co-host Dave Nicholson, and we've been talking to so many smart people this week. It just boggles my mind. Our next guest, Jay Boisseau, is the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas. And I'm guessing you were up watching the Artemis launch the other night? >>I wasn't. I really should have been, but I wasn't, I was in full supercomputing conference mode. So that means discussions at, you know, various venues with people into the wee hours. >>How did you make the transition from a PhD in astronomy to an HPC expert? >>It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, which is a class of stars that blow up. And it's a very important class because they blow up almost exactly the same way. So if you know how bright they are physically, not just how bright they appear in the sky, but if you can determine from first principles how bright they are, then you have a standard candle for the universe: when one goes off in a galaxy, you know how far away the galaxy is by how faint it appears. To model these, though, you had to understand equations of physics, including electron degeneracy pressure, as well as normal fluid dynamics kinds of things. And so you were solving for an explosive burning front ripping through something. And that required a supercomputer to have anywhere close to the fidelity to get a reasonable answer and hopefully some understanding. >>So I've always said electrons are degenerate. I've always said it, and I mentioned to Paul earlier, I said, finally we're gonna get a guest to sort out this whole dark energy, dark matter thing for us. We'll do that after the segment. >>That's a whole different, >>So, well I guess supercomputing being a natural tool that you would use. What do you do in your role as a strategist? >>So I'm in the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive with what they've got, but always wanting to know what's coming next. Because if you think about it, we can't simulate the entire human body cell for cell on any supercomputer today. We can simulate parts of it cell for cell, or the whole body with macroscopic physics, but not the entire organism at the, you know, atomic level. So we're always trying to build more powerful computers to solve larger problems with more fidelity and fewer approximations. And so I help people try to understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning about which technologies they have, 'cause it feeds the first thing, right? So understanding what's coming, and Dell has a, we're very proud of our large partner ecosystem. We embrace many different partners in that with different capabilities. So understanding those helps understand what your future systems might be. Those are two of the major roles in it: strategic customers and strategic technologies.
>>So you've had four days to wander the, this massive floor here and lots of startups, lots of established companies doing interesting things. What have you seen this week that really excites you? >>So I'm gonna tell you a dirty little secret here. If you are working for someone who makes super computers, you don't get as much time to wander the floor as you would think because you get lots of meetings with people who really want to understand in an NDA way, not just in the public way that's on the floor, but what's, what are you not telling us on the floor? What's coming next? And so I've been in a large number of customer meetings as well as being on the floor. And while I can't obviously share the everything, that's a non-disclosure topic in those, some things that we're hearing a lot about, people are really concerned with power because they see the TDP on the roadmaps for all the silicon providers going way up. And so people with power comes heat as waste. And so that means cooling. >>So power and cooling has been a big topic here. Obviously accelerators are, are increasing in importance in hpc not just for AI calculations, but now also for simulation calculations. And we are very proud of the three new accelerator platforms we launched here at the show that are coming out in a quarter or so. Those are two of the big topics we've seen. You know, there's, as you walk the floor here, you see lots of interesting storage vendors. HPC community's been do doing storage the same way for roughly 20 years. But now we see a lot of interesting players in that space. We have some great things in storage now and some great things that, you know, are coming in a year or two as well. So it's, it's interesting to see that diversity of that space. And then there's always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical quantum computing system here with I on Q and I can't say what the future holds in this, in this format, but I can say we believe strongly in the future of quantum computing and that this, that future will be integrated with the kind of classical computing infrastructure that we make and that will help make quantum computing more powerful downstream. >>Well, let's go down that rabbit hole because, oh boy, boy, quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago, some of the major vendors were announcing quantum computers in the cloud. Excitement has kind of died down. We don't see a lot of activity around, no, not a lot of talk around commercial quantum computers, yet you're deep into this. How close are we to have having a true quantum computer or is it a, is it a hybrid? More >>Likely? So there are probably more than 20 and I think close to 40 companies trying different approaches to make quantum computers. So, you know, Microsoft's pursuing a topol topological approach, do a photonics based approach. I, on Q and i on trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use 'em in other technologies. We know the physics, but trying the engineering is very difficult. It's very difficult. I mean, just like it was difficult at one point to split the atom. It's very difficult to build technologies that leverage quantum properties of nature in a consistent and reliable and durable way, right? So I, you know, I wouldn't wanna make a prediction, but I will tell you I'm an optimist. 
I believe that when a tremendous capability with, with tremendous monetary gain potential lines up with another incentive, national security engineering seems to evolve faster when those things line up, when there's plenty of investment and plenty of incentive things happen. >>So I think a lot of my, my friends in the office of the CTO at Dell Technologies, when they're really leading this effort for us, you know, they would say a few to several years probably I'm an optimist, so I believe that, you know, I, I believe that we will sell some of the solution we announced here in the next year for people that are trying to get their feet wet with quantum. And I believe we'll be selling multiple quantum hybrid classical Dell quantum computing systems multiple a year in a year or two. And then of course you hope it goes to tens and hundreds of, you know, by the end of the decade >>When people talk about, I'm talking about people writ large, super leaders in supercomputing, I would say Dell's name doesn't come up in conversations I have. What would you like them to know that they don't know? >>You know, I, I hope that's not true, but I, I, I guess I understand it. We are so good at making the products from which people make clusters that we're number one in servers, we're number one in enterprise storage. We're number one in so many areas of enterprise technology that I, I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But, you know, depending on which analyst you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest splashy systems. We do the, the frontier system at t, the HPC five system at ENI in Europe. That's the largest academic supercomputer in the world and the largest industrial super >>That's based the world on Dell. Dell >>On Dell hardware. Yep. But we, I think our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world. And those problems are various scales. So we are really concerned about the more we're democratizing HPC to make it easier for more people to get in at any scale that their budget and workloads require, we're optimizing it to make sure that it's not just some parts they're getting, that they are validated to work together with maximum scalability and performance. And we have a great HPC and AI innovation lab that does this engineering work. Cuz you know, one of the myths is, oh, I can just go buy a bunch of servers from company X and a network from company Y and a storage system from company Z and then it'll all work as an equivalent cluster. Right? Not true. It'll probably work, but it won't be the highest performance, highest scalability, highest reliability. So we spend a lot of time optimizing and then we are doing things to try to advance the state of HPC as well. What our future systems look like in the second half of this decade might be very different than what they look like right. Now. >>You mentioned a great example of a limitation that we're running up against right now. You mentioned an entire human body as a, as a, as an organism >>Or any large system that you try to model at the atomic level, but it's a huge macro system, >>Right? 
So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances as opposed to qualitative advances? Right now, as an example, let's just, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next gen stuff and those next NextGen microprocessors, GPUs and CPUs are gonna be plugged into NextGen motherboards, PCI e gen five, gen six coming faster memory, bigger memory, faster networking, whether it's NS or InfiniBand storage controllers, all bigger, better, faster, stronger. And I suspect that systems like Frontera, I don't know, but I suspect that a lot of the systems that are out there are not on necessarily what we would think of as current generation technology, but maybe they're n minus one as a practical matter. So, >>But yeah, I mean they have a lifetime, so Exactly. >>The >>Lifetime is longer than the evolution. >>That's the normal technologies. Yeah. So, so what some people miss is this is, this is the reality that when, when we move forward with the latest things that are being talked about here, it's often a two generation move for an individual, for an individual organization. Yep. >>So now some organizations will have multiple systems and they, the system's leapfrog and technology generations, even if one is their real large system, their next one might be newer technology, but smaller, the next one might be a larger one with newer technology and such. Yeah. So the, the biggest super computing sites are, are often running more than one HPC system that have been specifically designed with the latest technologies and, and designed and configured for maybe a different subset of their >>Workloads. Yeah. So, so the, the, to go back to kinda the, the core question, in your opinion, do we need that qualitative leap to something like quantum computing in order to get to the point, or is it simply a question of scale and power at the, at the, at the individual node level to get us to the point where we can in fact gain insight from a digital model of an entire human body, not just looking at a, not, not just looking at an at, at an organ. And to your point, it's not just about human body, any system that we would characterize as being chaotic today, so a weather system, whatever. Do you, are there any milestones that you're thinking of where you're like, wow, you know, I have, I, I understand everything that's going on, and I think we're, we're a year away. We're a, we're, we're a, we're a compute generation away from being able to gain insight out of systems that right now we can't simply because of scale. It's a very, very long question that I just asked you, but I think I, but hopefully, hopefully you're tracking it. What, what are your, what are your thoughts? What are these, what are these inflection points that we, that you've, in your mind? >>So I, I'll I'll start simple. Remember when we used to buy laptops and we worried about what gigahertz the clock speed was Exactly. Everybody knew the gigahertz of it, right? There's some tasks at which we're so good at making the hardware that now the primary issues are how great is the screen? How light is it, what's the battery life like, et cetera. Because for the set of applications on there, we we have enough compute power. We don't, you don't really need your laptop. 
Most people don't need their laptop to have twice as powerful a processor; they'd actually rather have twice the battery life on it or whatnot, right? We make great laptops. We design for all of those, configure those parameters now. And, you know, we see some customers want more of x, somewhat more of y, but the general point is that the amazing progress in microprocessors is sufficient for most of the workloads at that level. Now let's go to the HPC level, or scientific and technical level, and when it needs HPC. If you're trying to model the orbit of the moon around the earth, you don't really need a supercomputer for that. You can get a highly accurate model on a workstation, on a server, no problem. It won't even really make it break a sweat. >>I had to do it with a slide rule >>That, >>That >>Might make you break a sweat. Yeah. But to do it with, you know, a single body orbiting with another body, I say orbiting around, but we both know it's really that they're both orbiting the center of mass. It's just that if one is much larger, it seems like one's going entirely around the other. So that's not a supercomputing problem. What about the stars in a galaxy, trying to understand how galaxies form spiral arms and how they spur star formation? Now you're talking a hundred billion stars plus a massive amount of interstellar medium in there. So can you solve that on that server? Absolutely not. Not even close. Can you solve it on the largest supercomputer in the world today? Yes and no. You can solve it with approximations on the largest supercomputer in the world today. But there's a lot of approximations that go into even that.
I think we're in a period now of everybody saying, okay, there's been a lot of buzz. We know it's gonna be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those >>At the show. We, we have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately we only have so many minutes and, and we're out of them. Oh, >>I'm >>J Poso, HPC and AI technology strategist at Dell. Thanks for a fascinating conversation. >>Thanks for having me. Happy to do it anytime. >>We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillen with Dave Nicholson. Stay with us.

Published Date : Nov 18 2022



Satish Iyer, Dell Technologies | SuperComputing 22


 

>>We're back at Supercomputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? Just >>Oh, it's crazy. I mean, any time you have NASA presentations going on and steampunk iterations of cooling systems, you know, it's >>The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies, and Satish, thanks for joining us on theCUBE. >>Thank you, Paul. >>What are emerging services? >>Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. So we especially focus on all the growth vectors for the company. >>And one of the key areas that comes under your jurisdiction is called Apex. Now I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >>Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. So this is our way of actually bringing cloud experience to our customers, on-prem and in colo. >>But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as >>A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but, you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >>You mentioned something just ahead of us going on air, a great way to describe Apex: to contrast Apex with CapEx. There's no C, there's no cash up front necessary. I thought that was great. Explain that a little more. Well, >>I mean, you know, one of the main things about cloud is the consumption model, right? Customers would like to pay for what they consume, they would like to pay in a subscription. They would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud. So I think it's important for us to recognize that, and Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet for how we want to design Apex. >>And among those services are high performance computing services. Now, I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>Yeah, I mean, you know, this conference is great, like you said; there's so many HPC and high performance computing folks here. But one of the things is, fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call it an Apex HPC or Apex offering, it brings a lot of the cloud economics and cloud, you know, experience to the HPC offer. So fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes a lot of the day to day management of the infrastructure on our own, so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workload, which they actually run on the HPC ecosystem.
So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it across, you know, proven designs, which Dell always implements across these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it >>HPC? Ah, that's a great question. So, I mean, you know, this is a platform, right? We are not just selling infrastructure by the drink. It's fundamentally based on, you know, we launched two validated designs, one for life sciences, one for manufacturing. So we actually know how these pieces work together, how they are a validated, design-tested solution. And also, it's a platform, so we actually integrate the software on top. It's not just the infrastructure. We integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things, customers have to do by themselves, right, if they buy the infrastructure. So basically we are giving a platform, or an ecosystem, for our customers to run their workloads, and making it easy for them to actually consume those. >>Now is this available on premises for customers? >>Yeah, so we make it available to customers both ways. We make it available on-prem for customers who want to, you know, take that economics, and we also make it available in a colo environment if the customers want to actually, you know, extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork for, >>Well, I think, you know, fundamentally it starts off with what the actual use case is, right? So if you really look at, you know, the two validated designs we talked about, one for healthcare and life sciences and the other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. So, you know, the customers initially figure out, okay, do they require something which is going to need a lot of memory-intensive loads, or do they require something which has got a lot of compute power? It all depends on what they would require in terms of the workloads, and then we do have sizing. We have small, medium, large; we have, you know, multiple infrastructure options, CPU core options. Sometimes the customer would also want to say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations typically a customer makes as part of the ecosystem, right? And those are things they would talk to us about, to say, okay, what is my best option in terms of, you know, the kind of workloads I wanna run? And then they can make a determination in terms of how they would actually go. >>So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of Rolling Thunder from various partners that you have, you know? Yep. We're all expecting that Intel is gonna be rolling out new CPU sets, and from a PowerEdge perspective,
you have your 16th generation of PowerEdge servers coming out, PCIe gen five, and all of the components from partners like NVIDIA and Broadcom, et cetera, plugging into them. Yep. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally and are likely to be not on 15G, not generation 15 servers, but probably more like 14. Yeah, you're offering a pretty huge uplift. Yep. What do those conversations look >>Like? I mean, customers, so talking about partners, right? I mean, of course Dell, you know, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, AMD, Broadcom, right? All the chip vendors, all the way to the software layer, right? So we have cluster managers, we have Kubernetes orchestrators. What we usually do is bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually is awesome, right? So we actually do that. And, you know, one of the main aspects, especially when you talk about bringing these things as a service, right? >>We take a lot of guesswork away from our customer, right? You know, one good example in HPC is capacity, right? These are very, you know, I would say very intensive systems, very complex systems, right? So customers would like to buy a certain amount of capacity, they would like to grow and, you know, come back, right? So giving them the flexibility to actually consume more if they want, giving them the buffer, and coming down, all of those things are very important as we actually design these things, right? Customers are given a choice, but they don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus using Apex, is there a crossover point typically at which it's probably a better deal for them to go on-prem? >>Yeah, I mean, specifically talking about HPC, right? You know, a lot of customers consume high performance compute in public cloud, right? That's not gonna go away, right? But there are certain reasons why they would look at on-prem, or they would look at, for example, a colo environment, right? One of the main reasons they would like to do that purely has to do with cost, right? These are pretty expensive systems, right? There is a lot of ingress, egress, there is a lot of data going back and forth, right? Public cloud, you know, it costs money to put data in or actually pull data back, right? And the second one is data residency and security requirements, right? A lot of these things are probably proprietary sets of information. We talked about life sciences, there's a lot of research, right? >>Manufacturing, a lot of these things are just-in-time decision making, right? You are on a factory floor, you gotta be able to do that. Now there is a latency requirement.
So I mean, I think a lot of things play, you know, plays into this outside of just cost, but data residency requirements, ingress, egress are big things. And when you're talking about mass moments of data you wanna put and pull it back in, they would like to kind of keep it close, keep it local, and you know, get a, get a, get a price >>Point. Nevertheless, I mean, we were just talking to Ian Coley from aws and he was talking about how customers have the need to sort of move workloads back and forth between the cloud and on-prem. That's something that they're addressing without posts. You are very much in the, in the on-prem world. Do you have, or will you have facilities for customers to move workloads back and forth? Yeah, >>I wouldn't, I wouldn't necessarily say, you know, Dell's cloud strategy is multi-cloud, right? So we basically, so it kind of falls into three, I mean we, some customers, some workloads are suited always for public cloud. It's easier to consume, right? There are, you know, customers also consume on-prem, the customers also consuming Kohler. And we also have like Dell's amazing piece of software like storage software. You know, we make some of these things available for customers to consume a software IP on their public cloud, right? So, you know, so this is our multi-cloud strategy. So we announced a project in Alpine, in Delta fold. So you know, if you look at those, basically customers are saying, I love your Dell IP on this, on this product, on the storage, can you make it available through, in this public environment, whether, you know, it's any of the hyper skill players. So if we do all of that, right? So I think it's, it shows that, you know, it's not always tied to an infrastructure, right? Customers want to consume the best thumb and if we need to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on hpc. We have, we have two container orchestrators we have to support. We, we, we have aner similarity, we also have a container options to customers. Both options. >>What kind of customers are you signing up for the, for the HPC offerings? Are they university research centers or is it tend to be smaller >>Companies? It, it's, it's, you know, the last three days, this conference has been great. We probably had like, you know, many, many customers talking to us. But HC somewhere in the range of 40, 50 customers, I would probably say lot of interest from educational institutions, universities research, to your point, a lot of interest from manufacturing, factory floor automation. A lot of customers want to do dynamic simulations on factory floor. That is also quite a bit of interest from life sciences pharmacies because you know, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. So yeah, quite a, quite a few interest definitely from academics, from life sciences, manufacturing. We also have a lot of financials, big banks, you know, who wants to simulate a lot of the, you know, brokerage, a lot of, lot of financial data because we have some, you know, really optimized hardware we announced in Dell for, especially for financial services. So there's quite a bit of interest from financial services as well. >>That's why that was great. We often think of Dell as, as the organization that democratizes all things in it eventually. 
And, and, and, and in that context, you know, this is super computing 22 HPC is like the little sibling trailing around, trailing behind the super computing trend. But we definitely have seen this move out of just purely academia into the business world. Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy, what, two couple? It's been, it's been a couple years now, hasn't it? >>Yeah, it's been less than two years. >>How are, how are, how are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18 months to three year upgrade cycle CapEx? Yeah, >>I mean I look, I, I think that is absolutely strong momentum for Apex and like we, Paul pointed out earlier, we started with, you know, making the infrastructure and the platforms available to customers to consume as a service, right? We have options for customers, you know, to where Dell can fully manage everything end to end, take a lot of the pain points away, like we talked about because you know, managing a cloud scale, you know, basically environment for the customers, we also have options where customers would say, you know what, I actually have a pretty sophisticated IT organization. I want Dell to manage the infrastructure, but up to this level in the layer up to the guest operating system, I'll take care of the rest, right? So we are seeing customers who are coming to us with various requirements in terms of saying, I can do up to here, but you take all of this pain point away from me or you do everything for me. >>It all depends on the customer. So we do have wide interest. So our, I would say our products and the portfolio set in Apex is expanding and we are also learning, right? We are getting a lot of feedback from customers in terms of what they would like to see on some of these offers. Like the example we just talked about in terms of making some of the software IP available on a public cloud where they'll look at Dell as a software player, right? That's also is absolutely critical. So I think we are giving customers a lot of choices. Our, I would say the choice factor and you know, we are democratizing, like you said, expanding in terms of the customer choices. And I >>Think it's, we're almost outta our time, but I do wanna be sure we get to Dell validated designs, which you've mentioned a couple of times. How specific are the, well, what's the purpose of these designs? How specific are they? >>They, they are, I mean I, you know, so the most of these valid, I mean, again, we look at these industries, right? And we look at understanding exactly how would, I mean we have huge embedded base of customers utilizing HPC across our ecosystem in Dell, right? So a lot of them are CapEx customers. We actually do have an active customer profile. So these validated designs takes into account a lot of customer feedback, lot of partner feedback in terms of how they utilize this. And when you build these solutions, which are kind of end to end and integrated, you need to start anchoring on something, right? And a lot of these things have different characteristics. So these validated design basically prove to us that, you know, it gives a very good jump off point for customers. That's the way I look at it, right? So a lot of them will come to the table with, they don't come to the blank sheet of paper when they say, oh, you know what I'm, this, this is my characteristics of what I want. I think this is a great point for me to start from, right? 
So I think that that gives that, and plus it's the power of validation, really, right? We test, validate, integrate, so they know it works, right? So all of those are hypercritical. When you talk to, >>And you mentioned healthcare, you, you mentioned manufacturing, other design >>Factoring. We just announced validated design for financial services as well, I think a couple of days ago in the event. So yep, we are expanding all those DVDs so that we, we can, we can give our customers a choice. >>We're out of time. Sat ier. Thank you so much for joining us. Thank you. At the center of the move to subscription to everything as a service, everything is on a subscription basis. You really are on the leading edge of where, where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you Dave. >>Paul Gillum with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon and stay with us for, they'll be half more soon.

Published Date : Nov 17 2022


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
TerryPERSON

0.99+

Dave NicholsonPERSON

0.99+

AWSORGANIZATION

0.99+

Ian ColeyPERSON

0.99+

Dave VellantePERSON

0.99+

Terry RamosPERSON

0.99+

DavePERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

EuropeLOCATION

0.99+

Paul GellPERSON

0.99+

DavidPERSON

0.99+

Paul GillumPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

John FurrierPERSON

0.99+

Andy JassyPERSON

0.99+

190 daysQUANTITY

0.99+

AmazonORGANIZATION

0.99+

PaulPERSON

0.99+

European Space AgencyORGANIZATION

0.99+

Max PetersonPERSON

0.99+

DellORGANIZATION

0.99+

CIAORGANIZATION

0.99+

AfricaLOCATION

0.99+

oneQUANTITY

0.99+

Arcus GlobalORGANIZATION

0.99+

fourQUANTITY

0.99+

BahrainLOCATION

0.99+

D.C.LOCATION

0.99+

EvereeORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

JohnPERSON

0.99+

UKLOCATION

0.99+

four hoursQUANTITY

0.99+

USLOCATION

0.99+

DallasLOCATION

0.99+

Stu MinimanPERSON

0.99+

Zero DaysTITLE

0.99+

NASAORGANIZATION

0.99+

WashingtonLOCATION

0.99+

Palo Alto NetworksORGANIZATION

0.99+

CapgeminiORGANIZATION

0.99+

Department for Wealth and PensionsORGANIZATION

0.99+

IrelandLOCATION

0.99+

Washington, DCLOCATION

0.99+

an hourQUANTITY

0.99+

ParisLOCATION

0.99+

five weeksQUANTITY

0.99+

1.8 billionQUANTITY

0.99+

thousandsQUANTITY

0.99+

GermanyLOCATION

0.99+

450 applicationsQUANTITY

0.99+

Department of DefenseORGANIZATION

0.99+

AsiaLOCATION

0.99+

John WallsPERSON

0.99+

Satish IyerPERSON

0.99+

LondonLOCATION

0.99+

GDPRTITLE

0.99+

Middle EastLOCATION

0.99+

42%QUANTITY

0.99+

Jet Propulsion LabORGANIZATION

0.99+

Ian Colle, AWS | SuperComputing 22


 

(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? It's been a fascinating morning Three days in, and a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, first thing is we recently announced our batch support for EKS. EKS is our Kubernetes, managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS batch offering, is that we can dynamically scale, based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing, or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance. So what does that look like when you peel back the cover and you look at it physically, not just logically, AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are all entirely within a single region. And so where they could have a portion of say the traditional HPC workflow, is within that region as well as the batch, and they're saving off the results, say to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to an S3 object storage for a little lower cost storage solution. Or you can have customers that have a kind of a multi-region orchestration layer to where they say, "You know what? "I've got a portion of my workflow that occurs "over on the other side of the country "and I replicate my data between the East Coast "and the West Coast just based upon business needs. "And I want to have that available to customers over there. "And so I'll do a portion of it in the East Coast "a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC, is this relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space? 
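To make the queue-driven scaling pattern described above a little more concrete, here is a rough boto3 sketch of a managed AWS Batch compute environment that scales from zero up to a large vCPU ceiling, plus an array job submitted against it. The names, ARNs, and subnet IDs are placeholder assumptions, and the job queue and job definition are assumed to already exist; this illustrates the pattern, not Ian's actual setup.

```python
# Hedged sketch: a managed AWS Batch compute environment plus a fire-and-forget array job.
# All identifiers below are illustrative placeholders.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# A MANAGED environment lets Batch scale EC2 capacity between minvCpus and maxvCpus
# based on the depth of the job queues attached to it.
batch.create_compute_environment(
    computeEnvironmentName="hpc-batch-demo",
    type="MANAGED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,                       # scale to nothing when the queue is empty
        "maxvCpus": 4096,                    # ceiling for scale-out
        "instanceTypes": ["optimal"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)

# Submit an array job: 1,000 independent child tasks, paid for only while they run.
batch.submit_job(
    jobName="parameter-sweep",
    jobQueue="hpc-batch-queue",              # assumed to point at the environment above
    jobDefinition="hpc-batch-jobdef:1",      # assumed container-based job definition
    arrayProperties={"size": 1000},
)
```

As the queue drains, the managed environment scales the instance count back down, which is the dynamic behavior described in the interview.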
And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer to manage my HPC infrastructure and another one to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes, EKS, on AWS. >> Last month you announced general availability of Trainium, a chip that's optimized for AI training. Talk about what's special about that chip, or what in it is customized to training workloads. >> Yeah, what's unique about Trainium is you'll see 40% better price performance than any other GPU available in the AWS cloud. And so we've really geared it to be the most price-performant option for our customers. And that's what we like about the silicon team that came to us through the Annapurna acquisition: it has really enabled us to have this differentiation and to not just be innovating at the software level but across the entire stack. That Annapurna Labs team develops our network cards, they develop our Arm chips, and they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiation from other vendors. And what Trainium allows you to do is perform similar workloads, just at a lower cost for the performance. >> And you also have a chip several years older, called Inferentia- >> Um-hmm. >> Which is for inferencing. What is the difference between them? I mean, when would a customer use one versus the other? How would you move the workload? >> What we've seen is customers traditionally have looked for a certain class of machine for the inference portion of their workload, more of a compute type that is not as accelerated or as heavy as you would need from Trainium. So when they do the training they want the really beefy machines that can grind through a lot of data, but when you're doing the inference, it's a little lighter weight, and so it's a different class of machine. And so that's why we've got those two different product lines, with Inferentia being there to support the inference portions of the workflow and Trainium to do that kind of heavy-duty training work. >> And then you advise them on how to migrate their workloads from one to the other? And once the model is trained, would they switch to an Inferentia-based instance? >> Definitely, definitely. We help them work through what the design of that workflow looks like. And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement, to say, "Hey, can you come in and help me work through how I might modify my workflow to take full advantage of these resources?" >> The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >> You're very polite. (panelists all laughing) >> Latency issues, they want to control the workload, and I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean, what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >> Well, I mean, to be fair, I started at AWS five years ago. And I can tell you, when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing? Wait, it's Amazon Web Services. 
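As a rough illustration of the train-on-one-device, infer-on-another workflow discussed above, the sketch below compiles a trained PyTorch model for an AWS Neuron accelerator using torch_neuronx. The model choice, file names, and the assumption that the Neuron SDK is installed on an Inferentia- or Trainium-backed instance are all illustrative, not details from the interview.

```python
# Hedged sketch: ahead-of-time compiling a trained PyTorch model for an AWS Neuron
# device (e.g., an Inferentia-based instance) so a lighter-weight inference fleet
# can serve it. Assumes the AWS Neuron SDK (torch_neuronx) is installed.
import torch
import torchvision
import torch_neuronx  # assumed dependency from the AWS Neuron SDK

# Stand-in for a model that was trained elsewhere (GPU or Trainium instances).
model = torchvision.models.resnet50(weights=None).eval()
example = torch.rand(1, 3, 224, 224)

# Trace/compile the graph for the Neuron accelerator, then save the artifact.
neuron_model = torch_neuronx.trace(model, example)
neuron_model.save("resnet50_neuron.pt")

# On the inference instance, the artifact loads like any TorchScript module.
served = torch.jit.load("resnet50_neuron.pt")
with torch.no_grad():
    print(served(example).shape)
```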
You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. We didn't have Amazon FSx for Lustre, our managed lustre offering for high performant, POSIX-compliant file system, which is kind of the key to a large portion of HPC workloads is you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those really, frictions to adoption. I mean, one of the key ones, we had a open source toolkit that was jointly developed by Intel and AWS called CFN Cluster that customers were using to even instantiate their clusters. So, and now we've migrated that all the way to a fully functional supported service at AWS called AWS Parallel Cluster. And so you've seen over those past five years we have had to develop, we've had to grow, we've had to earn the trust of these customers and say come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, to what are you doing walking around the show, to say, "Okay, I'm not sure I get it. "I need to look at it. "I, okay, I, now, oh, it needs to be a part "of my architecture but the standard questions, "is it secure? "Is it price performant? "How does it compare to my on-prem?" And really culturally, a lot of it is, just getting IT administrators used to, we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now you're learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures. And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the the market segment of where we needed to with innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have outposts. We don't hear a lot of talk about outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into outposts as well? Will that essentially become this supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future lies, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. 
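The high-performance file system piece mentioned above can be sketched in a few lines of boto3; the following provisions a scratch Amazon FSx for Lustre file system linked to an S3 bucket. Capacity, subnet, and bucket names are placeholder assumptions rather than anything from the conversation.

```python
# Hedged sketch: creating a scratch FSx for Lustre file system that hydrates from,
# and exports results back to, an S3 bucket. Values are illustrative placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB; Lustre sizes come in fixed increments
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",           # short-lived, high-throughput scratch tier
        "ImportPath": "s3://example-hpc-input",          # lazy-load input data from S3
        "ExportPath": "s3://example-hpc-input/results",  # write results back to S3
    },
)
print(response["FileSystem"]["FileSystemId"])
```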
All those put together gives you a a high-performance computer, right? And whether you want it to be redundant in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >> So to be clear, that's not that's not available now, but that is something that could be made available? >> Outposts are available right now, that have this the services that you need. >> All these capabilities? >> Often a move to cloud, an impetus behind it comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment, can be very, very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time, and then a fundamental question, are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run. So that would sort of conjure, in the imagination, multi-tenancy, what does that look like? >> Definitely, and that's been, let me start with your second part first is- >> Yeah. That's been a a core area within AWS is we do not see as, okay we're going to, we're going to carve out this super computer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned, that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part is talking to customers and hearing one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that are still exist that, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? "Can you give me the information that I need, "both from a price for performance, "for an operational support model, "and really help me be an internal advocate "within my environment to explain "how my resources can be operated proficiently "within the AWS cloud." And a lot of times it's, let's just take your application a subset of your applications and let's benchmark 'em. And really that, AWS, one of the key things is we are a data-driven environment. And so when you take that data and you can help a customer say like, "Let's just not look at hypothetical, "at synthetic benchmarks, let's take "actually the LS-DYNA code that you're running, perhaps. "Let's take the OpenFOAM code that you're running, "that you're running currently "in your on-premises workloads, "and let's run it on AWS cloud "and let's see how it performs." 
And then we can take that back to your to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where actual, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true ups, and make sure that they have insight into what they're doing. I think one of the boogeyman is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the, the cost visibility, the cost controls, to where you feel like, as an HPC administrator you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of those cost visibility and controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, "what you were spending on-premises." They went, "Oh, I didn't realize that." And so I think that's part of a cultural thing that, at an HPC, the question was, well on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me, you said you were here in 2017, people said AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned outposts, let's say that a customer says, "Hey, I'd like you to put "in a thousand-node cluster in this data center "that I happen to own, but from my perspective, "I want to interact with it just like it's "in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that's that is on-prem versus off-prem. What is that? Is that, what I just described, is that cloud? And in five years are you no longer going to be talking about migrating to cloud because people go, "What do you mean migrating to cloud? "What do you even talking about? "What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements. So that, your prediction, we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud or will that be something from the past? >> In five years, I still think there will be a component. 
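The kind of cost visibility described above can also be pulled programmatically; here is a minimal Cost Explorer sketch that reports daily spend broken out by service. The date range is an arbitrary example, and a real HPC chargeback setup would typically also filter by tags or linked accounts.

```python
# Hedged sketch: daily, per-service spend via the Cost Explorer API, the sort of
# visibility an HPC administrator might use to watch a workload's burn rate.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-11-15"},   # example window
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for day in result["ResultsByTime"]:
    date = day["TimePeriod"]["Start"]
    for group in day["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"{date}  {service}: ${amount:,.2f}")
```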
I think the majority of the assumption will be that things are cloud-native and you start in the cloud and that there are perhaps, an aspect of that, that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future, "I can see that I'm shrinking my footprint." And, you can see them still saying, "I'm not sure how small that beachhead will be, "but right now I want to at least say "that I'm going to operate in that hybrid environment." And so I'd say, again, the pace of this community, I'd say five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that? That outpost sitting in someone's data center? I'd say we'd still, at least I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean cloud, it's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage with my co-host Paul Gillin. Thanks again for joining us. Stay tuned, after this short break, we'll be back with more action. (lively music)

Published Date : Nov 17 2022


Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22


 

>>Welcome back to The Cube's coverage of Supercomputing Conference 2022, otherwise known as SC 22, here in Dallas, Texas. This is day three of our coverage, the final day here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillin. How's it going, Paul? >>Hi, Dave. It's going good. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to the Cube. >>Thanks a lot. Thanks a lot. >>Paul, I know you're chomping at the bit. >>You have incredible credentials, over 500 papers published. The impact that you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high performance computing platform that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I started with this conference in 2001; that was the first time I came. It's very coincidental: if you remember, the InfiniBand networking technology was introduced in October of 2000. At that point in my group we were working on MPI for Myrinet and Quadrics; those are the old technologies, if you can recollect. When InfiniBand came out, we were the very first ones in the world to really jump in; nobody knew how to use InfiniBand in an HPC system. That's how the MVAPICH project was born. And in fact, at Supercomputing 2002, on the exhibition floor in Baltimore, we had the first demonstration of the open source MVAPICH actually running on an eight-node InfiniBand cluster, and that was a big challenge. But over the years we have continuously worked with all the InfiniBand vendors and with the MPI Forum. We are a member of the MPI Forum, and we also work with all the other network interconnects. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members, working nonstop, continuously bringing not only performance but scalability. If you look now, InfiniBand is being deployed in 8,000- and 10,000-node clusters, and many of these clusters actually use our software stack, MVAPICH. Our focus is that we first do research, because we are in academia; we come up with good designs, we publish, and within six to nine months we bring it into the open source version, and people can just download it and use it. That's how it's currently being used by more than 3,000 organizations in 90 countries. But the interesting thing happening, to the second part of your question: as you know, the field is moving into not just HPC but also AI and big data, and we support those too. This is where we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but all other workloads can take advantage of it. >>Oh, we have seen libraries become critical development platforms supporting AI, TensorFlow and PyTorch, and the emergence of some sort of default languages that are driving the community. How important are these frameworks to making progress in the HPC world? >>Yeah, those are great. I mean, PyTorch and TensorFlow are now the bread and butter of deep learning and machine learning. But the challenge is that people use these frameworks while models are continuously becoming larger, and you need very fast turnaround time. So how do you train faster? How do you do inferencing faster? This is where HPC comes in, and what exactly we have done is link PyTorch to our MVAPICH library, because now you see the MPI library running on million-core systems. PyTorch and TensorFlow can then be scaled to that same large number of cores and GPUs. So we have done that kind of tight coupling, and that helps researchers really take advantage of HPC. >>So if a high school student is thinking in terms of interesting computer science, looking for a place, looking for a university: Ohio State University, world renowned, widely known. But talk about what that looks like on a day-to-day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like, and is that a good pitch for people to consider the university? >>Yes. From a university perspective, by the way, the Ohio State University is one of the largest single campuses in the US, one of the top three or four; we have 65,000 students. >>Wow. >>It's one of the very largest campuses. And especially within computer science, where I am located, high performance computing is a very big focus, and we are, again, one of the top schools in the world for high performance computing. We also have great strength in AI. So we always encourage new students who would like to work on state-of-the-art solutions to get exposed to the concepts, the principles, and also the practice. We encourage those people, and we can really give them that kind of experience. And many of my past students and staff are in top companies now; they have all become big managers. >>How long did you say you've been at this? >>31 years. >>31 years. So you've had people who weren't alive when you were already doing this stuff. That's correct. They were then born, they grew up, they went to university and graduate school, and now they're on- >>Now they're in many top companies, national labs, and universities all over the world. So they have been trained very well. >>You've touched a lot of lives, sir. >>Yes, thank you. >>We've seen a real burgeoning of AI-specific hardware emerge over the last five years or so, and architectures going beyond just CPUs and GPUs to ASICs and FPGAs and accelerators. Does this excite you? I mean, are there innovations you're seeing in this area that you think have great promise? >>Yeah, there is a lot of promise. I think every time in supercomputing technology you see a big barrier jump; rather, I'll say a new, disruptive technology comes along and then you move to the next level. That's what we are seeing now: a lot of these AI chips and AI systems are coming up, which take you to the next level. But the bigger challenge is whether it is cost effective or not, and whether it can be sustained for the long run. This is where commodity technology comes in, because commodity technology tries to take you much further. So we might see, like Gaudi, a lot of new chips coming up; can they really bring down the cost? If that cost can be reduced, you will see a much bigger push for AI solutions which are cost effective. >>What about on the interconnect side of things? Obviously your start sort of coincided with the initial standards for InfiniBand; you know, Intel was really big in that architecture originally. Do you see interconnects like RDMA over Converged Ethernet playing a part in that sort of democratization or commoditization of things? What are your thoughts there? >>No, this is a great thing. So we saw InfiniBand coming; of course, InfiniBand is commodity and is available. But over the years people have been trying to see how those RDMA mechanisms can be used over Ethernet, and that is how RoCE was born, and RoCE is also being deployed. Besides these, now you talk about Slingshot, the Cray Slingshot; it is also an Ethernet-based system, and a lot of those RDMA principles are being used under the hood. So any modern network you see, whether it is InfiniBand, RoCE, or a Slingshot network, you name any of these networks, they are using all the very latest principles. And of course everybody wants to make it commodity, and this is what you see on the show floor: everybody is trying to compete against each other to give you the best performance at the lowest cost, and we'll see who wins over the years. >>Sort of a macroeconomic question: Japan, the US, and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>It is very significant, right? I think for the last five to seven years we lost that lead. But now with Frontier being number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical, not only for fundamental research but for national security, to really keep the US at the leading edge. So I hope the US will continue to lead the trend for the next few years, until another new system comes out. >>And one of the gating factors, there is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better for data science careers? >>Yeah, that is also very important. We always talk about a pipeline: at the PhD level we expect a certain set of skills, but we even want students to get exposed to many of these concepts from the high school level. And things are actually changing. These days I see a lot of high school students who know Python, how to program in Python, how to program in C and do object-oriented things; they are even being exposed to AI at that level. So I think that is a very healthy sign. And in fact, from the Ohio State side, we are always engaged with K-12 students in many different programs, and then we gradually try to take them to the next level. I think we need to accelerate that in a very significant manner, because we need that kind of workforce. It is not just about building the number one system, but how do we really utilize it? How do we utilize that science? How do we propagate that to the community? For that we need all of these trained personnel. So in fact, in my group we are also involved in a lot of cybertraining activities for HPC professionals. In fact, today there is a session, I think 12:15 to 1:15, where we'll be talking more about that. >>About education. >>Yeah, cybertraining: how do we do it for professionals? We had funding, together with my co-PI, Dr. Karen Tomko from the Ohio Supercomputer Center; we have a grant from the National Science Foundation to educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge, they don't get the time to learn, and the field is moving so fast. We got the initial funding, and in fact, the first time we advertised, in 24 hours we got 120 applications; 24 hours, and we couldn't even take all of them. So we are trying to offer that in multiple phases, because there is a big need for those kinds of training sessions. I also offer a lot of tutorials at all the different conferences. We had a high performance networking tutorial; here we have a high performance deep learning tutorial and a high performance big data tutorial. I've been offering tutorials, even at this conference, since 2001. >>Good. So in the last 31 years at the Ohio State University, as my friends remind me it is properly called, you've seen the world get a lot smaller. Because 31 years ago, Ohio, roughly in the middle of North America and the United States, was not as connected as it is now to everywhere else in the globe. It kind of boggles the mind when you think of that progression over 31 years. But globally, and we talk about the world getting smaller, we're sort of in the thick of the celebratory seasons where many, many groups of people exchange gifts for a variety of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver the world, what would that be? What would the first thing be? It's like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>Yeah, it's very hard to answer in one way, but let me bring in a little different context and I can answer this. I talked about the MVAPICH project, but recently, last year actually, we were awarded an NSF AI Institute award. It's a $20 million award. I am the overall PI, but there are 14 universities involved. >>And what is that institute called? >>ICICLE. You can just go to icicle.ai. And that aligns with exactly what you are asking: how to bring AI to the masses, democratizing AI. That's the overall goal of this institute. We have three verticals we are working on. One is digital agriculture, so that would be my first wish: how do you take HPC and AI to agriculture? The world just crossed 8 billion people. >>Yeah, that's right. >>We need continuous food and food security. How do we grow food at the lowest cost and with the highest yield? >>Water consumption. >>Water consumption: can we minimize the water consumption, or the fertilization? Don't do it blindly; the technologies are out there. Let's say there is a wheat field. A traditional farmer sees that, yeah, there is some disease, and they will just go and spray pesticides. It is not good for the environment. Now I can fly a drone, get images of the field in real time, check them against the models, and then it will tell me, okay, this part of the field has disease one, this part of the field has disease two, and I indicate to the tractor or the sprayer, okay, spray only pesticide one here and pesticide two there. That has a big impact. So this is what we are developing in that NSF AI Institute, ICICLE. We have also chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation and climate change: how do you understand how the animals move, can we learn from them, and then see how human beings need to act in the future? And the third one is food insecurity and logistics, smart food distribution. So these are our three broad goals in that institute: how do we develop cyberinfrastructure from below, combining HPC, AI, and security? We have a large team; as I said, there are 40 PIs and 60 students. We are a hundred-member team working together. So that would be my wish: how do we really democratize AI? >>Fantastic. I think that's a great place to wrap the conversation here on day three at Supercomputing Conference 2022 on the Cube. It was an honor, Dr. Panda, working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result: improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds; it's about serving humanity. Maybe a little too profound a statement? I would argue not even close. I'm Dave Nicholson with the Cube, with my cohost Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from the Cube at Supercomputing 2022, coming up shortly. >>Thanks a lot.

Published Date : Nov 17 2022


David Schmidt, Dell Technologies and Scott Clark, Intel | SuperComputing 22


 

(techno music intro) >> Welcome back to theCube's coverage of SuperComputing Conference 2022. We are here at day three covering the amazing events that are occurring here. I'm Dave Nicholson, with my co-host Paul Gillin. How's it goin', Paul? >> Fine, Dave. Winding down here, but still plenty of action. >> Interesting stuff. We got a full day of coverage, and we're having really, really interesting conversations. We sort of wrapped things up at Supercomputing 22 here in Dallas. I've got two very special guests with me, Scott from Intel and David from Dell, to talk about yeah supercomputing, but guess what? We've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome gentlemen. >> Thanks so much for having us. >> Thanks for having us. >> So, let's start with you, David. First of all, explain the relationship in general between Dell and Intel. >> Sure, so obviously Intel's been an outstanding partner. We built some great solutions over the years. I think the market reflects that. Our customers tell us that. The feedback's strong. The products you see out here this week at Supercompute, you know, put that on display for everybody to see. And then as we think about AI in machine learning, there's so many different directions we need to go to help our customers deliver AI outcomes. Right, so we recognize that AI has kind of spread outside of just the confines of everything we've seen here this week. And now we've got really accessible AI use cases that we can explain to friends and family. We can talk about going into retail environments and how AI is being used to track inventory, to monitor traffic, et cetera. But really what that means to us as a bunch of hardware folks is we have to deliver the right platforms and the right designs for a variety of environments, both inside and outside the data center. And so if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever that's designed to go into Edge environments, but is also built for those Edge AI use cases that supports GPUs. It supports AI on the CPU as well. And so there's a lot of really compelling platforms that we're starting to talk about, have already been talking about, and it's going to really enable our customers to deliver AI in a variety of ways. >> You mentioned AI on the CPU. Maybe this is a question for Scott. What does that mean, AI on the CPU? >> Well, as David was talking about, we're just seeing this explosion of different use cases. And some of those on the Edge, some of them in the Cloud, some of them on Prem. But within those individual deployments, there's often different ways that you can do AI, whether that's training or inference. And what we're seeing is a lot of times the memory locality matters quite a bit. You don't want to have to pay necessarily a cost going across the PCI express bus, especially with some of our newer products like the CPU Max series, where you can have a huge about of high bandwidth memory just sitting right on the CPU. Things that traditionally would have been accelerator only, can now live on a CPU, and that includes both on the inference side. 
We're seeing some really great things with images, where you might have a giant medical image that you need to be able to do extremely high resolution inference on or even text, where you might have a huge corpus of extremely sparse text that you need to be able to randomly sample very efficiently. >> So how are these needs influencing the evolution of Intel CPU architectures? >> So, we're talking to our customers. We're talking to our partners. This presents both an opportunity, but also a challenge with all of these different places that you can put these great products, as well as applications. And so we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is we're finding where are those choke points, how can we have that biggest difference? Sometimes there's generational leaps, and I know David can speak to this, can be huge from one system to the next just because everything's accelerated on the software side, the hardware side, and the platforms themselves. >> That's right, and we're really excited about that leap. If you take what Scott just described, we've been writing white papers, our team with Scott's team, we've been talking about those types of use cases using doing large image analysis and leveraging system memory, leveraging the CPU to do that, we've been talking about that for several generations now. Right, going back to Cascade Lake, going back to what we would call 14th generation power Edge. And so now as we prepare and continue to unveil, kind of we're in launch season, right, you and I were talking about how we're in launch season. As we continue to unveil and launch more products, the performance improvements are just going to be outstanding and we'll continue that evolution that Scott described. >> Yeah, I'd like to applaud Dell just for a moment for its restraint. Because I know you could've come in and taken all of the space in the convention center to show everything that you do. >> Would have loved to. >> In the HPC space. Now, worst kept secrets on earth at this point. Vying for number one place is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I know, I think allegedly we're getting close. What can you share with us on that front? And I appreciate it if you can't share a ton of specifics, but where are we going? David just alluded to it. >> Yeah, as David talked about, we've been working on some of these things for many years. And it's just, this momentum is continuing to build, both in respect to some of our hardware investments. We've unveiled some things both here, both on the CPU side and the accelerator side, but also on the software side. OneAPI is gathering more and more traction and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's this type of thing where it's not linear. It all builds on itself. And we've seen some of these investments that we've made for a better half of a decade starting to bear fruit, but that's, it's not just a one time thing. It's just going to continue to roll out, and we're going to be seeing more and more of this. >> So I want to follow up on something that you mentioned. 
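A minimal sketch of the CPU-side inference pattern described above, using bfloat16 through Intel Extension for PyTorch: the package, the ResNet-50 stand-in for a large image model, and the synthetic batch are all assumptions for illustration, not a tuned medical-imaging pipeline.

```python
# Hedged sketch: CPU-only inference in bfloat16 via Intel Extension for PyTorch,
# keeping a large image batch in host memory rather than crossing a PCIe bus.
import torch
import torchvision
import intel_extension_for_pytorch as ipex  # assumed to be installed alongside PyTorch

model = torchvision.models.resnet50(weights=None).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)   # operator fusion + bf16 preparation

# Stand-in for a batch of high-resolution images resident in system memory.
batch = torch.rand(8, 3, 1024, 1024)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    logits = model(batch)

print(logits.shape)
```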
I don't know if you've ever heard that the Charlie Brown saying that sometimes the most discouraging thing can be to have immense potential. Because between Dell and Intel, you offer so many different versions of things from a fit for function perspective. As a practical matter, how do you work with customers, and maybe this is a question for you, David. How do you work with customers to figure out what the right fit is? >> I'll give you a great example. Just this week, customer conversations, and we can put it in terms of kilowatts to rack, right. How many kilowatts are you delivering at a rack level inside your data center? I've had an answer anywhere from five all the way up to 90. There's some that have been a bit higher that probably don't want to talk about those cases, kind of customers we're meeting with very privately. But the range is really, really large, right, and there's a variety of environments. Customers might be ready for liquid today. They may not be ready for it. They may want to maximize air cooling. Those are the conversations, and then of course it all maps back to the workloads they wish to enable. AI is an extremely overloaded term. We don't have enough time to talk about all the different things that tuck under that umbrella, but the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today, where they need to go. And I just love that five to 90 example of not every customer has an identical cookie cutter environment, so we've got to have the right platforms, the right solutions, for the right workloads, for the right environments. >> So, I like to dive in on this power issue, to give people who are watching an idea. Because we say five kilowatts, 90 kilowatts, people are like, oh wow, hmm, what does that mean? 90 kilowatts is more than 100 horse power if you want to translate it over. It's a massive amount of power, so if you think of EV terms. You know, five kilowatts is about a hairdryer's around a kilowatt, 1,000 watts, right. But the point is, 90 kilowatts in a rack, that's insane. That's absolutely insane. The heat that that generates has got to be insane, and so it's important. >> Several houses in the size of a closet. >> Exactly, exactly. Yeah, in a rack I explain to people, you know, it's like a refrigerator. But, so in the arena of thermals, I mean is that something during the development of next gen architectures, is that something that's been taken into consideration? Or is it just a race to die size? >> Well, you definitely have to take thermals into account, as well as just the power of consumption themselves. I mean, people are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem. There's many paths up that mountain, and it's about choosing that right path. We've talked about this before, having extremely thoughtful partners, we're just not going to common-torily try every single solution. We're going to try to find the ones that fit that right mold for that customer. And we're seeing more and more people, excuse me, care about this, more and more people wanting to say, how do I do this in the most sustainable way? How do I do this in the most reliable way, given maybe different fluctuations in their power consumption or their power pricing? We're developing more software tools and obviously partnering with great partners to make sure we do this in the most thoughtful way possible. 
>> Intel put a lot of, made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel. You're based on the west coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? >> Good question. I guess I can kick this off. So Habana is part of the Intel family now. They've been integrated in. It's been a great journey with them, as some of their products have launched on AWS, and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job, right. Not every problem is a nail, so you need more than just a hammer. And so we have the Xeon series, which is incredibly flexible, can do so many different things. It's what we've come to know and love. On the other end of the spectrum, we obviously have some of these more deep learning focused accelerators. And if that's your problem, then you can solve that problem in incredibly efficient ways. The accelerators themselves are somewhere in the middle, so you get that kind of Goldilocks zone of flexibility and power. And depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you. A combination might work better for you. Hybrid compute starts to become really interesting. Maybe you have something that you need 24/7, but then you only need a burst to certain things. There's a lot of different options out there. >> The portfolio approach. >> Exactly. >> And then what I love about the work that Scott's team is doing, customers have told us this week in our meetings, they do not want to spend developer's time porting code from one stack to the next. They want that flexibility of choice. Everyone does. We want it in our lives, in our every day lives. They need that flexibility of choice, but they also, there's an opportunity cost when their developers have to choose to port some code over from one stack to another or spend time improving algorithms and doing things that actually generate, you know, meaningful outcomes for their business or their research. And so if they are, you know, desperately searching I would say for that solution and for help in that area, and that's what we're working to enable soon. >> And this is what I love about oneAPI, our software stack, it's open first, heterogeneous first. You can take SYCL code, it can run on competitor's hardware. It can run on Intel hardware. It's one of these things that you have to believe long term, the future is open. Wall gardens, the walls eventually crumble. And we're just trying to continue to invest in that ecosystem to make sure that the in-developer at the end of the day really gets what they need to do, which is solving their business problem, not tinkering with our drivers. >> Yeah, I actually saw an interesting announcement that I hadn't been tracking. I hadn't been tracking this area. Chiplets, and the idea of an open standard where competitors of Intel from a silicone perspective can have their chips integrated via a universal standard. And basically you had the top three silicone vendors saying, yeah, absolutely, let's work together. Cats and dogs. >> Exactly, but at the end of the day, it's whatever menagerie solves the problem. >> Right, right, exactly. And of course Dell can solve it from any angle. >> Yeah, we need strong partners to build the platforms to actually do it. 
At the end of the day, silicone without software is just sand. Sand with silicone is poorly written prose. But without an actual platform to put it on, it's nothing, it's a box that sits in the corner. >> David, you mentioned that 90% of power age servers now support GPUs. So how is this high-performing, the growth of high performance computing, the demand, influencing the evolution of your server architecture? >> Great question, a couple of ways. You know, I would say 90% of our platforms support GPUs. 100% of our platforms support AI use cases. And it goes back to the CPU compute stack. As we look at how we deliver different form factors for customers, we go back to that range, I said that power range this week of how do we enable the right air coolant solutions? How do we deliver the right liquid cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it? That's something you'll see as we go into kind of the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling, but liquid cooling as well. >> You want to be more specific? >> We can't unveil everything at Supercompute. We have a lot of great stuff coming up here in the next few months, so. >> It's kind of like being at a great restaurant when they offer you dessert, and you're like yeah, dessert would be great, but I just can't take anymore. >> It's a multi course meal. >> At this point. Well, as we wrap, I've got one more question for each of you. Same question for each of you. When you think about high performance computing, super computing, all of the things that you're doing in your partnership, driving artificial intelligence, at that tip of the spear, what kind of insights are you looking forward to us being able to gain from this technology? In other words, what cool thing, what do you think is cool out there from an AI perspective? What problem do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? Cause it's not the little, it's not the bits and the bobs and the speeds and the feats, it's what we're going to do with them, so what do you think, David? >> I'll give you an example. And I think, I saw some of my colleagues talk about this earlier in the week, but for me what we could do in the past two years to unable our customers in a quarantine pandemic environment, we were delivering platforms and solutions to help them do their jobs, help them carry on in their lives. And that's just one example, and if I were to map that forward, it's about enabling that human progress. And it's, you know, you ask a 20 year version of me 20 years ago, you know, if you could imagine some of these things, I don't know what kind of answer you would get. And so mapping forward next decade, next two decades, I can go back to that example of hey, we did great things in the past couple of years to enable our customers. Just imagine what we're going to be able to do going forward to enable that human progress. You know, there's great use cases, there's great image analysis. We talked about some. The images that Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That is stuff that feels good when you get out of bed in the morning, to know that you're enabling that type of progress. >> Scott, quick thoughts? >> Yeah, and I'll echo that. 
It's not one specific use case, but it's really this wave front of all of these use cases, from the very micro of developing the next drug to finding the next battery technology, all the way up to the macro of trying to have an impact on climate change or even the origins of the universe itself. All of these fields are seeing these massive gains, both from the software, the hardware, the platforms that we're bringing to bear on these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. >> Fantastic, fantastic. Thank you, gentlemen. You heard it here first, Intel and Dell just committed to solving the secrets of the universe by New Year's Eve 2023. >> Well, next Supercompute, let's give us a little time. >> The next Supercompute Convention. >> Yeah, next year. >> Yeah, SC 2023, we'll come back and see what problems have been solved. You heard it here first on theCube, folks. By SC 23, Dell and Intel are going to reveal the secrets of the universe. From here, at SC 22, I'd like to thank you for joining our conversation. I'm Dave Nicholson, with my co-host Paul Gillin. Stay tuned to theCube's coverage of Supercomputing Conference 22. We'll be back after a short break. (techno music)

Published Date : Nov 17 2022


Kim Leyenaar, Broadcom | SuperComputing 22


 

(Intro music) >> Welcome back. We're LIVE here from SuperComputing 22 in Dallas Paul Gillin, for Silicon Angle in theCUBE with my guest host Dave... excuse me. And our, our guest today, this segment is Kim Leyenaar who is a storage performance architect at Broadcom. And the topic of this conversation is, is is networking, it's connectivity. I guess, how does that relate to the work of a storage performance architect? >> Well, that's a really good question. So yeah, I have been focused on storage performance for about 22 years. But even, even if we're talking about just storage the entire, all the components have a really big impact on ultimately how quickly you can access your data. So, you know, the, the switches the memory bandwidth, the, the expanders the just the different protocols that you're using. And so, and the big part of is actually ethernet because as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, so you're telling me that we're just not living in a CPU centric world now? >> Ha ha ha >> Because it is it is sort of interesting. When we talk about supercomputing and high performance computing we're always talking about clustering systems. So how do you connect those systems? Isn't that, isn't that kind of your, your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom. >> It's, it is, it is Broadcom's wheelhouse. We are all about interconnectivity and we own the interconnectivity. You know, you know, years ago it was, 'Hey, you know buy this new server because, you know, we we've added more cores or we've got better memory.' But now you've got all this siloed data and we've got you know, we've got this, this stuff or defined kind of environment now this composable environments where, hey if you need more networking, just plug this in or just go here and just allocate yourself more. So what we're seeing is these silos really of, 'hey here's our compute, here's your networking, here's your storage.' And so, how do you put those all together? The thing is interconnectivity. So, that's really what we specialize in. I'm really, you know, I'm really happy to be here to talk about some of the things that that we do to enable high performance computing. >> Paul: Now we're seeing, you know, new breed of AI computers being built with multiple GPUs very large amounts of data being transferred between them. And the internet really has become a, a bottleneck. The interconnect has become a bottle, a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. So we work with a lot of different, there's there's a lot of different standards that we work with to define so that we can make sure that we work everywhere. So even if you're just a dentist's office that's deploying one server, or we're talking about these hyperscalers that are, you know that have thousands or, you know tens of thousands of servers, you know, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we found that, you know with these siloed things, if, if you add more storage but that means we're going to eat up six cores using that it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. So we're offloading it from, you know data security, data protection, you know, we're we do packet sniffing ourselves and things like that. 
So no longer do we rely on the CPU to do that kind of processing for us, but we become very smart devices all on our own so that they work very well in these kinds of environments. >> Dave: So how about, give, give us an example. I know a lot of the discussion here has been around using ethernet as the connectivity layer. >> Yes. >> You know, in in, in the past, people would think about supercomputing as exclusively being InfiniBand based. >> Ha ha ha. >> But give, give us an idea of what Broadcom is doing in the ethernet space. What, you know, what's what are the advantages of using ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk 5 ethernet switch. So it's a 400 gig ethernet switch. And the other thing we announced too was our Thor. So we have, these are our network controllers that also support up to 400 gig each as well. So, those two alone, it just, it's amazing to me how much data we're able to transfer with those. But not only that, but they're super super intelligent controllers too. And then we realized, you know, hey, we're we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the RoCE standards. So that's one of the things that puts us above InfiniBand, is that ethernet is ubiquitous, it's everywhere. And InfiniBand is primarily just owned by one or two companies. And, and so, and it's also a lot more expensive. So ethernet is just, it's everywhere. And now with the, with the RoCE standards we're working along with, it does what you're talking about much better than, you know, predecessors. >> Tell us about the RoCE standards. I'm not familiar with it. I'm sure some of our listeners are not. What is the RoCE standard? >> Kim: Ha ha ha. So it's RDMA over Converged Ethernet. I'm not a RoCE expert myself but I am an expert on how to offload the CPU. And so one of the things it does is instead of using the CPU to transfer the data from, you know, the user space over to the next, you know, server when you're transferring it, we actually will do it ourselves. So we'll handle it ourselves. We will take it, we will move it across the wire and we will put it in that remote computer. And we don't have to ask the CPU to do anything to get involved in that. So big, you know, it's a big savings. >> Yeah, I mean in, in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over Converged Ethernet. So... >> Right. >> So if you can, if you can leverage kind of the best of both worlds, but have it in an ethernet environment which is already ubiquitous, it seems like it's, kind of democratizing supercomputing and, and HPC, and I know you guys are big partners with Dell as an example, you guys work with all sorts of other people. >> Kim: Yeah. >> But let's say, let's say somebody is going to be doing ethernet for connectivity, you also offer switches? >> Kim: We do, actually. >> So is that, I mean that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our, our Atlas 2 switch. It is a PCIE Gen 5 switch. And... >> Dave: What does that mean? What does Gen 5, what does that mean? >> Oh, Gen 5 PCIE, it's, it's the magic connectivity right now. So, you know, we talk about the Sapphire Rapids release as well as the Genoa release. I know that those, you know, those have been talked about a lot here. I've been walking around and everybody's talking about it. Well, those enable the Gen 5 PCIE interfaces.
So we've been able to double the bandwidth from the Gen Four up to the Gen Five. So, in order to, to support that we do now have our Atlas two PCIE Gen Five switch. And it allows you to connect especially around here we're talking about, you know artificial intelligence and machine learning. A lot of these are relying on the GPU and the DPU that you see, you know a lot of people talking about enabling. So by in, you know, putting these switches in the servers you can connect multitudes of not only NVME devices but also these GPUs and these, these CPUs. So besides that we also have the storage component of it too. So to support that, we we just recently have released our 9,500 series HBAs which support 24 gig SAS. And you know, this is kind of a, this is kind of a big deal for some of our hyperscalers that say, Hey, look our next generation, we're putting a hundred hard drives in. So we're like, you know, so a lot of it is maybe for cold storage, but by giving them that 24 gig bandwidth and by having these mass 24 gig SAS expanders that allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're doing the interconnectivity really for them. You know, you can have as, as much compute power as you want, but these are very data hungry applications and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country or in some other city, or just the box next door. So to be able to move that data around, you know there's a new concept where they say, you know do the compute where the data is and then there's another kind of, you know the other way is move the data around which is a lot easier kind of sometimes, but so we're allowing us to move that data around. So for that, you know, we do have our our tomahawk switches, we've got our Thor NICS and of course we got, you know, the really wide pipe. So our, our new 9,500 series HBA and RAID controllers not only allow us to do, so we're doing 28 gigabytes a second that we can trans through the one controller, and that's on protected data. So we can actually have the high availability protected data of RAID 5 or RAID 6, or RAID 10 in the box giving in 27 gigabytes a second. So it's, it's unheard of the latency that we're seeing even off of this too, we have a right cash latency that is sub 8 microseconds that is lower than most of the NVME drives that you see, you know that are available today. So, so you know we're able to support these applications that require really low latency as well as data protection. >> Dave: So, so often when we talk about the underlying hardware, it's a it's a game of, you know, whack-a-mole chase the bottleneck. And so you've mentioned PCIE five, a lot of folks who will be implementing five, gen five PCIE five are coming off of three, not even four. >> Kim: I know. >> So make, so, so they're not just getting a last generation to this generation bump but they're getting a two generations, bump. >> Kim: They are. >> How does that, is it the case that it would never make sense to use a next gen or a current gen card in an older generation bus because of the mismatch and performance? Are these things all designed to work together? >> Uh... That's a really tough question. I want to say, no, it doesn't make sense. It, it really makes sense just to kind of move things forward and buy a card that's made for the bus it's in. 
However, that's not always the case. So for instance, our 9,500 controller is a Gen 4 PCIE, but what we did, we doubled the PCIE width so it's an x16; even though it's a Gen 4, it's an x16. So we're getting really, really good bandwidth out of it. As I said before, you know, we're getting 28, 27.8 or almost 28 gigabytes a second bandwidth out of that by doubling the PCIE bus. >> Dave: But they worked together, it all works together? >> All works together. You can put, you can put our Gen 4 and a Gen 5 all day long and they work beautifully. Yeah. We, we do work to validate that. >> We're almost out of our time. But I, I want to ask you a more, nuts and bolts question, about storage. And we've heard for, you know, for years that the areal density of hard disks has been reached and there's really no, no way to excel. There's no way to make the, the disk any denser. What does the future of the hard disk look like as a storage medium? >> Kim: Multi-actuator actually, we're seeing a lot of multi-actuator. I was surprised to see it come across my desk, you know, because our 9,500 actually does support multi-actuator. And, and, and so it was really neat after I've been working with hard drives for 22 years, and I remember when they could do 30 megabytes a second, and that was amazing. That was like, wow, 30 megabytes a second. And then, about 15 years ago, they hit around 200 to 250 megabytes a second, and they stayed there. They haven't gone anywhere. What they have done is they've increased the density so that you can have more storage. So you can easily go out and buy a 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is they've added multiple actuators. So each one of these can do its own streaming and each one of these can actually do their own seeking. So you can get two and four. And I've even seen talk about, you know, eight actuators per disk. I, I don't think that, I think that's still theory, but, but they could implement those. So that's one of the things that we're seeing. >> Paul: Old technology somehow finds a way to, to remain current. >> It does. >> Even it does even in the face of new alternatives. Kim Leyenaar, Storage Performance Architect at Broadcom, thanks so much for being here with us today. >> Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)
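Kim's PCIe figures above are easy to sanity-check with back-of-the-envelope arithmetic. The short sketch below uses only the publicly documented per-lane PCIe transfer rates (not Broadcom measurements) to show why a Gen 4 x16 link tops out around 31-32 GB/s raw, which makes the roughly 28 GB/s she quotes for the 9500 series plausible, and why the move to Gen 5 doubles it.

```python
# Back-of-the-envelope PCIe bandwidth check (figures are the public PCIe specs,
# not vendor measurements; delivered throughput is lower after protocol overhead).

GT_PER_LANE = {"gen3": 8.0, "gen4": 16.0, "gen5": 32.0}            # giga-transfers/s per lane
ENCODING = {"gen3": 128 / 130, "gen4": 128 / 130, "gen5": 128 / 130}  # 128b/130b line code

def raw_gbytes_per_s(gen: str, lanes: int) -> float:
    """Raw link bandwidth in GB/s, one direction, before packet overhead."""
    gbits = GT_PER_LANE[gen] * ENCODING[gen] * lanes
    return gbits / 8

for gen in ("gen4", "gen5"):
    for lanes in (8, 16):
        print(f"PCIe {gen} x{lanes}: ~{raw_gbytes_per_s(gen, lanes):.1f} GB/s raw")

# Gen 4 x16 is ~31.5 GB/s raw, so ~28 GB/s delivered is roughly 89% of the link;
# Gen 5 x16 is ~63 GB/s raw, the "doubling" described above.
print(f"observed / raw (Gen 4 x16) = {28 / raw_gbytes_per_s('gen4', 16):.0%}")
```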
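The multi-actuator point can be made concrete the same way. This toy model simply multiplies the single-actuator streaming plateau Kim describes by the actuator count; the 250 MB/s figure comes from her remarks, while the efficiency knob and the eight-actuator case are hypothetical, since she notes those are still theoretical.

```python
# Toy model of multi-actuator hard drive streaming throughput.
# Assumes each actuator sustains the single-actuator plateau independently;
# real drives give up a little to shared electronics and the host interface.

SINGLE_ACTUATOR_MB_S = 250  # rough plateau cited in the interview

def streaming_mb_per_s(actuators: int, efficiency: float = 1.0) -> float:
    return SINGLE_ACTUATOR_MB_S * actuators * efficiency

for n in (1, 2, 4, 8):  # 8-actuator drives are still theoretical
    print(f"{n} actuator(s): ~{streaming_mb_per_s(n):,.0f} MB/s streaming")
```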

Published Date : Nov 16 2022


Justin Emerson, Pure Storage | SuperComputing 22


 

(soft music) >> Hello, fellow hardware nerds and welcome back to Dallas Texas where we're reporting live from Supercomputing 2022. My name is Savannah Peterson, joined with the John Furrier on my left. >> Looking good today. >> Thank you, John, so are you. It's been a great show so far. >> We've had more hosts, more guests coming than ever before. >> I know. >> Amazing, super- >> We've got a whole thing going on. >> It's been a super computing performance. >> It, wow. And, we'll see how many times we can say super on this segment. Speaking of super things, I am in a very unique position right now. I am a flanked on both sides by people who have been doing content on theCUBE for 12 years. Yes, you heard me right, our next guest was on theCUBE 12 years ago, the third event, was that right, John? >> Man: First ever VM World. >> Yeah, the first ever VM World, third event theCUBE ever did. We are about to have a lot of fun. Please join me in welcoming Justin Emerson of Pure Storage. Justin, welcome back. >> It's a pleasure to be here. It's been too long, you never call, you don't write. (Savannah laughs) >> Great to see you. >> Yeah, likewise. >> How fun is this? Has the set evolved? Is everything looking good? >> I mean, I can barely remember what happened last week, so. (everyone laughs) >> Well, I remember lot's changed that VM world. You know, Paul Moritz was the CEO if you remember at that time. His actual vision actually happened but not the way, for VMware, but the industry, the cloud, he called the software mainframe. We were kind of riffing- >> It was quite the decade. >> Unbelievable where we are now, how we got here, but not where we're going to be. And you're with Pure Storage now which we've been, as you know, covering as well. Where's the connection into the supercomputing? Obviously storage performance, big part of this show. >> Right, right. >> What's the take? >> Well, I think, first of all it's great to be back at events in person. We were talking before we went on, and it's been so great to be back at live events now. It's been such a drought over the last several years, but yeah, yeah. So I'm very glad that we're doing in person events again. For Pure, this is an incredibly important show. You know, the product that I work with, with FlashBlade is you know, one of our key areas is specifically in this high performance computing, AI machine learning kind of space. And so we're really glad to be here. We've met a lot of customers, met a lot of other folks, had a lot of really great conversations. So it's been a really great show for me. And also just seeing all the really amazing stuff that's around here, I mean, if you want to find, you know, see what all the most cutting edge data center stuff that's going to be coming down the pipe, this is the place to do it. >> So one of the big themes of the show for us and probably, well, big theme of your life, is balancing power efficiency. You have a product in this category, Direct Flash. Can you tell us a little bit more about that? >> Yeah, so Pure as a storage company, right, what do we do differently from everybody else? And if I had to pick one thing, right, I would talk about, it's, you know, as the name implies, we're an all, we're purely flash, we're an all flash company. We've always been, don't plan to be anything else. And part of that innovation with Direct Flash is the idea of rather than treating a solid state disc as like a hard drive, right? 
Treat it as it actually is, treat it like what it really is, and that's a very different kind of thing. And so Direct Flash is all about bringing native Flash interfaces to our product portfolio. And what's really exciting for me as a FlashBlade person, is now that's also part of our FlashBlade S portfolio, which just launched in June. And so the benefits of that are myriad. But, you know, talking about efficiency, the biggest difference is that, you know, we can use like 90% less DRAM in our drives, which you know, everything uses, everything that you put in a drive uses power, it adds cost and all those things, and so that really gives us an efficiency edge over everybody else, and at a show like this, where, I mean, you walk the aisles and there's there's people doing liquid cooling and so much immersion stuff, and the reason they're doing that is because power is just increasing everywhere, right? So if you can figure out how to use less power in some areas, that means you can shift that budget to other places. So if you can talk to a customer and say, well, if I could shrink your power budget for storage by two thirds or even, save you two-thirds of power, how many more accelerators, how many more CPUs, how much more work could you actually get done? So really exciting. >> I mean, less power consumption, more power and compute. >> Right. >> Kind of power center. So talk about the AI implications, where the use cases are. What are you seeing here? A lot of simulations, a lot of students, again, dorm room to the boardroom we've been saying here on theCUBE, this is a great broad area, where's the action in the ML and the AI for you guys? >> So I think, not necessarily storage related, but I think that right now there's this enormous explosion of custom silicon around AI machine learning, which I as a, you said welcome hardware nerds at the beginning and I was like, ah, my people. >> We're all here, we're all here in Dallas. >> So wonderful. You know, as a hardware nerd we're talking about conferences, right? Anyone who has ever attended Hot Chips knows there's so much really amazing engineering work going on in the silicon space. It's probably the most exciting time for CPU and accelerator, just innovation in, since the days before x86 was the de facto standard, right? And you could go out and buy a different workstation with 16 different ISAs. That's really the most exciting thing. I walked past so many different places where, you know, our booth is right next to Habana Labs with their Gaudi accelerator, and they're doing this cute thing with one of the AI image generators in their booth, which is really cute. >> Woman: We're going to have to go check that out. >> Yeah, but that to me is like one of the more exciting things around like innovation at a, especially at a show like this where it's all about how do we move forward, the state of the art. >> What's different now than just a few years ago in terms of what's opening up the creativity for people to look at things that they could do with some of the scale that's different now? >> Yeah well, I mean, every time the state of the art moves forward what it means is, is that the entry level gets better, right? So if the high end is going faster, that means that the mid-range is going faster, and that means the entry level is going faster. So every time it pushes the boundary forward, it's a rising tide that floats all boats.
And so now, the kind of stuff that's possible to do, if you're a student in a dorm room or if you're an enterprise, the world of the possible just keeps expanding dramatically and expanding almost, you know, geometrically, like the amount of data that we have, as a storage guy, I was coming back to data, but the amount of data that we have and the amount of compute that we have, and it's not just about the raw compute, but also the advances in all sorts of other things in terms of algorithms and transfer learning and all these other things. There's so much amazing work going on in this area and it's just kind of this Cambrian explosion of innovation in the area. >> I love that you touched on the user experience for the community, no matter the level that you're at. >> Yeah. >> And I, it's been something that's come up a lot here. Everyone wants to do more faster, always, but it's not just that, it's about making the experience and the point of entry into this industry more approachable and digestible for folks who may not be familiar, I mean we have every end of the ecosystem here, on the show floor, where does Pure Storage sit in the whole game? >> Right, so as a storage company, right? AI is all about deriving insights from data, right? And so everyone remembers that magazine cover, data's the new oil, right? And it's kind of like, okay, so what do you do with it? Well, how do you derive value from all of that data? And AI machine learning and all of this supercomputing stuff is about how do we take all this data? How do we innovate with it? And so if you want data to innovate with, you need storage. And so, you know, our philosophy is that how do we make the best storage platforms that we can, using the best technology, for our customers, that enable them to do really amazing things with AI machine learning, and we've got different products, but, you know, at the show here, what we're specifically showing off is our new FlashBlade S product, which, you know, I know we've had Pure folks on theCUBE before talking about FlashBlade, but for viewers out there, FlashBlade is our scale-out unstructured data platform, and AI and machine learning and supercomputing is all about unstructured data. It's about sensor data, it's about imaging, it's about, you know, photogrammetry, all these other kinds of amazing stuff. But, you got to land all that somewhere. You got to process that all somewhere. And so really high performance, high throughput, highly scalable storage solutions are really essential. It's an enabler for all of the amazing other kinds of engineering work that goes on at a place like Supercomputing.
And a lot of times people collect data and then it will end up on, you know, lower slower tiers and then suddenly they want to do something with it. And it's like, well now what do I do, right? And so there's all these people that are reevaluating you know, we, when we developed FlashBlade we sort of made this bet that unstructured data was going to become the new tier one data. It used to be that we thought unstructured data, it was emails and home directories and all that stuff the kind of stuff that you didn't really need a really good DR plan on. It's like, ah, we could, now of course, as soon as email goes down, you realize how important email is. But, the perspectives that people had on- >> Yeah, exactly. (all laughing) >> The perspectives that people had on unstructured data and it's value to the business was very different and so now- >> Good bet, by the way. >> Yeah, thank you. So now unstructured data is considered, you know, where companies are going to derive their value from. So it's whether they use the data that they have to build better products whether it's they use the data they have to develop you know, improvements in processes. All those kinds of things are data driven. And so all of the new big advancements in industry and in business are all about how do I derive insights from data? And so machine learning and AI has something to do with that, but also, you know, it all comes back to having data that's available. And so, we're working very hard on building platforms that customers can use to enable all of this really- >> Yeah, it's interesting, Savannah, you know, the top three areas we're covering for reinventing all the hyperscale events is data. How does it drive innovation and then specialized solutions to make customers lives easier? >> Yeah. >> It's become a big category. How do you compose stuff and then obviously compute, more and more compute and services to make the performance goes. So those seem to be the three hot areas. So, okay, data's the new oil refineries. You've got good solutions. What specialized solutions do you see coming out because once people have all this data, they might have either large scale, maybe some edge use cases. Do you see specialized solutions emerging? I mean, obviously it's got DPU emerging which is great, but like, do you see anything else coming out at that people are- >> Like from a hardware standpoint. >> Or from a customer standpoint, making the customer's lives easier? So, I got a lot of data flowing in. >> Yeah. >> It's never stopping, it keeps powering in. >> Yeah. >> Are there things coming out that makes their life easier? Have you seen anything coming out? >> Yeah, I think where we are as an industry right now with all of this new technology is, we're really in this phase of the standards aren't quite there yet. Everybody is sort of like figuring out what works and what doesn't. You know, there was this big revolution in sort of software development, right? Where moving towards agile development and all that kind of stuff, right? The way people build software change fundamentally this is kind of like another wave like that. I like to tell people that AI and machine learning is just a different way of writing software. What is the output of a training scenario, right? It's a model and a model is just code. 
And so I think that as all of these different, parts of the business figure out how do we leverage these technologies, what it is, is it's a different way of writing software and it's not necessarily going to replace traditional software development, but it's going to augment it, it's going to let you do other interesting things and so, where are things going? I think we're going to continue to start coalescing around what are the right ways to do things. Right now we talk about, you know, ML Ops and how development and the frameworks and all of this innovation. There's so much innovation, which means that the industry is moving so quickly that it's hard to settle on things like standards and, or at least best practices you know, at the very least. And that the best practices are changing every three months. Are they really best practices right? So I think, right, I think that as we progress and coalesce around kind of what are the right ways to do things that's really going to make customers' lives easier. Because, you know, today, if you're a software developer you know, we build a lot of software at Pure Storage right? And if you have people and developers who are familiar with how the process, how the factory functions, then their skills become portable and it becomes easier to onboard people and AI is still nothing like that right now. It's just so, so fast moving and it's so- >> Wild West kind of. >> It's not standardized. It's not industrialized, right? And so the next big frontier in all of this amazing stuff is how do we industrialize this and really make it easy to implement for organizations? >> Oil refineries, industrial Revolution. I mean, it's on that same trajectory. >> Yeah. >> Yeah, absolutely. >> Or industrial revolution. (John laughs) >> Well, we've talked a lot about the chaos and sort of we are very much at this early stage stepping way back and this can be your personal not Pure Storage opinion if you want. >> Okay. >> What in HPC or AIML I guess it all falls under the same umbrella, has you most excited? >> Ooh. >> So I feel like you're someone who sees a lot of different things. You've got a lot of customers, you're out talking to people. >> I think that there is a lot of advancement in the area of natural language processing and I think that, you know, we're starting to take things just like natural language processing and then turning them into vision processing and all these other, you know, I think the, the most exciting thing for me about AI is that there are a lot of people who are, you are looking to use these kinds of technologies to make technology more inclusive. And so- >> I love it. >> You know the ability for us to do things like automate captioning or the ability to automate descriptive, audio descriptions of video streams or things like that. I think that those are really,, I think they're really great in terms of bringing the benefits of technology to more people in an automated way because the challenge has always been bandwidth of how much a human can do. And because they were so difficult to automate and what AI's really allowing us to do is build systems whether that's text to speech or whether that's translation, or whether that's captioning or all these other things. I think the way that AI interfaces with humans is really the most interesting part. And I think the benefits that it can bring there because there's a lot of talk about all of the things that it does that people don't like or that they, that people are concerned about. 
But I think it's important to think about all the really great things that maybe don't necessarily personally impact you, but to the person who's not cited or to the person who you know is hearing impaired. You know, that's an enormously valuable thing. And the fact that those are becoming easier to do they're becoming better, the quality is getting better. I think those are really important for everybody. >> I love that you brought that up. I think it's a really important note to close on and you know, there's always the kind of terminator, dark side that we obsess over but that's actually not the truth. I mean, when we think about even just captioning it's a tool we use on theCUBE. It's, you know, we see it on our Instagram stories and everything else that opens the door for so many more people to be able to learn. >> Right? >> And the more we all learn, like you said the water level rises together and everything is magical. Justin, it has been a pleasure to have you on board. Last question, any more bourbon tasting today? >> Not that I'm aware of, but if you want to come by I'm sure we can find something somewhere. (all laughing) >> That's the spirit, that is the spirit of an innovator right there. Justin, thank you so much for joining us from Pure Storage. John Furrier, always a pleasure to interview with you. >> I'm glad I can contribute. >> Hey, hey, that's the understatement of the century. >> It's good to be back. >> Yeah. >> Hopefully I'll see you guys in, I'll see you guys in 2034. >> No. (all laughing) No, you've got the Pure Accelerate conference. We'll be there. >> That's right. >> We'll be there. >> Yeah, we have our Pure Accelerate conference next year and- >> Great. >> Yeah. >> I love that, I mean, feel free to, you know, hype that. That's awesome. >> Great company, great runs, stayed true to the mission from day one, all Flash, continue to innovate congratulations. >> Yep, thank you so much, it's pleasure being here. >> It's a fun ride, you are a joy to talk to and it's clear you're just as excited as we are about hardware, so thanks a lot Justin. >> My pleasure. >> And thank all of you for tuning in to this wonderfully nerdy hardware edition of theCUBE live from Dallas, Texas, where we're at, Supercomputing, my name's Savannah Peterson and I hope you have a wonderful night. (soft music)
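As a purely illustrative aside on the power argument Justin makes about Direct Flash earlier in the conversation: if the storage tier's share of a rack budget drops by two-thirds, the freed watts can be spent on accelerators. Only the two-thirds ratio comes from his remarks; the rack budget, storage draw, and per-GPU wattage below are invented for the sketch.

```python
# Hypothetical rack power-budget reallocation (illustrative numbers only).

rack_budget_w = 30_000   # assumed rack power cap
storage_w = 6_000        # assumed draw of the storage tier today
gpu_w = 700              # assumed draw of one accelerator

storage_after_w = storage_w * (1 / 3)   # "save two-thirds of power" on storage
freed_w = storage_w - storage_after_w
extra_gpus = int(freed_w // gpu_w)

print(f"freed for compute: {freed_w:,.0f} W -> room for ~{extra_gpus} more accelerators")
```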

Published Date : Nov 16 2022


Lucas Snyder, Indiana University and Karl Oversteyns, Purdue University | SuperComputing 22


 

(upbeat music) >> Hello, beautiful humans and welcome back to Supercomputing. We're here in Dallas, Texas giving you live coverage with theCUBE. I'm joined by David Nicholson. Thank you for being my left arm today. >> Thank you Savannah. >> It's a nice little moral. Very excited about this segment. We've talked a lot about how the fusion between academia and the private sector is a big theme at this show. You can see multiple universities all over the show floor as well as many of the biggest companies on earth. We were very curious to learn a little bit more about this from people actually in the trenches. And we are lucky to be joined today by two Purdue students. We have Lucas and Karl. Thank you both so much for being here. >> One Purdue, one IU, I think. >> Savannah: Oh. >> Yeah, yeah, yeah. >> I'm sorry. Well then wait, let's give Indiana University their fair do. That's where Lucas is. And Karl is at Purdue. Sorry folks. I apparently need to go back to school to learn how to read. (chuckles) In the meantime, I know you're in the middle of a competition. Thank you so much for taking the time out. Karl, why don't you tell us what's going on? What is this competition? What brought you all here? And then let's dive into some deeper stuff. >> Yeah, this competition. So we're a joint team between Purdue and IU. We've overcome our rivalries, age old rivalries to computer at the competition. It's a multi-part competition where we're going head to head against other teams from all across the world, benchmarking our super computing cluster that we designed. >> Was there a moment of rift at all when you came together? Or was everyone peaceful? >> We came together actually pretty nicely. Our two advisors they were very encouraging and so we overcame that, no hostility basically. >> I love that. So what are you working on and how long have you guys been collaborating on it? You can go ahead and start Lucas. >> So we've been prepping for this since the summer and some of us even before that. >> Savannah: Wow. >> And so currently we're working on the application phase of the competition. So everybody has different specialties and basically the competition gives you a set of rules and you have to accomplish what they tell you to do in the allotted timeframe and run things very quickly. >> And so we saw, when we came and first met you, we saw that there are lights and sirens and a monitor looking at the power consumption involved. So part of this is how much power is being consumed. >> Karl: That's right. >> Explain exactly what are the what are the rules that you have to live within? >> So, yeah, so the main constraint is the time as we mentioned and the power consumption. So for the benchmarking phase, which was one, two days ago there was a hard camp of 3000 watts to be consumed. You can't go over that otherwise you would be penalized for that. You have to rerun, start from scratch basically. Now there's a dynamic one for the application section where it's it modulates at random times. So we don't know when it's going to go down when it's going to go back up. So we have to adapt to that in real time. >> David: Oh, interesting. >> Dealing with a little bit of real world complexity I guess probably is simulation is here. I think that's pretty fascinating. I want to know, because I am going to just confess when I was your age last week, I did not understand the power of supercomputing and high performance computing. Lucas, let's start with you. 
How did you know this was the path you wanted to go down in your academic career? >> David: Yeah, what's your background? >> Yeah, give us some. >> So my background is intelligence systems engineering which is kind of a fusion. It's between, I'm doing bioengineering and then also more classical computer engineering. So my background is biology actually. But I decided to go down this path kind of on a whim. My professor suggested it and I've kind of fallen in love with it. I did my summer internship doing HPC and I haven't looked back. >> When did you think you wanted to go into this field? I mean, in high school, did you have a special teacher that sparked it? What was it? >> Lucas: That's funny that you say that. >> What was in your background? >> Yes, I mean, in high school towards the end I just knew that, I saw this program at IU and it's pretty new and I just thought this would be a great opportunity for me and I'm loving it so far. >> Do you have family in tech or is this a different path for you? >> Yeah, this is a different path for me, but my family is so encouraging and they're very happy for me. They text me all the time. So I couldn't be happier. >> Savannah: Just felt that in my heart. >> I know. I was going to say for the parents out there get the tissue out. >> Yeah, yeah, yeah. (chuckles) >> These guys they don't understand. But, so Karl, what's your story? What's your background? >> My background, I'm a major in unmanned Aerial systems. So this is a drones commercial applications not immediately connected as you might imagine although there's actually more overlap than one might think. So a lot of unmanned systems today a lot of it's remote sensing, which means that there's a lot of image processing that takes place. Mapping of a field, what have you, or some sort of object, like a silo. So a lot of it actually leverages high performance computing in order to map, to visualize much replacing, either manual mapping that used to be done by humans in the field or helicopters. So a lot of cost reduction there and efficiency increases. >> And when did you get this spark that said I want to go to Purdue? You mentioned off camera that you're from Belgium. >> Karl: That's right. >> Did you, did you come from Belgium to Purdue or you were already in the States? >> No, so I have family that lives in the States but I grew up in Belgium. >> David: Okay. >> I knew I wanted to study in the States. >> But at what age did you think that science and technology was something you'd be interested in? >> Well, I've always loved computers from a young age. I've been breaking computers since before I can remember. (chuckles) Much to my parents dismay. But yeah, so I've always had a knack for technology and that's sort of has always been a hobby of mine. >> And then I want to ask you this question and then Lucas and then Savannah will get some time. >> Savannah: It cool, will just sit here and look pretty. >> Dream job. >> Karl: Dream job. >> Okay. So your undergrad both you. >> Savannah: Offering one of my questions. Kind of, It's adjacent though. >> Okay. You're undergrad now? Is there grad school in your future do you feel that's necessary? Is that something you want to pursue? >> I think so. Entrepreneurship is something that's been in the back of my head for a while as well. So may be or something. >> So when I say dream job, understand could be for yourself. >> Savannah: So just piggyback. >> Dream thing after academia or stay in academia. What's do you think at this point? 
>> That's a tough question. You're asking. >> You'll be able to review this video in 10 years. >> Oh boy. >> This is give us your five year plan and then we'll have you back on theCUBE and see 2027. >> What's the dream? There's people out here watching this. I'm like, go, hey, interesting. >> So as I mentioned entrepreneurship I'm thinking I'll start a company at some point. >> David: Okay. >> Yeah. In what? I don't know yet. We'll see. >> David: Lucas, any thoughts? >> So after graduation, I am planning to go to grad school. IU has a great accelerated master's degree program so I'll stay an extra year and get my master's. Dream job is, boy, that's impossible to answer but I remember telling my dad earlier this year that I was so interested in what NASA was doing. They're sending a probe to one of the moons of Jupiter. >> That's awesome. From a parent's perspective the dream often is let's get the kids off the payroll. So I'm sure that your families are happy to hear that you have. >> I think these two will be right in that department. >> I think they're going to be okay. >> Yeah, I love that. I was curious, I want to piggyback on that because I think when NASA's doing amazing we have them on the show. Who doesn't love space. >> Yeah. >> I'm also an entrepreneur though so I very much empathize with that. I was going to ask to your dream job, but also what companies here do you find the most impressive? I'll rephrase. Because I was going to say, who would you want to work with? >> David: Anything you think is interesting? >> But yeah. Have you even had a chance to walk the floor? I know you've been busy competing >> Karl: Very little. >> Yeah, I was going to say very little. Unfortunately I haven't been able to roam around very much. But I look around and I see names that I'm like I can't even, it's crazy to see them. Like, these are people who are so impressive in the space. These are people who are extremely smart. I'm surrounded by geniuses everywhere I look, I feel like, so. >> Savannah: That that includes us. >> Yeah. >> He wasn't talking about us. Yeah. (laughs) >> I mean it's hard to say any of these companies I would feel very very lucky to be a part of, I think. >> Well there's a reason why both of you were invited to the party, so keep that in mind. Yeah. But so not a lot of time because of. >> Yeah. Tomorrow's our day. >> Here to get work. >> Oh yes. Tomorrow gets play and go talk to everybody. >> Yes. >> And let them recruit you because I'm sure that's what a lot of these companies are going to be doing. >> Yeah. Hopefully it's plan. >> Have you had a second at all to look around Karl. >> A Little bit more I've been going to the bathroom once in a while. (laughs) >> That's allowed I mean, I can imagine that's a vital part of the journey. >> I've ruin my gaze a little bit to what's around all kinds of stuff. Higher education seems to be very important in terms of their presence here. I find that very, very impressive. Purdue has a big stand IU as well, but also others all from Europe as well and Asia. I think higher education has a lot of potential in this field. >> David: Absolutely. >> And it really is that union between academia and the private sector. We've seen a lot of it. But also one of the things that's cool about HPC is it's really not ageist. It hasn't been around for that long. So, I mean, well, at this scale it's obviously this show's been going on since 1988 before you guys were even probably a thought. But I think it's interesting. 
It's so fun to get to meet you both. Thank you for sharing about what you're doing and what your dreams are. Lucas and Karl. >> David: Thanks for taking the time. >> I hope you win and we're going to get you off the show here as quickly as possible so you can get back to your teams and back to competing. David, great questions as always, thanks for being here. And thank you all for tuning in to theCUBE Live from Dallas, Texas, where we are at Supercomputing. My name's Savannah Peterson and I hope you're having a beautiful day. (gentle upbeat music)
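For readers unfamiliar with how the power rules the team describes actually constrain a Student Cluster Competition build, here is a small hypothetical sizing sketch: how many nodes fit under the 3,000-watt hard cap mentioned in the interview, and what energy efficiency a benchmark run at that cap would imply. The per-node draw and the HPL result are made up for illustration; only the 3,000 W cap comes from the conversation.

```python
# Hypothetical sizing against the Student Cluster Competition power cap.

POWER_CAP_W = 3_000    # hard cap mentioned in the interview
node_draw_w = 550      # assumed draw of one GPU-dense node under load (invented)
hpl_tflops = 60.0      # assumed HPL result for the whole cluster (invented)

max_nodes = POWER_CAP_W // node_draw_w
gflops_per_watt = (hpl_tflops * 1_000) / POWER_CAP_W

print(f"nodes that fit under the cap: {max_nodes}")
print(f"implied efficiency at the cap: {gflops_per_watt:.1f} GFLOPS/W")
```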

Published Date : Nov 16 2022


Kirk Bresniker, HPE | SuperComputing 22


 

>>Welcome back, everyone, live here at Supercomputing 22 in Dallas, Texas. I'm John Furrier, host of theCUBE, here with Paul Gillin, editor of Silicon Angle, getting all the stories, bringing it to you live. Supercomputer TV is theCUBE right now. And bringing all the action, Kirk Bresniker, chief architect of Hewlett Packard Labs at HPE, a Cube alumni, here to talk about supercomputing's road to quantum. Kirk, great to see you. Thanks for coming on. >> Thanks for having me guys. Great to be >>Here. So Paul and I were talking and we've been covering, you know, computing as we get into the large scale cloud; now on premises compute has been one of those things that just never stops. No one ever, I never heard someone say, I wanna run my application or workload on slower, slower hardware or processor or horsepower. Computing continues to go, but this, we're at a step function. It feels like we're at a level where we're gonna unleash new, new creativity, new use cases. You've been kind of working on this for many, many years at HPE, Hewlett Packard Labs, I remember The Machine and all the predecessor R&D. Where are we right now from your standpoint, HPE's standpoint? Where are you in the computing? It's as a service, everything's changing. What's your view? >>So I think, you know, you capture it so well. You think of the capabilities that you create. You create these systems and you engineer these amazing products and then you think, whew, it doesn't get any better than that. And then you remind yourself as an engineer. But wait, actually it has to, right? It has to because we need to continuously provide that next generation of scientists and engineers and artists and leaders with the, with the tools that can do more and do more frankly with less. Because while we never want to run the programs slower, we sure do wanna run them for less energy. And figuring out how we accomplish all of those things, I think is, is really where it's gonna be fascinating. And, and it's also, we think about that, we think about that exascale data center, billion billion operations per second, the new science, arts and engineering that we'll create. And yet it's also what's beyond, what's beyond that data center. How do we hook it up to those fantastic scientific instruments that are capable of generating so much information? We need to understand how we couple all of those things together. So I agree, we are at, at an amazing opportunity to raise the aspirations of the next generation. At the same time we have to think about what's coming next in terms of the technology. Is silicon the only answer for us to continue to advance? >>You know, one of the big conversations is like refactoring, replatforming, we have a booth behind us that's doing energy. You can build it in data centers for compute. There's all kinds of new things. Is there anything in the paradigm of computing and now on the road to quantum, which I know you're involved in, I saw you have on LinkedIn, you have an open req for that. What paradigm elements are changing that weren't in play a few years ago that you're looking at right now as you look at the 20-mile stare into quantum? >>So I think for us it's fascinating because we've had a tailwind at our backs my whole career, 33 years at HP. And what I could count on was transistors: at first they got cheaper, faster and they used less energy. And then, you know, that slowed down a little bit. Now they're still cheaper and faster.
As we look at that, and as Moore's law continues to flatten out, there has to be something better to do than, you know, yet another copy of the prior design, opening up that diversity of approach. And whether that is the amazing wafer-scale accelerators, we see these application-specific silicon, and then broadening out even farther, next to the silicon. Here's the analog computational accelerator, here is now the, the emergence of a potential quantum accelerator. So seeing that diversity of approaches, but what has to happen is we need to harness all of those efficiencies, and yet we still have to realize that there are human beings that need to create the application. So how do we bridge, how do we accommodate the physics of, of new kinds of accelerators? How do we imagine the cyber-physical connection to the, to the rest of the supercomputer? And then finally, how do we bridge that productivity gap? Especially not for people like me who have been around for a long time, we wanna think about that next generation, cuz they're the ones that need to solve the problems and write the code that will do it. >>You mentioned what exists beyond silicon. In fact, are you looking at different kinds of materials that computers in the future will be built upon? >>Oh absolutely. You think of when, when we, we look at the quantum, the quantum modalities, then, you know, whether it is a trapped ion or a superconducting, a piece of silicon, or it is a neutral atom. There's just, there's about half a dozen of these novel systems, because really what we're doing when we're using a a quantum mechanical computer, we're creating a tiny universe. We're putting a little bit of material in there and we're manipulating it at, at the subatomic level, harnessing the power of, of quantum physics. That's an incredible challenge. And it will take novel materials, novel capabilities that we aren't just used to seeing. Not many people have a helium supplier in their data center today, but some of them might tomorrow. And understanding, again, how do we incorporate, industrialize, and then scale all of these technologies. >>I wanna talk turkey about quantum, because we've been talking for, for five years. We've heard a lot of hyperbole about quantum. We've seen some of your competitors announcing quantum computers in the cloud. I don't know who's using these, these computers, what kind of work they're being used for. How real is quantum today? How close are we to having workable, true quantum computers, and can you point to any examples of how that technology is being used in the >>Field? So it, it remains nascent. We'll put it that way. I think part of the challenge is we see this low-level technology, and of course it was, you know, Professor Richard Feynman who first pointed us in this direction, you know, more than 30 years ago. And you know, I I I trust his judgment. Yes. You know that there's probably some there there, especially for what he was doing, which is how do we understand and engineer systems at the quantum mechanical level. Well, he said a quantum mechanical system's probably the way to go. So understanding that, but still part of the challenge we see is that people have been working on the low-level technology and they're reaching up, wondering, will I eventually have a problem that that I can solve? And the challenge is you can improve something every single day, and if you don't know where the bar is, then you don't ever know if you'll be good enough.
>>I think part of the approach that we'd like to understand is, can we start with the problem, the thing that we actually want to solve, and then figure out what is the bespoke combination of classical supercomputing, advanced AI accelerators, novel quantum capabilities. Can we simulate and design that? And we think there's probably nothing better to do that than an exascale supercomputer. Yeah. Can we simulate and design that bespoke environment, create that digital twin of this environment, and if we, we've simulated it, we've designed it, we can analyze it, see is it actually advantageous? Cuz if it's not, then we probably should go back to the drawing board. And then finally, that then becomes the way in which we actually run the quantum mechanical system in this hybrid environment. >>So it's nascent and you guys are feeling your way through, you get some moonshots, you work backwards from use cases as a, as more of a discovery, navigational kind of mission piece. I get that. And exascale has been a great milestone for you guys. Congratulations. Have there been strides though in quantum this year? Can you point to what's been, has the needle moved a little bit, a lot, or, I mean it's moving I guess, there's been some talk but we haven't really been able to put our finger on what's moving, like where's the needle moved I >>Guess in quantum. And I think, I think that's part of the conversation that we need to have is how do we measure ourselves. I know at the World Economic Forum Quantum Development Network, we had one of our global future councils on the future of quantum computing. And I brought in an IEEE fellow, Paolo Gargini, who, you know, created the international technology roadmap for semiconductors. And I said, Paolo, could you come in and and give us examples: how was the semiconductor community so effective not only at developing the technology but predicting the development of technology, so that whether it's an individual deciding if they should change careers, or it's a nation state deciding if they should spend a couple billion dollars, we have that tool to predict the rate of change and improvement. And so I think that's part of what we're hoping participating will bring, some of that roadmapping skill and technology and understanding, so we can make those better reasoned investments. >>Well, it's also fun to see supercomputing this year look at the bigger picture, obviously software, cloud natives running modern applications, infrastructure as code, that's happening. You're starting to see the integration of, of environments, almost like a global distributed operating system. That's the way I call it. Silicon and advancements have been a big part of what we see now. Merchant silicon, but also DPUs are on the scene. So the role of silicon is there. And also we have supply chain problems. So how, how do you look at that as a, a chief architect of Hewlett Packard Labs? Because not only do you have to invent the future and dream it up, but you gotta deal with the realities, and the realities are silicon's great, we need more of that, quantum's around the corner, but supply chain, how do you solve that? What's your thoughts and how, how is HPE looking at silicon innovation and, and supply chain? >>And so for us it, it is really understanding that partnership model and understanding and contributing.
And so I will do things like, I happen to be the, the systems and architectures chapter editor for the IEEE International Roadmap for Devices and Systems, that community that wants to come together and provide that guidance. You know, so I'm all about telling the semiconductor and the post-semiconductor community, okay, this is where we need to compute. I have a partner in the applications and benchmarks that says, this is what we need to compute. And when you can predict in the future about where you need to compute, what you need to compute, you can have a much richer set of conversations, because you described it so well. And I think our, our senior fellow Nick Dubey would, he's coined the term "internet of workflows" where, you know, you need to harness everything from the edge device all the way through the exascale computer and beyond. And it's not just one sort of static thing. It is a very interesting fluid topology. I'll use this compute at the edge, I'll do this information in the cloud, I want to have this in my exascale data center, and I still need to provide the tools so that an individual who's making that decision can craft that workflow across all of those different resources. >>And those workflows, by the way, are complicated. Now you got services being turned on and off. Observability is a hot area. You got a lot more data in the cycle, in flow. I mean a lot more action. >>And I think you just hit on another key point for us. Part of our research at labs, I have, as part of my other assignments, I help draft our AI ethics global policies and principles, and not only is that about giving advice about, about how we should live our lives, it also became the basis for our AI research lab at Hewlett Packard Labs, because they saw, here's a challenge and here's something where I can't actually maintain my ethical compliance. I need to engineer new ways of, of achieving artificial intelligence. And so much of that comes back to governance over that data, and how can we actually create those governance systems and and do that out in the open >>That's a can of worms. We're gonna do a whole segment on that one, >>On that >>Technology, on that one >>Piece I wanna ask you, I mean, where rubber meets the road is where you're putting your dollars. So you've talked a lot, a lot of, a lot of areas of, of progress right now, where are you putting your dollars right now at Hewlett Packard Labs? >>Yeah, so I think when I draw, when I draw my 2030 vision slide, you know, for me the first column is about heterogeneous, right? How do we bring all of these novel computational approaches to be able to demonstrate their effectiveness, their sustainability, and also the productivity that we can drive from, from, from them. So that's my first column. My second column is that edge-to-exascale workflow, that I need to be able to harness all of those computational and data resources. I need to be aware of the energy consequence of moving data, of doing computation, and find all of that while still maintaining and solving for security and privacy. But the last thing, and, and that's, one was a how, one was a where. The last thing is a who, right? And is, is how do we take that subject matter expert? I think of a, a young engineer starting their career at HPE. It'll be very different than my 33 years. And part of it, you know, they will be undaunted by any, any scale. They will be cloud natives, maybe they're metaverse natives, they will demand to design in an open cooperative environment.
So for me it's thinking about that individual and how do I take those capabilities, heterogeneous, edge-to-exascale workflows, and then make them productive. And for me, that's, that's where we're putting our emphasis, on those three. When, where and >>Who. Yeah. And making it compatible for the next generation. We see the student cluster competition going on over there. This is the only show that we cover that we've been to that is from the dorm room to the boardroom, and that's cuz Supercomputing now is elevating up into that workflow, into integration, multiple environments, cloud, premise, edge, metaverse. This is like a whole nother world. >>And, and, but I think it's, it's the way that regardless of which human pursuit you're in, you know, everyone is going to demand simulation and modeling, AI, ML and massive data analytics. That's gonna be at the heart of, of everything. And that's what you see. That's what I love about coming here. This isn't just the way we're gonna do science. This is the way we're gonna do everything. >>We're gonna come by your booth, check it out. We've talked to some of the folks. HPE, obviously at HPE Discover this year, GreenLake was center stage, it's now consumption as a service for technology. Whole nother ballgame. Congratulations on, on all this. I would say the massive, I won't say pivot, but you know, a change >>It >>Is and how you guys >>Operate. And you know, it's funny, sometimes you think about the, the pivot to as-a-service as benefiting the customer, but as someone who has supported designs over decades, you know, that ability to, to, to operate at peak efficiency, to always keep in perfect operating order and to continuously change while still meeting the customer expectations, that actually allows us to deliver innovation to our customers faster than when we were delivering warranted individual packaged products. >>Kirk, thanks for coming on. Paul, great conversation here. You know, the road to quantum's gonna be paved through computing, supercomputing, software, integrated workflows, from the dorm room to the boardroom to theCUBE, bringing all the action here at Supercomputing 22. I'm John Furrier with Paul Gillin. Thanks for watching. We'll be right back.

Published Date : Nov 16 2022

SUMMARY :

Live from Supercomputing 22 in Dallas, John Furrier and Paul Gillin talk with Kirk Bresniker, chief architect of Hewlett Packard Labs, about the road from exascale computing to quantum. Bresniker argues that with Moore's law flattening, the future of computing is a diversity of approaches, from wafer-scale and application-specific accelerators to analog and quantum devices, and that exascale systems can be used to simulate and design the bespoke hybrid environments where quantum will eventually run. He describes quantum as still nascent, calls for semiconductor-style roadmapping to measure progress, and outlines Hewlett Packard Labs' 2030 priorities: heterogeneous computing, edge-to-exascale workflows, and productivity for the next generation of engineers, all underpinned by data governance and the shift to as-a-service delivery.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Paul Gillin | PERSON | 0.99+
Nick Dubey | PERSON | 0.99+
Paul | PERSON | 0.99+
Bresniker | PERSON | 0.99+
Richard Fineman | PERSON | 0.99+
20 mile | QUANTITY | 0.99+
Hewlett Packard Labs | ORGANIZATION | 0.99+
Kirk | PERSON | 0.99+
Paulo | PERSON | 0.99+
tomorrow | DATE | 0.99+
33 years | QUANTITY | 0.99+
first column | QUANTITY | 0.99+
Jacque Forer | PERSON | 0.99+
Dallas, Texas | LOCATION | 0.99+
Shewl Packard Labs | ORGANIZATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
Kirk Bresniker | PERSON | 0.99+
John | PERSON | 0.99+
three | QUANTITY | 0.99+
today | DATE | 0.98+
hp | ORGANIZATION | 0.98+
Moore | PERSON | 0.98+
five years | QUANTITY | 0.98+
HPE | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
2030 | DATE | 0.97+
h Hewlett Packard Labs | ORGANIZATION | 0.97+
this year | DATE | 0.96+
one | QUANTITY | 0.96+
HP Cube | ORGANIZATION | 0.95+
GreenLake | ORGANIZATION | 0.93+
about half a dozen | QUANTITY | 0.91+
billion, | QUANTITY | 0.91+
World Economic Forum | ORGANIZATION | 0.9+
quantum Development Network | ORGANIZATION | 0.9+
few years ago | DATE | 0.88+
couple billion dollars | QUANTITY | 0.84+
more than 30 years ago | DATE | 0.84+
Gini | ORGANIZATION | 0.78+
Supercomputing Road to Quantum | TITLE | 0.68+
Supercomputing 22 | ORGANIZATION | 0.68+
Par | PERSON | 0.67+
billion operations per second | QUANTITY | 0.67+
Silicon Angle | ORGANIZATION | 0.66+
EEE | ORGANIZATION | 0.66+
single | QUANTITY | 0.66+
Turkey | ORGANIZATION | 0.56+
SuperComputing 22 | ORGANIZATION | 0.52+
Cube | ORGANIZATION | 0.48+
Exoscale | TITLE | 0.44+
International | TITLE | 0.4+

Anthony Dina, Dell Technologies and Bob Crovella, NVIDIA | SuperComputing 22


 

>>Howdy, y'all, and welcome back to Supercomputing 2022. We're theCUBE, and we are live from Dallas, Texas. I'm joined by my co-host, David Nicholson. David, hello. Hello. We are gonna be talking about data and enterprise AI at scale during this segment. And we have the pleasure of being joined by both Dell and NVIDIA. Anthony and Bob, welcome to the show. How you both doing? Doing good. >>Great. Great show so far. >>Love that enthusiasm, especially in the afternoon on day two. I think we all, what, what's in that cup? Is there something exciting in there that maybe we should all be sharing with you? >>Let's just say it's still, yeah, water. >>Yeah. Yeah. I love that. So I wanna make sure, cause we haven't talked about this at all during the show yet on theCUBE, I wanna make sure that everyone's on the same page when we're talking about data, unstructured versus structured data. It's in your title, Anthony, tell me, what's the difference? >>Well, look, the world has been based in analytics around rows and columns, spreadsheets, data warehouses, and we've made predictions around the forecast of sales, maintenance issues. But when we take computers and we give them eyes, ears, and fingers, cameras, microphones, and temperature and vibration sensors, we now translate that into more human experience. But that kind of data, the sensor data, that video camera, is unstructured or semi-structured. That's what that >>Means. We live in a world of unstructured data. Structure is something we add later, after the fact. But the world that we see and the world that we experience is unstructured data. And one of the promises of AI is to be able to take advantage of everything that's going on around us and augment that, improve that, solve problems based on that. And so if we're gonna do that job effectively, we can't just depend on structured data to get the problem done. We have to be able to incorporate everything that we can see, hear, taste, smell, touch, and use >>That as, >>As part of the problem >>Solving. We want the chaos, bring it. >>Chaos has been a little bit of a theme of our >>Show. It has been, yeah. And chaos is in the eye of the beholder. You, you think about, you think about the reason for structuring data to a degree. We had limited processing horsepower back when everything was being structured, as a way to allow us to be able to, to, to reason over it and gain insights. So it made sense to put things into rows and tables. How does, I'm curious, diving right into where NVIDIA fits into this, into this puzzle, how does NVIDIA accelerate or enhance our ability to glean insight from or reason over unstructured data in particular? >>Yeah, great question. It's really all about, I would say it's all about AI, and NVIDIA is a leader in the AI space. We've been investing and focusing on AI since at least 2012, if not before, and accelerated computing is how we do it at NVIDIA, it's an important part of it, really. We believe that AI is gonna revolutionize nearly every aspect of computing. Really nearly every aspect of problem solving, even nearly every aspect of programming. And one of the reasons, for what we're talking about now, is that being able to incorporate unstructured data into problem solving is really critical to being able to solve the next generation of problems. AI unlocks tools and methodologies that we can realistically do that with.
It's not realistic to write procedural code that's gonna look at a picture and solve all the problems that we need to solve if we're talking about a complex problem like autonomous driving. But with AI and its ability to naturally absorb unstructured data and make intelligent, reasoned decisions based on it, it's really a breakthrough. And that's what NVIDIA's been focusing on for at least a decade or more. >>And how does NVIDIA fit into Dell's strategy? >>Well, I mean, look, we've been partners for many, many years, delivering beautiful experiences on workstations and laptops. But as we see the transition away from taking something that was designed to make something pretty on screen to being useful in solving problems in life sciences, manufacturing, in other places, we work together to provide integrated solutions. So take for example the DGX A100 platform, brilliant design, revolutionary bus technologies, but the rocket ship can't go to Mars without the fuel. And so you need a tank that can scale in performance at the same rate as you throw GPUs at it. And so that's where the relationship really comes alive. We enable people to curate the data, organize it, and then feed those algorithms that get the answers that Bob's been talking about. >>So, so as a gamer, I must say you took a little shot at making things pretty on a screen. Come on. That was a low blow. That >>Was a low blow >>Sassy. What I, >>Now what's in your cup? That's what I wanna know, Dave, >>I apparently have the most boring cup of anyone here today. I don't know what happened. We're gonna have to talk to the production team. I'm looking at all of you. We're gonna have to make that better. One of the themes that's been on this show, and I love that you all embrace the chaos, is we're, we're seeing a lot of the trend in the experimentation phase, or stage rather. And it's, we're in an academic zone of it with AI. Companies are excited to adopt, but most companies haven't really rolled out their strategy. What is necessary for us to move from this kind of science experiment, science fiction in our heads, to practical application at scale? Well, >>Let me take this, Bob. So I've noticed there's a pattern of three levels of maturity. The first level is just what you described. It's about having an experience, proof of value, getting stakeholders on board, and then just picking out what technology, what algorithm do I need, what's my data source. That's all fun, but it is chaos. Over time, people start actually making decisions based on it. This moves us into production. And what's important there is normality, predictability, commonality across, but hidden and embedded in that is a center of excellence, the community of data scientists and business intelligence professionals sharing a common platform. In the last stage, we get hungry to replicate those results to other use cases, throwing even more information at it to get better accuracy and precision, but to do this in a budget you can afford. And so how do you figure out all the knobs and dials to turn in order to take billions of parameters and process that? That's where >>That's a casual decision matrix there, with billions of parameters? >>Yeah.
Oh, I mean, >>But you're right that >>That's, that's exactly what we're, we're on, this continuum. And this is where I think the partnership does really well, is to marry high-performant, enterprise-grade scalability that provides the consistency, the audit trail, all of the things you need to make sure you don't get in trouble, plus all of the horsepower to get to the results. Bob, what would you >>Add there? I think the thing that we've been talking about here is complexity. And there's complexity in the AI problem-solving space. There's complexity everywhere you look. And we talked about the idea that NVIDIA can help with some of that complexity from the architecture and the software development side of it. And Dell helps with that in a whole range of ways, not the least of which is the infrastructure and the server design and everything that goes into unlocking the performance of the technology that we have available to us today. So even the center of excellence is an example of how do I take this incredibly complex problem and simplify it down so that the real world can absorb and use this? And that's really what Dell and NVIDIA are partnering together to do. And that's really what the center of excellence is. It's an idea to help us say, let's take this extremely complex problem and extract some good value out of >>It. So what is NVIDIA's superpower in this realm? I mean, look, we are in, we, we are in the era of, yeah, yeah, yeah, we're, we're in a season of microprocessor manufacturers one-upping one another with their latest announcements. There's been an ebb and a flow in our industry between doing everything via the CPU versus offloading processes. NVIDIA comes up and says, hey, hold on a second, GPU, which again, was focused on graphics processing originally, doing something very, very specific. How does that translate today? What's the NVIDIA, again, what's, what's, what's the superpower? Because people will say, well, hey, I've got a, I've got a CPU, why do I need you? >>I think our superpower is accelerated computing, and that's really a hardware and software thing. I think your question is slanted towards the hardware side, which is, yes, it is very typical and we do make great processors, but the processor, the graphics processor that you talked about from 10 or 20 years ago was designed to solve a very complex task. And it was exquisitely designed to solve that task with the resources that we had available at that time. Now, fast forward 10 or 15 years, we're talking about a new class of problems called AI. And it requires both exquisite processor design as well as very complex and exquisite software design sitting on top of it, as well as the systems and infrastructure knowledge, high-performance storage and everything that we're talking about in the solution today. So NVIDIA's superpower is really about that accelerated computing stack: at the bottom you've got hardware, above that you've got systems, above that you have middleware and libraries, and above that you have what we call application SDKs that enable the simplification of this really complex problem to this domain or that domain or that domain, while still allowing you to take advantage of that processing horsepower that we put in that exquisitely designed thing called the GPU. >>Decreasing complexity and increasing speed, two very key themes of the show. Shocking no one, you all wanna do more faster.
Speaking of that, and I'm curious because you both serve a lot of different unique customers, verticals and use cases, is there a specific project that you're allowed to talk about? Or, I mean, you know, if you wanna give us the scoop, that's totally cool too. We're here for the scoop on theCUBE. But is there a specific project or use case that has you personally excited, Anthony? We'll start with that. >>Look, I'm, I've always been a big fan of natural language processing. I don't know why, but to derive intent based on the word choices is very interesting to me. I think what complements that is natural language generation. So now we're having AI programs actually discover and describe what's inside of a package. It wouldn't surprise me that over time we move from doing the typical summary on the economics of the day or what happened in football, and we start moving that towards more of the creative advertising and marketing arts, where you are no longer needed because the AI is gonna spit out the result. I don't think we're gonna get there, but I really love this idea of human language and computational linguistics. >>What a, what a marriage. I agree. Think it's fascinating. What about you, Bob? It's got you >>Pumped. The thing that really excites me is the problem solving, sort of the tip of the spear in problem solving. The stuff that you've never seen before, the stuff that, you know, in a geeky way kind of takes your breath away. And I'm gonna jump or pivot off of what Anthony said. Large language models are really one of those areas that are just, I think they're amazing and they're just kind of surprising everyone with what they can do. Here on the show floor I was looking at a demonstration from a large language model startup, basically, and they were showing that you could ask a question about some obscure news piece that was reported only in a German newspaper. It was about a little shipwreck that happened in a harbor. And I could type in a query to this system and it would immediately know where to find that information, as if it read the article, summarized it for you, and it even could answer questions that you could only answer by looking at pictures in that article. Just amazing stuff that's going on. Just phenomenal >>Stuff. That's a huge accessibility. >>That's right. And I geek out when I see stuff like that. And that's where I feel like all this work that Dell and NVIDIA and many others are putting into this space is really starting to show potential in ways that we wouldn't have dreamed of really five years ago. Just really amazing. And >>We see this in media and entertainment. So in broadcasting, you have a sudden event, someone leaves this planet, or they discover something new, or they get a divorce and they're a major quarterback. You wanna go back somewhere in all of your archives to find that footage. That's a very laborious project. But if you can use AI technology to categorize that and provide the metadata tags so that it's searchable, then we're off to better productions, more interesting content and a much richer viewer experience. >>And a much more dynamic picture of what's really going on. Factoring all of that in, I love that. I mean, David and I are both nerds and I know we've had take-our-breath-away moments, so I appreciate that you just brought that up. Don't worry, you're in good company. In terms of the Geek Squad over here, I think actually maybe this entire show... >>Yes, exactly.
>>I mean, we were talking about how steampunk some of the liquid cooling stuff is, and you know, this is the only place on earth really, or the only show, where you would come and see it at this level and scale, and, and just, yeah, it's, it's, it's very, it's very exciting. How important for the future of innovation in HPC are partnerships like the one that NVIDIA and Dell have? >>You wanna start? >>Sure, I would, I would just, I mean, I'm gonna be bold and brash and arrogant and say they're essential. Yeah, you do not want to try and roll this on your own. This is, even if we just zoomed in to one little piece of the technology, the software stack to do modern accelerated deep learning is incredibly complicated. There can be easily 20 or 30 components that all have to be the right version, with the right buttons pushed, built the right way, assembled the right way, and we've got lots of technologies to help with that. But you do not want to be trying to pull that off on your own. That's just one little piece of the complexity that we talked about. And we really need, as technology providers in this space, we really need to do as much as we do to try to unlock the potential. We have to do a lot to make it usable and capable as well. >>I got a question for Anthony. All >>Right, >>So in your role, and I, and I'm, I'm sort of projecting here, but I think, I think, I think your superpower personally is likely in the realm of being able to connect the dots between technology and the value that that technology holds in a variety of contexts. That's right. Whether it's business or, or whatever, let's say. Okay. Now it's critical to have people like you to connect those dots. Today, in the era of pervasive AI, how important will it be to have AI explain its answer? In other words, should I trust the information the AI is giving me? If I am a decision maker, should I just trust it on face value? Or am I going to want to demand of the AI kind of what you deliver today, which is no, no, no, no, no, no. You need to explain this to me. How did you arrive at that conclusion, right? How important will that be for people to move forward and trust the results? We can all say, oh hey, just trust us. Hey, it's AI, it's great, it's got NVIDIA, you know, NVIDIA acceleration and it's Dell. You can trust us, but come on. So many variables in the background. It's >>An interesting one. And explainability is a big function of AI. People want to know how the black box works, right? Because I don't know, if you have an AI engine that's looking for potential maladies in an X-ray, but it misses it, do you sue the hospital, the doctor or the software company, right? And so that accountability element is huge. I think as we progress and we trust it to be part of our everyday decision making, it's simply a recommendation engine. It isn't actually making all of the decisions. It's supporting us. We still have, after decades of advanced technology, algorithms that have been proven, we can't predict what the market price of any object is gonna be tomorrow. And you know why? You know why? Human beings, we are so unpredictable. How we feel in the moment is radically different. And whereas we can extrapolate for a population, to an individual choice, we can't do that. So humans and computers will not be separated. It's a, it's a joint partnership. But I wanna get back to your point, and I think this is very fundamental to the philosophy of both companies.
Yeah, it's about a community. It's always about the people sharing ideas, getting the best. And anytime you have a center of excellence, an algorithm that works for sales forecasting may actually be really interesting for churn analysis, to make sure the employees or students don't leave the institution. So it's that community of interest that I think is unparalleled at other conferences. This is the place where a lot of that happens. >>I totally agree with that. We felt that on the show. I think that's a beautiful note to close on. Anthony, Bob, thank you so much for being here. I'm sure everyone feels more educated and perhaps more at peace with the chaos. David, thanks for sitting next to me asking the best questions of any host on theCUBE. And thank you all for being a part of our community. Speaking of community, here on theCUBE we're live from Dallas, Texas. It's Supercomputing all week. My name is Savannah Peterson and I'm grateful you're here. >>So I.

Published Date : Nov 16 2022

SUMMARY :

Savannah Peterson and David Nicholson talk with Anthony Dina of Dell Technologies and Bob Crovella of NVIDIA about data and enterprise AI at scale. The conversation covers the difference between structured and unstructured data, why AI is the key to making use of sensor, video, and language data, and the three levels of maturity organizations move through on the way from proof of value to production at scale. The guests describe how Dell infrastructure and NVIDIA's accelerated computing stack fit together, why centers of excellence tame complexity, and how use cases like natural language processing, large language models, and media archives are showing real value, while stressing explainability, governance, and the essential role of partnerships.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
David Nicholson | PERSON | 0.99+
Bob | PERSON | 0.99+
Anthony | PERSON | 0.99+
Bob Crovella | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
Invidia | ORGANIZATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Savannah Peterson | PERSON | 0.99+
Mars | LOCATION | 0.99+
Vidia | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Dave | PERSON | 0.99+
Dallas, Texas | LOCATION | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
15 years | QUANTITY | 0.99+
Dallas, Texas | LOCATION | 0.99+
Navidia | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
first level | QUANTITY | 0.99+
both companies | QUANTITY | 0.98+
Today | DATE | 0.98+
one | QUANTITY | 0.98+
2012 | DATE | 0.98+
today | DATE | 0.98+
billions | QUANTITY | 0.98+
earth | LOCATION | 0.97+
10 | DATE | 0.96+
Anthony Dina | PERSON | 0.96+
five years ago | DATE | 0.96+
30 components | QUANTITY | 0.95+
Navia | ORGANIZATION | 0.95+
day two | QUANTITY | 0.94+
one little piece | QUANTITY | 0.91+
tomorrow | DATE | 0.87+
three levels | QUANTITY | 0.87+
HPC | ORGANIZATION | 0.86+
20 years ago | DATE | 0.83+
one little | QUANTITY | 0.77+
billions of parameters | QUANTITY | 0.75+
a decade | QUANTITY | 0.74+
decades | QUANTITY | 0.68+
German | OTHER | 0.68+
dgx a 100 platform | COMMERCIAL_ITEM | 0.67+
themes | QUANTITY | 0.63+
second | QUANTITY | 0.57+
22 | QUANTITY | 0.48+
Squad | ORGANIZATION | 0.4+
Supercomputing 2022 | ORGANIZATION | 0.36+

Kelly Gaither, University of Texas | SuperComputing 22


 

>>Good afternoon everyone, and thank you so much for joining us. My name is Savannah Peterson, joined by my co-host Paul for the afternoon. Very excited. Oh, Savannah. Hello. I'm, I'm pumped for this. This is our first bit together. Exactly. >>It's gonna be fun. Yes. We have a great guest to kick off with. >>We absolutely do. We're at Supercomputing 2022 today, and very excited to talk to our next guest. We're gonna be talking about data at scale and data that really matters to us. Joining us, Kelly Gaither, thank you so much for being here, and you are with TACC. Tell everyone what TACC is. >>TACC is the Texas Advanced Computing Center at the University of Texas at Austin. And thank you so much for having me here. >>It is wonderful to have you. Your smile's contagious. And one of the themes that's come up a lot with all of our guests, and we just talked about it, is how good it is to be back in person, how good it is to be around our hardware community. TACC, you did some very interesting research during the pandemic. Can you tell us about that? >>I can. I did. So when we realized, sort of mid-March, we realized that, that these were really not normal times and that the pandemic was really gonna touch everyone. I think a lot of us at the center, and me personally, we dropped everything to plug in, and that's what we do. So UT's tagline is "what starts here changes the world," and TACC's tagline is "powering discoveries that change the world." So we're all about impact. But I plugged in with the research group there at UT Austin, Dr. Lauren Meyers, who's an epidemiologist, and we just figured out how to plug in and compute so that we could predict the spread of, of COVID-19. >>And you did that through the use of mobility data, cell phone signals. Tell us more about what exactly you were choreographing. >>Yeah, so that was really interesting. SafeGraph, during the pandemic, made their mobility data available. Typically it was used for marketing purposes, to know who was going into Walmart. >>For advertising. >>Absolutely, yeah. They made all of their mobility data available for free to people who were doing research and plugging in, trying to understand COVID-19. I picked that data up and we used it as a proxy for human behavior. So we knew, we had some idea, we got weekly mobility updates, but it was really mobility all day long, you know, anonymized. I didn't know who they were, by cell phones across the US, by census block group or zip code if we wanted to look at it that way. And we could see how people were moving around. We knew what their neighbor, their home neighborhoods were. We knew how they were traveling or not traveling. We knew where people were congregating, and we could get some idea of, of how people were behaving. Were they really, were they really locking down, or were they moving in their neighborhoods, or were they going outside of their neighborhoods? >>What a, what a fascinating window into our pandemic lives. So now that you were able to do this for this pandemic, as we look forward, what have you learned? How quickly could we forecast? What's the prognosis? >>Yeah, so we, we learned a tremendous amount. I think during the pandemic we were reacting, we were really trying. It was a, it was an interesting time as a scientist, we were reacting to things almost as if the earth was moving underneath us every single day. So it was something new every day. And I've told people since, I haven't, I haven't worked that hard since I was a graduate student.
So it was really daylight to dark, 24/7, for a long period of time, because it was so important. And we knew, we, we knew we were, we were being a part of history and affecting something that was gonna make a difference for a really long time. And, and I think what we've learned is that indeed there is a lot of data being collected that we can use for good. We can really understand, if we get organized and we get set up, we can use this data as a means of perhaps predicting our next pandemic or our next outbreak of whatever it is, almost like using it as a canary in the coal mine. There's a lot in human behavior we can use, given >>All the politicization of, of this last pandemic, knowing what we know now, making us better prepared in theory for the next one. How confident are you that at least in the US we will respond proactively and, and effectively when the next one comes around? >>Yeah, I mean, that's a, that's a great question and, and I certainly understand why you ask. I think in my experience as a scientist, certainly at TACC, the more transparent you are with what you do and the more you explain things. Again, during the pandemic, things were shifting so rapidly, we were reacting and doing the best that we could. And I think one thing we did right was we admitted where we felt uncertain. And that's important. You have to really be transparent to the general public. I, I don't know how well people are gonna react. I think if we have time to prepare, to communicate, and always be really transparent about it, I think those are three factors that go into really increasing people's trust. >>I think you nailed it. And, and especially during times of chaos and disaster, you don't know who to trust or what to believe. And it sounds like, you know, providing a transparent source of truth is, is so critical. How do you protect the sensitive data that you're working with? I know it's a top priority for you and the team. >>It is, it is. And we, we've adopted the medical mantra: do no harm. So we have, we feel a great responsibility there. There's, you know, two things that you have to really keep in mind when you've got sensitive data. One is the physical protection of it. And so that's, that's governed by rules, federal rules, HIPAA, FERPA, whatever, whatever kind of data that you have. So we certainly focus on the physical protection of it, but there's also sort of the ethical protection of it. What, what is the quote? There's lies, damn lies and statistics. >>Yes. Twain. >>Yeah. So you, you really have to be responsible with what you're doing with the data, how you're portraying the results. And again, I think it comes back to transparency. Basically, if people are gonna reproduce what I did, I have to be really transparent with what I did. >>I, yeah, I think that's super important. And one of the themes with, with HPC that we've been talking about a lot too is, you know, do people trust AI? Do they trust all the data that's going into these systems? And I love that you just talked about the storytelling aspect of that, because there is a duty. It's not, you can cut data kind of however you want. I mean, I come from a marketing background and we can massage it to, to do whatever we want. So in addition to being the deputy director at TACC, you are also the DEI officer. And diversity I know is important to you, probably both as an individual, but also in the work that you're doing. Talk to us about that.
>>Yeah, I mean, I, I'm very passionate about diversity, equity and inclusion, and a sense of belongingness. I think that's one of the key aspects of it. Core >>Of community too. >>I got a computer science degree back in the eighties. I was akin to a unicorn in a, in an engineering computer science department. And, but I was really lucky in a couple of respects. I had a, I had a father that was into science that told me I could do anything I, I wanted to set my mind to do. So that was my whole life, was really having that support system. >>Cheers to dad. >>Yeah. Oh yeah. And my mom as well, actually, you know, they were educators. I grew up, you know, in that respect, very, very privileged, but it was still really hard to make it. And I couldn't have told you back in that time why I made it and, and others didn't, why they dropped out. But I made it a mission, probably back, gosh, maybe 10, 15 years ago, that I was really gonna do all that I could to change the needle. And it turns out that there are a number of things that you can do grassroots. There are certainly best practices. There are rules and there are things, you know, best practices to follow to make people feel more included in an organization, to feel like they belong, a shared mission. But there are also clever things that you can do with programming to really engage students, to meet people and students where they are interested and where they are engaged. And I think that's what, that's what we've done over, you know, the course of our programming, maybe since 2016. We have built a lot of programming at TACC that really focuses on that as well, because I'm determined the needle is gonna change before it's all said and done. It just really has to. >>So what, what progress have you made and what goals have you set in this area? >>Yeah, that, that's a great question. So, you know, at first I was a little bit reluctant to set concrete goals, because I really didn't know what we could accomplish. I really wasn't sure what grassroots efforts were gonna be able to do. You're >>So honest, you can tell how transparent you are with the data as well. That's >>Great. Yeah, I mean, really, most of the successful work that I've done, both as a scientist and in the education and outreach space, is really trust relationships. If I break that trust, I'm done. I'm no longer effective. So yeah, I am really transparent about it. But, but what we did was, you know, the first thing we did was we counted, you know, to the extent that we could, what does the current picture look like? Let's be honest about it. Start where we are. Yep. It was not a pretty picture. I mean, we knew that anecdotally it was not gonna be a great picture, but we put it out there and we leaned into it. We said, this is what it is. We, you know, I hesitated to say we're gonna look 10% better next year, because I'm, I'm gonna be honest, I don't always know. We're gonna do our best. >>The things that I think we did really well were that we stopped to take time to talk and find out what people were interested in. It's almost like being present and listening. My grandmother had a saying: you have two ears and one mouth for a reason, just respect the ratio. Oh, I love that. Yeah. And I think it's just been building relationships, building trust, really focusing on making a difference, making it a priority. And I think now what we're doing is we've been successful in pockets of people in the center and we are, we are getting everybody on board.
There's, there's something everyone can do, >>But the problem you're addressing doesn't begin in college. It begins much, much earlier. That's right. And there's been a lot of talk about STEM education, particularly for girls, how they're pushed out of the system early on, also for, for people of color. Do you see meaningful progress being made there now after years of, of lip service? >>I do. I do. But it is, again, grassroots. We do have a, a, a researcher who was a former teacher at the center, Carol Fletcher, who is doing research in CS for All. We know that the workforce, so if you work from the current workforce, or the projected workforce, backwards, we know that digital skills of some kind are gonna be needed. We also know we have a, a, a shortage. There's debate on how large that shortage is, but roughly about 1 million unmet jobs was projected in 2020. It hasn't gotten a lot better. We can work that problem backwards. So what we do there is a little like a scattershot approach. We know that people come in all forms, all shapes, all sizes. They get interested for all different kinds of reasons. We expanded our set of pathways so that we can get them where they can get on to the path, all the way back K through 12, that's Carol's work. Rosie Gomez at the center is doing sort of the undergraduate space. We've got Don Hunter that does the middle school, high school space. So we are working all parts of the problem. I am pretty passionate about what we consider opportunity youth, people who never had the opportunity to go to college. Is there a way that we can skill them and get, get them engaged in some aspect and perhaps get them into this workforce? >>I love that you're starting off so young. So give us an example of one of those programs. What are you talking to kindergartners about when it comes to CS education? >>You know, I mean, gaming. Yes. Right. It's what everybody can wrap their head around. So most kids have had some sort of gaming device. You talk in the context, in the context of something they understand. I'm not gonna talk to them about high performance computing. It, it would go right over their heads. And I think, yeah, you know, I, I'll go back to something that you said, Paul, about, you know, girls were pushed out. I don't know that girls are being pushed out. I think girls aren't interested in things that are being presented, and I think they, I >>Think you're generous. >>Yeah. I mean, I was a young girl and I don't know why I stayed. Well, I do know why I stayed with it, because I had a father that saw something in me and I had people at critical points in my life that saw something in me that I didn't see. But I think if we, if we change the way we teach it, maybe in your words they don't get pushed out, or they, or they won't lose interest. There's, there's some sort of computing in everything we do. Well, >>Absolutely. There's also the bro culture, which begins at a very early >>Age. Yeah, that's a different problem. Yeah. That's just having boys in the classroom. Absolutely. You got >>It. That's a whole nother case. >>That's a whole other thing. >>Last question for you. When we are sitting here, well actually I've got, it's a two-parter, let's put it that way. Is there a tool or something you wish you could flick a magic wand for that would make your job easier? Where, you know, is there, can you identify the, the linchpin in the DEI challenge? Or is it all still prototyping and iterating to figure out the best fit?
>>Yeah, that is a, that's a wonderful question. I can tell you what I get frustrated with is that, that >>Counts >>Is that I, I feel like a lot of people don't fully understand the level of effort and engagement it takes to do something meaningful. The >>Commitment to a program, >>The commitment to a program. Totally agree. There is no one-and-done. No. And in fact, if I do that, I will lose them forever. They'll be, they will, they will be lost in the space forever, rather. The engagement is really sort of time intensive. It's relationship intensive, but there's a lot of follow-up too. And the, the amount of funding that goes into this space really is not, it, it, it's not equal to the amount of time and effort that it really takes. And I think, you know, when you work in this space, you realize that what you gain is, is really more of, it, it really feels good to make a difference in somebody's life, but it's really hard to do on a shoestring budget. So if I could kind of wave a magic wand, yes, I would increase understanding. I would get people to understand that it's all of our responsibility. Yes, everybody is needed to make the difference, and I would increase the funding that goes to the programs. >>I think that's awesome. Kelly, thank you for that. You all heard that. More funding for diversity, equity, and inclusion. Please. Paul, thank you for a fantastic interview. Kelly, hopefully everyone is now inspired to check out TACC, perhaps become a, a Longhorn, hook 'em, and, and come deal with some of the most important data that we have going through our systems and predicting the future of our pandemics. Ladies and gentlemen, thank you for joining us online. We are here in Dallas, Texas at Supercomputing. My name is Savannah Peterson and I look forward to seeing you for our next segment.

Published Date : Nov 16 2022

SUMMARY :

Savannah Peterson and Paul Gillin talk with Kelly Gaither, deputy director and DEI officer at TACC, the Texas Advanced Computing Center at the University of Texas at Austin. Gaither describes how, early in the pandemic, her team used anonymized SafeGraph mobility data as a proxy for human behavior to help predict the spread of COVID-19, and what that experience taught them about transparency, uncertainty, and protecting sensitive data under rules like HIPAA and FERPA. She also discusses TACC's diversity, equity, and inclusion work, from K-12 and undergraduate programs to opportunity youth, and argues that changing the needle takes sustained engagement, trust, and far more funding than the field typically receives.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kelly Gayer | PERSON | 0.99+
Kelly | PERSON | 0.99+
Savannah | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
Carol Fletcher | PERSON | 0.99+
Rosie Gomez | PERSON | 0.99+
2020 | DATE | 0.99+
Paul | PERSON | 0.99+
Lauren Myers | PERSON | 0.99+
Carol | PERSON | 0.99+
Kelly Gaither | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
2016 | DATE | 0.99+
10% | QUANTITY | 0.99+
US | LOCATION | 0.99+
next year | DATE | 0.99+
Dallas, Texas | LOCATION | 0.99+
today | DATE | 0.99+
two errors | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Covid 19 | OTHER | 0.99+
Austin | LOCATION | 0.99+
eighties | DATE | 0.99+
three factors | QUANTITY | 0.99+
both | QUANTITY | 0.99+
TAC | ORGANIZATION | 0.98+
two parter | QUANTITY | 0.98+
one mouth | QUANTITY | 0.98+
earth | LOCATION | 0.98+
UT | ORGANIZATION | 0.98+
mid-March | DATE | 0.97+
pandemic | EVENT | 0.97+
two things | QUANTITY | 0.97+
University of Texas | ORGANIZATION | 0.97+
first bit | QUANTITY | 0.97+
one | QUANTITY | 0.97+
One | QUANTITY | 0.97+
one thing | QUANTITY | 0.96+
Supercomputing | ORGANIZATION | 0.96+
Don Hunter | PERSON | 0.95+
Texas Advanced Computing Center | ORGANIZATION | 0.95+
ATAC | ORGANIZATION | 0.93+
Covid. 19 | OTHER | 0.93+
24 7 | QUANTITY | 0.86+
UT Austin | ORGANIZATION | 0.82+
10, 15 years ago | DATE | 0.81+
Supercomputing 2022 | ORGANIZATION | 0.79+
every single day | QUANTITY | 0.79+
about 1 million unmet jobs | QUANTITY | 0.77+
12 | QUANTITY | 0.74+
SuperComputing | ORGANIZATION | 0.74+
outbreak | EVENT | 0.7+
Dr. | PERSON | 0.56+
DEI | ORGANIZATION | 0.54+
Twain | PERSON | 0.51+

Brian Payne, Dell Technologies and Raghu Nambiar, AMD | SuperComputing 22


 

(upbeat music) >> We're back at SC22 SuperComputing Conference in Dallas. My name's Paul Gillan, my co-host, John Furrier, SiliconANGLE founder. And huge exhibit floor here. So much activity, so much going on in HPC, and much of it around the chips from AMD, which has been on a roll lately. And in partnership with Dell, our guests are Brian Payne, Dell Technologies, VP of Product Management for ISG mid-range technical solutions, and Raghu Nambiar, corporate vice president of data system, data center ecosystem, and application engineering, that's quite a mouthful, at AMD. And gentlemen, welcome. Thank you. >> Thanks for having us. >> This has been an evolving relationship between you two companies, obviously a growing one, and something Dell was part of, the big Genoa rollout, AMD's new chipset last week. Talk about how that relationship has evolved over the last five years. >> Yeah, sure. Well, so it goes back to the advent of the EPYC architecture. So we were there from the beginning, partnering well before the launch five years ago, thinking about, "Hey, how can we come up with a way to solve customer problems? Address workloads in unique ways?" And that was kind of the origin of the relationship. We came out with some really disruptive and capable platforms. And then it continues, it's continued till then, all the way to the launch of last week, where we've introduced four of the most capable platforms we've ever had in the PowerEdge portfolio. >> Yeah, I'm really excited about the partnership with Dell. As Brian said, we have been partnering very closely for the last five years since we introduced the first generation of EPYC. So we collaborate on, you know, system design, validation, performance benchmarks, and more importantly on software optimizations and solutions to offer an out of the box experience to our customers, whether it is HPC or databases, big data analytics or AI. >> You know, you guys have been on theCUBE, you guys are veterans, 2012, 2014 back in the day. So much has changed over the years. Raghu, you were the founding chair of the TPC for AI. We've talked about the different iterations of PowerEdge servers. So much has changed. Why the focus on these workloads now? What's the inflection point that we're seeing here at SuperComputing? It feels like we've been in this, you know, run the ball, get, gain a yard, move the chains, you know, but we feel, I feel like there's a moment where there's going to be an unleashing of innovation around new use cases. Where's the workloads? Why the performance? What are some of those use cases right now that are front and center? >> Yeah, I mean if you look at today, the enterprise ecosystem has become extremely complex, okay? People are running traditional workloads like relational database management systems, also a new generation of workloads with AI and HPC, and actually HPC augmented with some of the AI technologies. So what customers are looking for is, as I said, out of the box experience, where time to value is extremely critical. Unlike in the past, you know, the customers don't have the time and resources to run months-long POCs, okay? So that's one area that we are focusing on, you know, working closely with Dell to give out of the box experience.
Again, you know, the enterprise applicate ecosystem is, you know, really becoming complex and the, you know, as you mentioned, some of the industry standard benchmark is designed to give the fair comparison of performance, and price performance for the, our end customers. And you know, Brian and my team has been working closely to demonstrate our joint capabilities in the AI space with, in a set of TPCx-AI benchmark cards last week it was the major highlight of our launch last week. >> Brian, you got showing the demo in the booth at Dell here. Not demo, the product, it's available. What are you seeing for your use cases that customers are kind of rallying around now, and what are they doubling down on. >> Yeah, you know, I, so Raghu I think teed it up well. The really data is the currency of business and all organizations today. And that's what's pushing people to figure out, hey, both traditional workloads as well as new workloads. So we've got in the traditional workload space, you still have ERP systems like SAP, et cetera, and we've announced world records there, a hundred plus percent improvements in our single socket system, 70% and dual. We actually posted a 40% advantage over the best Genoa result just this week. So, I mean, we're excited about that in the traditional space. But what's exciting, like why are we here? Why, why are people thinking about HPC and AI? It's about how do we make use of that data, that data being the currency and how do we push in that space? So Raghu mentioned the TPC AI benchmark. We launched, or we announced in collaboration you talk about how do we work together, nine world records in that space. In one case it's a 3x improvement over prior generations. So the workloads that people care about is like how can I process this data more effectively? How can I store it and secure it more effectively? And ultimately, how do I make decisions about where we're going, whether it's a scientific breakthrough, or a commercial application. That's what's really driving the use cases and the demand from our customers today. >> I think one of the interesting trends we've seen over the last couple of years is a resurgence in interest in task specific hardware around AI. In fact venture capital companies invested a $1.8 billion last year in AI hardware startups. I wonder, and these companies are not doing CPUs necessarily, or GPUs, they're doing accelerators, FPGAs, ASICs. But you have to be looking at that activity and what these companies are doing. What are you taking away from that? How does that affect your own product development plans? Both on the chip side and on the system side? >> I think the future of computing is going to be heterogeneous. Okay. I mean a CPU solving certain type of problems like general purpose computing databases big data analytics, GPU solving, you know, problems in AI and visualization and DPUs and FPGA's accelerators solving you know, offloading, you know, some of the tasks from the CPU and providing realtime performance. And of course, you know, the, the software optimizes are going to be critical to stitch everything together, whether it is HPC or AI or other workloads. You know, again, as I said, heterogeneous computing is going to be the future. >> And, and for us as a platform provider, the heterogeneous, you know, solutions mean we have to design systems that are capable of supporting that. 
So if as you think about the compute power, whether it's a GPU or a CPU, continuing to push the envelope in terms of, you know, to do the computations, power consumption, things like that. How do we design a system that can be, you know, incredibly efficient, and also be able to support the scaling, you know, to solve those complex problems. So that gets into challenges around, you know, both liquid cooling, but also making the most out of air cooling. And so we're seeing not only are we driving up, you know, the capability of these systems, we're actually improving the energy efficiency. And those, the most recent systems that we launched around the CPU, which is still kind of at the heart of everything today, you know, are seeing 50% improvement, you know, gen to gen in terms of performance per watt capabilities. So it's, it's about like how do we package these systems in effective ways and make sure that our customers can get, you know, the advertised benefits, so to speak, of the new chip technologies. >>Yeah. To add to that, you know, performance, scalability, total cost of ownership, these are the key considerations, but now energy efficiency has become more important than ever, you know, our commitment to sustainability. This is one of the things that we demonstrated last week: with our new generation of EPYC Genoa based systems, we can do a five-to-one consolidation, significantly reducing the energy requirement. >>Power's huge, costs are going up. It's a global issue. >>Raghu: Yeah, it is. >>How do you squeeze more performance too out of it at the same time, I mean, smaller, faster, cheaper. Paul, you wrote a story about, you know, this weekend about hardware and AI making hardware so much more important. You got more power requirements, you got the sustainability, but you need more horsepower, more compute. What's different in the architecture if you guys could share, like today versus years ago, what's different as these generations step-function value increases? >>So one of the major drivers from the processor perspective is if you look at the latest generation of processors, the five nanometer technology, bringing efficiency and density. So we are able to pack 96 processor cores, you know, and in a two socket system, we are talking about 192 processor cores. And of course, you know, other enhancements like IPC uplift, bringing DDR5 to the market, PCIe Gen 5 to the market, offering overall, you know, a performance uplift of more than 2.5x for certain workloads. And of course, you know, significantly reducing the power footprint. >>Also, I was just going to cut, I mean, architecturally speaking, you know, then how do we take the 96 cores and surround it, deliver a balanced ecosystem to make sure that we can get the, the IO out of the system, and make sure we've got the right data storage. So I mean, you'll see 60% improvements in total storage in the system. I think in 2012 we were talking about 10 gig Ethernet. Well, you know, now we're on to 100 and 400 on the forefront. So it's like how do we keep up with this increased power, by having computing capabilities, both offload and core computing, and make sure we've got a system that can deliver the desired (indistinct). >>So the little things like the bus, the PCI cards, the NICs, the connectors have to be rethought through. Is that what you're getting at? >>Yeah, absolutely. >>Paul: And the GPUs, which are huge power consumers. >>Yeah, absolutely.
So I mean, cooling, we introduce, and we call it smart cooling is a part of our latest generation of servers. I mean, the thermal design inside of a server is a is a complex, you know, complex system, right? And doing that efficiently because of course fans consume power. So I mean, yeah, those are the kind of considerations that we have to put through to make sure that you're not either throttling performance because you don't have you know, keeping the chips at the right temperature. And, and you know, ultimately when you do that, you're hurting the productivity of the investment. So I mean, it's, it's our responsibility to put our thoughts and deliver those systems that are (indistinct) >> You mention data too, if you bring in the data, one of the big discussions going into the big Amazon show coming up, re:Invent is egress costs. Right, So now you've got compute and how you design data latency you know, processing. It's not just contained in a machine. You got to think about outside that machine talking to other machines. Is there an intelligent (chuckles) network developing? I mean, what's the future look like? >> Well, I mean, this is a, is an area that, that's, you know, it's fun and, you know, Dell's in a unique position to work on this problem, right? We have 70% of the mission housed, 70% of the mission critical data that exists in the world. How do we bring that closer to compute? How do we deliver system level solutions? So server compute, so recently we announced innovations around NVMe over Fabrics. So now you've got the NVMe technology and the SAN. How do we connect that more efficiently across the servers? Those are the kinds, and then guide our customers to make use of that. Those are the kinds of challenges that we're trying to unlock the value of the data by making sure we're (indistinct). >> There are a lot of lessons learned from, you know, classic HPC and some of the, you know big data analytics. Like, you know, Hadoops of the world, you know, you know distributor processing for crunching a large amount of amount of data. >> With the growth of the cloud, you see, you know, some pundits saying that data centers will become obsolete in five years, and everything's going to move to the cloud. Obviously data center market that's still growing, and is projected to continue to grow. But what's the argument for captive hardware, for owning a data center these days when the cloud offers such convenience and allegedly cost benefit? >> I would say the reality is that we're, and I think the industry at large has acknowledged this, that we're living in a multicloud world and multicloud methods are going to be necessary to you know, to solve problems and compete. And so, I mean, you know, in some cases, whether it's security or latency, you know, there's a push to have things in your own data center. And then of course growth at the edge, right? I mean, that's, that's really turning, you know, things on their head, if you will, getting data closer to where it's being generated. And so I would say we're going to live in this edge cloud, you know, and core data center environment with multi, you know, different cloud providers providing solutions and services where it makes sense, and it's incumbent on us to figure out how do we stitch together that data platform, that data layer, and help customers, you know, synthesize this data to, to generate, you know, the results they need. 
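Raghu's five-to-one consolidation figure above is a vendor claim for specific configurations, but the shape of the math behind such claims is simple. The wattages in this sketch are assumed numbers chosen purely for illustration, not AMD or Dell measurements.

```python
# Illustrative server-consolidation energy math with made-up wall-power figures.
old_nodes, old_watts = 5, 500      # assumed legacy 2-socket servers
new_nodes, new_watts = 1, 800      # assumed current-generation server

hours_per_year = 24 * 365
old_kwh = old_nodes * old_watts * hours_per_year / 1000
new_kwh = new_nodes * new_watts * hours_per_year / 1000

print(f"legacy fleet: {old_kwh:,.0f} kWh/yr")
print(f"consolidated: {new_kwh:,.0f} kWh/yr "
      f"({1 - new_kwh / old_kwh:.0%} less energy)")
```

Even though the newer node draws more power per box, retiring several older servers is where the energy, rack space, and licensing savings come from.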
>> You know, one of the things I want to get into on the cloud you mentioned that Paul, is that we see the rise of graph databases. And so is that on the radar for the AI? Because a lot of more graph data is being brought in, the database market's incredibly robust. It's one of the key areas that people want performance out of. And as cloud native becomes the modern application development, a lot more infrastructure as code's happening, which means that the internet and the networks and the process should be programmable. So graph database has been one of those things. Have you guys done any work there? What's some data there you can share on that? >> Yeah, actually, you know, we have worked closely with a company called TigerGraph, there in the graph database space. And we have done a couple of case studies, one on the healthcare side, and the other one on the financial side for fraud detection. Yeah, I think they have a, this is an emerging area, and we are able to demonstrate industry leading performance for graph databases. Very excited about it. >> Yeah, it's interesting. It brings up the vertical versus horizontal applications. Where is the AI HPC kind of shining? Is it like horizontal and vertical solutions or what's, what's your vision there. >> Yeah, well, I mean, so this is a case where I'm also a user. So I own our analytics platform internally. We actually, we have a chat box for our product development organization to figure out, hey, what trends are going on with the systems that we sell, whether it's how they're being consumed or what we've sold. And we actually use graph database technology in order to power that chat box. So I'm actually in a position where I'm like, I want to get these new systems into our environment so we can deliver. >> Paul: Graphs under underlie most machine learning models. >> Yeah, Yeah. >> So we could talk about, so much to talk about in this space, so little time. And unfortunately we're out of that. So fascinating discussion. Brian Payne, Dell Technologies, Raghu Nambiar, AMD. Congratulations on the successful launch of your new chip set and the growth of, in your relationship over these past years. Thanks so much for being with us here on theCUBE. >> Super. >> Thank you much. >> It's great to be back. >> We'll be right back from SuperComputing 22 in Dallas. (upbeat music)
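The fraud detection work John and Raghu touch on above boils down to multi-hop traversals over a graph of connected entities. The toy below uses the open source networkx package rather than TigerGraph's GSQL, and the accounts and edges are invented, so read it only as an illustration of the access pattern that graph engines accelerate.

```python
# Toy multi-hop traversal: flag everything within two hops of a suspect account.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct_A", "acct_B"),    # shared payment
    ("acct_B", "device_1"),  # shared device fingerprint
    ("device_1", "acct_C"),
    ("acct_C", "acct_D"),
])

flagged = "acct_A"
nearby = nx.single_source_shortest_path_length(G, flagged, cutoff=2)
suspects = [node for node, hops in nearby.items() if 0 < hops <= 2]
print(suspects)  # ['acct_B', 'device_1'] -> candidates for closer review
```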

Published Date : Nov 16 2022


Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22


 

(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and enterprise customers. And not everybody wants to be in the top 500, what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now kind of converged toward Ethernet. I mean, there are still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see also the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, so it is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, we announced this in August. This is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
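Pete never gets to unpack that factor-of-six remark on camera, so here is a rough, hedged sketch of how doubling a switch ASIC's bandwidth compounds in a fabric. It assumes a non-blocking two-tier leaf-spine design and 400G ports; the arithmetic is illustrative only and is not Broadcom's own math.

```python
# Back-of-the-envelope sizing for a non-blocking two-tier leaf/spine fabric
# built from a fixed-radix switch ASIC. Illustrative only; real designs add
# oversubscription, rail-optimized layouts, and so on.

def two_tier(radix: int) -> tuple[int, int, float]:
    """Return (max_hosts, switch_count, hosts_per_switch) for switches that
    each expose `radix` ports of the same speed."""
    hosts_per_leaf = radix // 2        # half of each leaf's ports face servers
    spines = radix // 2                # each leaf needs one uplink per spine
    max_leaves = radix                 # every spine port feeds a distinct leaf
    max_hosts = hosts_per_leaf * max_leaves
    switches = max_leaves + spines
    return max_hosts, switches, max_hosts / switches

# 25.6 Tb/s split into 400G ports gives a 64-port switch; 51.2 Tb/s gives 128.
for ports in (64, 128):
    hosts, switches, ratio = two_tier(ports)
    print(f"{ports:>3}-port ASIC: {hosts:>5} hosts max, {switches:>3} switches, "
          f"{ratio:.1f} hosts per switch")
```

Doubling the radix roughly quadruples the endpoints a two-tier fabric can reach while only doubling the switch count, and for a fixed cluster size it can remove a whole tier of switches, optics and hops, which is where multiples well beyond two can come from.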
>> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T five, with some like Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the NICs that are going in there, what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs in the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But say state of the art right now, what we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen, where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and saying, hey, we want to see flexibility and choice and, hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it easy and simple to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been InfiniBand but now I want to go Ethernet, there's going to be some learning curves there. And so what we want to do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T six comes out? Or do you just say what would be cool is we'll put this in the T six? >> No, we've had a very long partnership both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system side, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see, again, in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly.
Look, you start talking about the largest supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be two million CPUs, 2 million CPU cores, an exaflop of performance. What are the outward limits of T five in switches, building out a fabric, what does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale out cluster before you need another switch? Or what does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128, 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T five, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what's the form factor look like for where that T five sits? Is there just one in a chassis or you have.. What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza boxes. And you can have composable systems where, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interfaced to. These days what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the line card. >> David: Okay. >> So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a 2RU, with 64 OSFP ports. And often each of those OSFP, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU, you've got 51.2 T. We do see some cases where customers would like to have different optics and they'll actually deploy 4RU, just so that way they have the faceplate density. So they can plug in 128, say, QSFP 112. But yeah, it really depends on which optics, if you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially, HPC workloads, you have the MPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that MPI, message passing interface, runs efficiently on Ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on Ethernet.
If you look at MPI officially, it was built to, hey, it was designed to run on InfiniBand, but now what you see with Broadcom, with the great work they're doing, now we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML, where do you think we're going to be next year or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So I mean, what I see, I mean, Ethernet, what we've seen is that as far as on, starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology. >> David: More who? (all laughing) >> So I see that, that trajectory is going to continue as far as the switches doubling in bandwidth, I think that they're evolving protocols, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of the applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be doubling of bandwidth over time, evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-packaged optics. So right now, this chip is, all the balls in the back here, there's electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically based, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, there's radixes increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet, I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC which sort of just trails slightly behind super computing as we define it, becomes more pervasive AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is the drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your assist admins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than in InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing is bandwidth, right? So when you look at, train a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that. 
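Armando's point about pipe size and training time is easy to put rough numbers on. The sketch below is back-of-the-envelope arithmetic only: the 10 TB dataset is a made-up size, and it assumes a single flow at full line rate with no protocol or storage overhead, so real transfers would be slower.

```python
# Time to stream a (hypothetical) training dataset at different Ethernet rates.
dataset_tb = 10
dataset_bits = dataset_tb * 1e12 * 8

for gbps in (100, 200, 400, 800):
    seconds = dataset_bits / (gbps * 1e9)
    print(f"{gbps:>3} GbE: {seconds / 60:6.1f} minutes per full pass")
```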
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, stakes, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a tried end product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is being kind of like the bigger and batter missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you like your engineers? You get to name it. >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So just it's not the Aquaman tried. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have both you. Thank you for teaching us about the future of Ethernet and HCP. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
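Earlier in the segment Armando talks about making MPI run efficiently over Ethernet. For a concrete feel for what actually gets measured in that kind of validation, here is a minimal timing loop using mpi4py. It is a generic microbenchmark sketch, not Dell's validated design or Broadcom's test suite, and whether the traffic rides InfiniBand or RoCE is decided by the MPI library's fabric configuration, not by this code.

```python
# Minimal MPI allreduce timing loop. Run with, for example:
#   mpirun -np 8 python allreduce_bench.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.ones(1_000_000, dtype=np.float32)   # ~4 MB payload per rank
out = np.empty_like(buf)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(100):
    comm.Allreduce(buf, out, op=MPI.SUM)     # the collective AI/HPC jobs lean on
comm.Barrier()
elapsed = MPI.Wtime() - t0

if rank == 0:
    print(f"100 allreduces of {buf.nbytes / 1e6:.1f} MB took {elapsed:.3f} s")
```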

Published Date : Nov 16 2022


Andrea Booker, Dell Technologies | SuperComputing 22


 

>> Hello everyone and welcome back to theCUBE, where we're live from Dallas, Texas here at Super computing 2022. I am joined by my cohost David Nicholson. Thank you so much for being here with me and putting up with my trashy jokes all day. >>David: Thanks for having me. >>Yeah. Yes, we are going to be talking about AI this morning and I'm very excited that our guest has set the stage for us here quite well. Please welcome Andrea Booker. Andrea, thank you so much for being here with us. >>Absolutely. Really excited to be here. >>Savannah: How's your show going so far? >>It's been really cool. I think being able to actually see people in person but also be able to see the latest technologies and and have the live dialogue that connects us in a different way than we have been able to virtually. >>Savannah: Oh yeah. No, it's all, it's all about that human connection and that is driving towards our first question. So as we were just chit chatting, you said you are excited about making AI real and humanizing that. >>Andrea: Absolutely. >>What does that mean to you? >>So I think when it comes down to artificial intelligence it means so many different things to different people. >>Savannah: Absolutely. >>I was talking to my father the other day, for context, he's in his late seventies, right. And I'm like, oh, artificial intelligence, this or that, and he is like, machines taking over the world. Right. >>Savannah: Very much the dark side. >>A little bit Terminator. And I'm like, well, not so much. So that was a fun discussion. And then you flip it to the other side and I'm talking to my 11 year old daughter and she's like, Alexa, make sure you know my song preferences. Right. And that's the other very real way in which it's kind of impacting our lives. >>Savannah: Yeah. >>Right. There's so many different use cases that I don't think everyone understands how that resonates. Right. It's the simple things from, you know, recommendation engines when you're on Amazon and it suggests just a little bit more. >>Oh yeah. >>I'm a little bit to you that one, right. To stuff that's more impactful in regards to getting faster diagnoses from your doctors. Right. Such peace of mind being able to actually hear that answer faster and know how to go tackle something. >>Savannah: Great point, yeah. >>You know, and, and you know, what's even more interesting is from a business perspective, you know, the projections are over the next five years about 90% of customers are going to use AI applications in in some fashion, right. >>Savannah: Wow. >>And the reason why that's interesting is because if you look at it today, only about 15% of of them are doing so. Right. So we're early. So when we're talking growth and the opportunity, it's, it's amazing. >>Yeah. I can, I can imagine. So when you're talking to customers, what are, are they excited? Are they nervous? Are you educating them on how to apply Dell technology to advance their AI? Where are they off at because we're so early? >>Yeah well, I think they're figuring out what it means to them, right? >>Yeah. Because there's so many different customer applications of it, right? You have those in which, you know, are on on the highest end in which that our new XE products are targeting that when they think of it. You know, I I, I like to break it down in this fashion in which artificial intelligence can actually save human lives, right? And this is those extreme workloads that I'm talking about.
We actually can develop a Covid vaccine faster, right. Pandemic tracking, you know with global warming that's going on. And we have these extreme weather events with hurricanes and tsunamis and all these things to be able to get advanced notice to people to evacuate, to move. I mean, that's a pretty profound thing. And it is, you know so it could be used in that way to save lives, right? >> Absolutely. >> Which is it's the natural outgrowth of the speeds and feeds discussions that we might have internally. It's, it's like, oh, oh, speed doubled. Okay. Didn't it double last year? Yeah. Doubled last year too. So it's four x now. What does that mean to your point? >> Andrea: Yeah, yeah. >> Savannah: Yeah. >> Being able to deliver faster insight insights that are meaningful within a timeframe when otherwise they wouldn't be meaningful. >> Andrea: Yeah. >> If I tell you, within a two month window whether it's going to rain this weekend, that doesn't help you. In hindsight, we did the calculation and we figured out it's going to be 40 degrees at night last Thursday >> Knowing it was going to completely freeze here in Dallas to our definition in Texas but we prepare better to back to bring clothes. >> We were talking to NASA about that yesterday too. I mean, I think it's, it's must be fascinating for you to see your technology deployed in so many of these different use cases as well. >> Andrea: Absolutely, absolutely. >> It's got to be a part of one of the more >> Andrea: Not all of them are extreme, right? >> Savannah: Yeah. >> There's also examples of, you know natural language processing and what it does for us you know, the fact that it can break down communication barriers because we're global, right? We're all in a global environment. So if you think about conference calls in which we can actually clearly understand each other and what the intent is, and the messaging brings us closer in different ways as well. Which, which is huge, right? You don't want things lost in translation, right? So it, it helps on so many fronts. >> You're familiar with the touring test idea of, of, you know whether or not, you know, the test is if you can't discern within a certain number of questions that you're interacting with an AI versus a real human, then it passes the touring test. I think there should be a natural language processing test where basically I say, fine >> Andrea: You see if people was mad or not. >> You tell me, you tell me. >> I love this idea, David. >> You know? >> Yeah. This is great. >> Okay. AI lady, >> You tell me what I meant. >> Yeah, am I actually okay? >> How far from, that's silly example but how far do you think we are from that? I mean, what, what do you seeing out there in terms of things where you're kind of like, whoa, they did this with technology I'm responsible for, that was impressive. Or have you heard of things that are on the horizon that, you know, again, you, you know they're the big, they're the big issues. >> Yeah. >> But any, anything kind of interesting and little >> I think we're seeing it perfected and tweaked, right? >> Yeah. >> You know, I think going back to my daughter it goes from her screaming at Alexa 'cause she did hear her right the first time to now, oh she understands and modifies, right? Because we're constantly tweaking that technology to have a better experience with it. And it's a continuum, right? The voice to text capabilities, right. You know, I I'd say early on it got most of those words, right Right now it's, it's getting pretty dialed in. 
Right. >> Savannah: That's a great example. >> So, you know, little things, little things. >> Yeah. I think I, I love the, the this thought of your daughter as the example of training AI. What, what sort of, you get to look into the future quite a bit, I'm sure with your role. >> Andrea: Absolutely. >> Where, what is she going to be controlling next? >> The world. >> The world. >> No, I mean if you think about it just from a generational front, you know technology when I was her age versus what she's experiencing, she lives and breathes it. I mean, that's the generational change. So as these are coming out, you have new folks growing with it that it's so natural that they are so open to adopting it in their common everyday behaviors. Right? >> Savannah: Yeah. >> But they'd they never, over time they learn, oh well how it got there is 'cause of everything we're doing now, right. >> Savannah: Yeah. >> You know, one, one fun example, you know as my dad was like machines are taking over the world is not, not quite right. Even if when you look at manufacturing, there's a difference in using AI to go build a digital simulation of a factory to be able to optimize it and design it right before you're laying the foundation that saves cost, time and money. That's not taking people's jobs in that extreme event. >> Right. >> It's really optimizing for faster outcomes and, and and helping our customers get there which is better for everyone. >> Savannah: Yeah and safer too. I mean, using the factory example, >> Totally safer. >> You're able to model out what a workplace injury might be or what could happen. Or even the ergonomics of how people are using. >> Andrea: Yeah, should it be higher so they don't have to bend over? Right. >> Exactly. >> There's so many fantastic positive ways. >> Yeah so, so for your dad, you know, I mean it's going to help us, it's going to make, it's going to take away when I. Well I'm curious what you think, David when I think about AI, I think it's going to take out a lot of the boring things in life that, that we don't like >> Andrea: Absolutely. Doing. The monotony and the repetitive and let us optimize our creative selves maybe. >> However, some of the boring things are people's jobs. So, so it is, it it it will, it will it will push a transition in our economy in the global economy, in my opinion. That would be painful for some, for some period of time. But overall beneficial, >> Savannah: Yes. But definitely as you know, definitely there will be there will be people who will be disrupted and, you know. >> Savannah: Tech's always kind of done that. >> We No, but we need, I, I think we need to make sure that the digital divide doesn't get so wide that you know that, that people might not be negative, negatively affected. And, but, but I know that like organizations like Dell I believe what you actually see is, >> Andrea: Yeah. >> No, it's, it's elevating people. It's actually taking away >> Andrea: Easier. >> Yeah. It's, it's, it's allowing people to spend their focus on things that are higher level, more interesting tasks. >> Absolutely. >> David: So a net, A net good. But definitely some people disrupted. >> Yes. >> I feel, I feel disrupted. >> I was going to say, are, are we speaking for a friend or for ourselves here today on stage? >> I'm tired of software updates. So maybe if you could, if you could just standardize. So AI and ML. >> Andrea: Yeah. >> People talk about machine learning and, and, and and artificial intelligence. How would you differentiate the two? 
>>Savannah: Good question. >>It it, it's, it's just the different applications and the different workloads of it, right? Because you actually have artificial intelligence, you have machine learning in which it's learning from itself. And then you have like the deep learning in which it's diving deeper in its execution and, and modeling. And it really depends on the workload applications, as well as how large the data set is that's feeding into it for those applications. Right. And that really leads into the, we have to make sure we have the versatility in our offerings to be able to meet every dimension of that. Right. You know our XE products that we announced are really targeted for that, those extreme AI HPC workloads. Right. Versus we also have our entire portfolio of products where we make sure we have GPU diversity throughout for the other applications that may be more edge centric or telco centric, right? Because AI isn't just these extreme situations, it's also at the edge. It's in the cloud, it's in the data center, right? So we want to make sure we have, you know, versatility in our offerings and we're really meeting customers where they're at in regards to the implementation and and the AI workloads that they have. >>Savannah: Let's dig in a little bit there. So what should customers expect with the next generation acceleration trends that Dell's addressing in your team? You had three exciting product announcements here >>Andrea: We did, we did. >>Which is very exciting. So you can talk about that a little bit and give us a little peek. >>Sure. So, you know, for, for the most extreme applications we have the XE portfolio that we built upon, right? We already had the XE8545 and we've expanded that out in a couple ways. The first of which is our very first XE9680, an eight-way offering in which we have Nvidia's H100 as well as A100. 'Cause we want choice, right? A choice between performance, power, what really are your needs? >>Savannah: Is that the first time you've combined? >>Andrea: It's the first time we've had an eight way offering. >>Yeah. >>Andrea: But we did so mindful that the technology is emerging so much from a thermal perspective as well as a price and and other influencers that we wanted that choice baked into our next generation of product as we entered the space. >>Savannah: Yeah, yeah. >>The other two products we have were both in the four way SXM and OAM implementation and we really focus on diversifying and not only from vendor partnerships, right. The XE9640 is based off Intel's Data Center GPU Max. We have the XE8640 that is going to be on Nvidia's NVLink, their latest H100. But the key differentiator is we have air cooled and we have liquid cooled, right? So depending on where you are from that data center journey, I mean, I think one of the common themes you've heard is thermals are going up, performance is going up, TDPs are going up, power, right? >>Savannah: Yeah. >>So how do we kind of meet in the middle to be able to accommodate for that? >>Savannah: I think it's incredible how many different types of customers you're able to accommodate. I mean, it's really impressive. I feel lucky we've gotten to see these products you're describing. They're here on the show floor. There's millions of dollars of hardware literally sitting in your booth. >>Andrea: Oh yes. >>Which is casual only >>Pies for you. Yeah. >>Yeah.
We were, we were chatting over there yesterday and, and oh, which, which, you know which one of these is more expensive? And the response was, they're both expensive. It was like, okay perfect >> But assume the big one is more. >> David: You mentioned, you mentioned thermals. One of the things I've been fascinated by walking around is all of the different liquid cooling solutions. >> Andrea: Yeah. >> And it's almost hysterical. You look, you look inside, it looks like something from it's like, what is, what is this a radiator system for a 19th century building? >> Savannah: Super industrial? >> Because it looks like Yeah, yeah, exactly. Exactly, exactly. It's exactly the way to describe it. But just the idea that you're pumping all of this liquid over this, over this very, very valuable circuitry. A lot of the pitches have to do with, you know this is how we prevent disasters from happening based on the cooling methods. >> Savannah: Quite literally >> How, I mean, you look at the power requirements of a single rack in a data center, and it's staggering. We've talked about this a lot. >> Savannah: Yeah. >> People who aren't kind of EV you know electric vehicle nerds don't appreciate just how much power 90 kilowatts of power is for an individual rack and how much heat that can generate. >> Andrea: Absolutely. >> So Dell's, Dell's view on this is air cooled water cooled figure it out fit for for function. >> Andrea: Optionality, optionality, right? Because our customers are a complete diverse set, right? You have those in which they're in a data center 10 to 15 kilowatt racks, right? You're not going to plum a liquid cool power hungry or air power hungry thing in there, right? You might get one of these systems in, in that kind of rack you know, architecture, but then you have the middle ground the 50 to 60 is a little bit of choice. And then the super extreme, that's where liquid cooling makes sense to really get optimized and have the best density and, and the most servers in that solution. So that's why it really depends, and that's why we're taking that approach of diversity, of not only vendors and, and choice but also implementation and ways to be able to address that. >> So I think, again, again, I'm, you know electric vehicle nerd. >> Yeah. >> It's hysterical when you, when you mention a 15 kilowatt rack at kind of flippantly, people don't realize that's way more power than the average house is consuming. >> Andrea: Yeah, yeah >> So it's like your entire house is likely more like five kilowatts on a given day, you know, air conditioning. >> Andrea: Maybe you have still have solar panel. >> In Austin, I'm sorry >> California, Austin >> But, but, but yeah, it's, it's staggering amounts of power staggering amounts of heat. There are very real problems that you guys are are solving for to drive all of these top line value >> Andrea: Yeah. >> Propositions. It's super interesting. >> Savannah: It is super interesting. All right, Andrea, last question. >> Yes. Yes. >> Dell has been lucky to have you for the last decade. What is the most exciting part about you for the next decade of your Dell career given the exciting stuff that you get to work on. >> I think, you know, really working on what's coming our way and working with my team on that is is just amazing. You know, I can't say it enough from a Dell perspective I have the best team. I work with the most, the smartest people which creates such a fun environment, right? 
So then when we're looking at all this optionality and the different technologies and, you know, the partners we work with, it's that coming together and figuring out what's the best solution, and then bringing our customers along that journey. That kind of fun dynamic means that over the next 10 years, I think you're going to see fantastic things.
>> David: So before we close, I have to say that's awesome, because this event is also a recruiting event, with some of these really, really smart students surrounding us. There were some sirens going off. They're having competitions back here.
>> Savannah: Yeah, yeah, yeah.
>> So, so when they hear that...
>> Andrea: Where you want to be.
>> David: That's exactly right. That's exactly right.
>> Savannah: Well played.
>> David: That's exactly right.
>> Savannah: Well played.
>> Have fun. Come on over.
>> Well, you've certainly proven that to us. Andrea, thank you so much for being with us. This was such a treat. David Nicholson, thank you for being here with me, and thank you for tuning in to theCUBE live from Dallas, Texas. We are all things HPC and supercomputing this week. My name's Savannah Peterson, and we'll see you soon.
>> Andrea: Awesome.
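For a rough sense of scale on the rack-versus-house power comparison in the conversation above, here is a minimal back-of-the-envelope sketch. It assumes the roughly 90 kilowatt rack and roughly 5 kilowatt household figures the speakers quote; the script and its variable names are illustrative only, not something from the interview.

```python
# Back-of-the-envelope comparison of the power figures quoted in the interview.
# Assumptions: ~90 kW for a dense GPU rack, ~5 kW average draw for a household.

rack_kw = 90.0    # quoted draw of a single high-density rack
house_kw = 5.0    # rough average household draw quoted by the hosts

ratio = rack_kw / house_kw        # how many "average houses" one rack represents
rack_kwh_per_day = rack_kw * 24   # energy the rack would use running flat out for a day

print(f"A {rack_kw:.0f} kW rack draws about {ratio:.0f}x a ~{house_kw:.0f} kW home.")
print(f"Over 24 hours that is roughly {rack_kwh_per_day:,.0f} kWh, "
      "nearly all of which becomes heat the cooling system has to remove.")
```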

Published Date : Nov 16 2022

Dave Jent, Indiana University and Aaron Neal, Indiana University | SuperComputing 22


 

(upbeat music)
>> Welcome back. We're here at Supercomputing 22 in Dallas. My name's Paul Gill, I'm your host. With me, Dave Nicholson, my co-host. And one thing that struck me about this conference, arriving here, was the number of universities that are exhibiting. I mean, big, big exhibits from universities. I've never seen that at a conference before. And one of those universities is Indiana University. Our two guests, Dave Jent, who's the AVP of Networks at Indiana University, and Aaron Neal, Deputy CIO at Indiana University. Welcome, thanks for joining us.
>> Thank you for having us.
>> Thank you.
>> I've always thought that the CIO job at a university has got to be the toughest CIO job there is, because you're managing this sprawling network, and people are doing all kinds of different things on it. You've got to secure it. You've got to make it performant. And it just seems to be a big challenge. Talk about the network at Indiana University and what you have done, particularly since the pandemic: how that has affected the architecture of your network, and what you do to maintain the levels of performance and security that you need.
>> On the network side, one of the things we've done is keep in close contact with what the incoming students are looking for. It's a different environment than it was 10 years ago, when a student would come with maybe a phone, maybe one laptop. Today they're coming with multiple phones, multiple laptops, gaming devices. And the expectation that they have, to come on a campus and plug all that stuff in, causes lots of problems for us. Managing just the security aspect of it, the capacity, the IP space required to manage six, seven devices per student when you have 35,000 students on campus has always been a challenge. And keeping ahead of that, knowing what students are going to come in with, has been interesting. During the pandemic the campus was closed for a bit of time. What we found was our biggest challenge was keeping up with the number of people who wanted to VPN to campus. We had to buy additional VPN licenses so they could do their work and authenticate to the network. We doubled, maybe even tripled our VPN license count. And that has settled down now that we're back on campus. But again, they came back with a vengeance: more gaming devices, more things to be connected, and into an environment that was a couple of years old, that we hadn't done much with. We had gone through a pretty good size network deployment of new hardware to try to get ready for them. And it's worked well, but it's always challenging to keep up with students.
>> Aaron, I want to ask you about security, because that really is one of your key areas of focus. And you're collaborating with counties, local municipalities, as well as other educational institutions. How's your security strategy evolving in light of some of the vulnerabilities of VPNs that became obvious during the pandemic, and this kind of profusion of new devices that Dave was talking about?
>> Yeah, so one of the things that we did several years ago was establish what we call OmniSOC, which is a shared security operations center in collaboration with other institutions as well as research centers across the United States and in Indiana.
And really what that is, is we took the lessons that we've learned and the capabilities that we've had within the institution, and looked to partner with those key institutions to bring that data in-house and utilize our staff such that we can look for security threats and share that information across the other institutions, so that we can give each of those areas a heads up and work with those institutions to address any kind of vulnerabilities that might be out there. One of the other things that you mentioned is we're partnering with Purdue and the Indiana Office of Technology on a grant to actually work with municipalities and county governments to really assess their posture as it relates to security in those areas. It's a great opportunity for us to work together as institutions, as well as work with the state in general, to increase our posture as it relates to security.
>> Dave, what brings IU to Supercomputing 2022?
>> We've been here for a long time. And I think one of the things that we're always interested in is, what's next? What's new? There are so many network vendors, software vendors, hardware vendors, high performance computing suppliers. What is out there that we're interested in? IU runs a large Cray system in Indiana called Big Red 200. And with any system, you procure it, you get it running, you operate it, and your next goal is to upgrade it. So what's out there that we might be interested in? That, I think, is why we come. We also like to showcase what we do at IU. If you come by the booth you'll see the OmniSOC; there's some video on that. The GlobalNOC, which I manage, supports a lot of the R&E institutions in the country. We talk about that. It's being able to have a place for people to come and see us. If you stand by the booth long enough, people come and find you and want to talk about a project they have, or a collaboration they'd like to partner on. We had a guy come by a while ago wanting a job. Those are all good things having a big booth can do for you.
>> Well, so on that subject, in each of your areas of expertise and your purview, are you kind of interleaved with the academic side of things on campus? Do you include students? I mean, I would think it would be a great source of cheap labor for you, at least. Or is there kind of a wall between what you guys are responsible for and what students do?
>> Absolutely, we try to support faculty and students as much as we can. And just to go back a little bit on the OmniSOC discussion, one of the things that we provide is internships for each of the universities that we work with. They have to sponsor at least three students every year and make that financial commitment. We bring them on site for three weeks. They learn alongside our other information security analysts and work in a real world environment, and gain those skills to be able to go back to their institutions and do additional work there. So it's a great program for us to work with students. I think the other thing that we do is provide, obviously, the infrastructure that enables our faculty members to do the research that they need to do, whether that's through Big Red 200, our supercomputer, or just kind of the everyday infrastructure that allows them to do what they need to do.
We have an on-premises environment called our Intelligent Infrastructure, through which we provide managed access to hardware and storage resources in a way that we know is secure, and they can utilize that environment to do virtually anything that they need in a server environment.
>> Dave, I want to get back to the GigaPOP, which you mentioned earlier; you're the managing director of the Indiana GigaPOP. What exactly is it?
>> Well, the GigaPOP, and there are a number of GigaPOPs around the country, is really the aggregation facility for Indiana and all of the universities in Indiana to connect to outside resources. The GigaPOP has connections to Internet2, the commodity internet, ESnet, and the Big Ten or BTAA network in Chicago. It's a way for all universities in Indiana to connect to a single source to allow them to connect nationally to research organizations.
>> And what are the benefits of having this collaboration of universities?
>> If you think of a researcher at Indiana who wants to do something with a researcher in Wisconsin, they both connect to their research networks in Wisconsin and Indiana, and they have essentially a direct connection. There's no commodity internet, there's no throttling of capacity. Both networks and the interconnects, because we use Internet2, are essentially unthrottled access for the researchers to do anything they need to do. It's secure, it's fast, it's easy to use; in fact, so easy they don't even know that they're using it. We just manage, organize, and configure the networks in a way that's the path of least resistance, and that's the path traffic will take. And that's nationally. There are lots of these that are interconnected in various ways. I do want to get back to the labor point, just for a moment. (laughs) Because...
>> You're here to claim you're not violating any labor laws. Is that what you're going to be?
>> I'm here to hopefully hire, to get more people interested in coming to IU.
>> Stop by the booth.
>> It's a great place to work.
>> Exactly.
>> We hire lots of interns, and in the network space hiring really experienced network engineers is really hard to do; it's hard to attract people. And these days, when you can work from anywhere, you don't have to be any place to work for anybody. We try to attract as many students as we can. And really we're exposing them to an environment that exists in very few places: tens of thousands of wireless access points, big fast networks, interconnections with national and international networks. We support the NOAA network, which supports satellite systems and secure traffic. It really is a very unique experience, and you can come to IU, spend lots of years there, and never see the same thing twice. We think we have an environment that's really a good way for people to come out of college or graduate school, work for some number of years, and hopefully stay at IU, but if not, leave and get a good job and talk well about IU. In fact, the wireless network today here at SC was installed and is managed by the person who manages our campus wireless network, James Dickerson. That's the kind of opportunity we can provide people at IU.
>> Aaron, I'd like to ask, you hear a lot about everything moving to the cloud these days, but in the HPC world I don't think that move is happening as quickly as it is in some areas. In fact, there's a good argument some workloads should never move to the cloud. You're having to balance these decisions.
Where are you on the thinking of what belongs in the data center and what belongs in the cloud?
>> I think our approach has really been specific to what the needs are. As an institution, we've not pushed all our chips in on the cloud, whether it be for high performance computing or otherwise. It's really looking at what the specific need is and addressing it with the proper solution. We made an investment several years ago in a data center internally, and we're leveraging that through the Intelligent Infrastructure that I spoke about. But really it's addressing what the specific need is and finding the specific solution, rather than going all in, in one direction or another. I don't know if Jetstream is something that you would like to bring up as well.
>> By having our own data center and having our own facilities, we're able to compete for NSF grants and work on projects that provide shared resources for the research community. Jetstream is a project that does that. Without a data center, and without the ability to work on large projects, we don't have any of that. If you don't have that, then you're dependent on someone else. What we like to say, what we are proud of, is that people come to IU and ask us if they can partner on our projects. Without a data center and those resources, we are the ones who have to go out and say, can we partner on your project? We'd like to be the leaders in that space.
>> I wanted to kind of double click on something you mentioned. Couple of things. Historically, IU has been, I'm sure, closely associated with Chicago. You think of what students are thinking of doing when they graduate. Maybe they're going to go home, but the sort of center of gravity is, like, Chicago. You mentioned, especially post pandemic, the idea that you can live anywhere. Not everybody wants to live in Manhattan or Santa Clara. And of course, technology over decades has given us the ability to do things remotely, and IU is plugged into the globe; it doesn't matter where you are. But have you seen, either during or post pandemic, 'cause we're really in the early stages of this, are you seeing that? Are you seeing people say, hey, thinking about their family, where do I want to live? Where do I want to raise my family? I'm in academia, and no, I don't want to live in Manhattan. Hey, we can go to IU and we're plugged into the globe. And then students in California, we see this: there are some schools on the central coast where people loved living there when they were in college, but there was no economic opportunity there. Are you seeing a shift? Are houses in Bloomington basically becoming unaffordable because people are saying, you know what, I'm going to stay here? What does that look like?
>> I mean, for our group there are a lot of people who do work from home and have chosen to stay in Bloomington. We have had some people who, for various reasons, want to leave. We want to retain them, so we allow them to work remotely. And that has turned into a tool for recruiting. The kid that graduates from Caltech and doesn't want to stay at Caltech in California, we have an opportunity now; he can move to wherever between here and there, and we can hire him to do the work. We love to have people come to Indiana. We think it is a unique experience, and Bloomington and Indianapolis are great places. But I think the reality is, we're not going to get everybody to come live here and be a Hoosier, so how do we get them to come and work at IU?
In some ways it's disappointing when we don't have buildings full of people, but 40 panes in a Zoom or Teams window is not quite the same thing. I think this is what we're going to have to figure out: how do we make this kind of environment work?
>> Last question here, to give you a chance to put in a plug for Indiana University. For those data scientists, those researchers who may be open to working somewhere else, why would they come to Indiana University? What's different about what you do from what every other academic institution does, Aaron?
>> Yeah, I think a lot of it is what we just talked about today: from a network's perspective, we're plugged in globally. I think if you look beyond the networks, there are tremendous opportunities for folks to come to Bloomington and experience some bleeding edge technology and work with some very talented people. I've been amazed; I've been at IU for 20 years, and as I look at our peers across higher ed, well, I don't want to say they're not doing as well, but I do want to brag about how well we're doing in terms of organizationally addressing things like security in a centralized way that really puts us in a better position. We're just doing a lot of things that I think some of our peers are catching up to, and have been catching up to, over the last 10, 12 years.
>> And I think the sheer scale of IU goes unnoticed at times. IU has the largest medical school in the country, one of the largest nursing schools in the country. And people just kind of overlook some of that. Maybe we need to do a better job of talking about it. But for those who are aware, there are a lot of opportunities in the life sciences, healthcare, the social sciences. IU has the largest logistics program in the world. We teach more languages than anybody else in the world. The varying kinds of things you can get involved with at IU, including networks, are I think pretty unparalleled.
>> Well, making the case for high performance computing in the Hoosier State. Aaron, Dave, thanks very much for joining us; you make a great case.
>> Thank you.
>> Thank you.
>> We'll be back right after this short message. This is theCUBE. (upbeat music)

Published Date : Nov 16 2022
