Florian Berberich, PRACE AISBL | SuperComputing 22
>>We're back at Supercomputing 22 in Dallas, winding down day four of this conference. I'm Paul Gillin, with my co-host Dave Nicholson. We've been talking supercomputing all week, and you hear a lot about what's going on in the United States, what's going on in China and Japan. What we haven't talked a lot about is what's going on in Europe. Did you know that two of the top five supercomputers in the world are actually from European countries? Well, our guest has a lot to do with that. Florian Berberich, I hope I pronounced that correctly, German is not my strength, is the operations director for PRACE AISBL. And let's start with that. What is PRACE? >>So, hello, and thank you for the invitation. I'm Florian, and PRACE is the Partnership for Advanced Computing in Europe. It's a non-profit association with its seat in Brussels, in Belgium, and we have 24 members. These are representatives from different European countries dealing with high performance computing in their home countries. So far, we provided the resources for our European research communities, but this changed in the last year with the EuroHPC Joint Undertaking, which put a lot of funding into high performance computing and co-funded five petascale and three pre-exascale systems. Two of the pre-exascale systems you mentioned already: these are LUMI in Finland and Leonardo in Bologna, Italy, which were placed third and fourth on the TOP500 list. >>So why is it important that Europe be in the top list of supercomputer makers? >>I think Europe needs to keep pace with the rest of the world, and simulation science is a key technology for society. We saw this very recently with the COVID pandemic: we were able to help the research communities find vaccines very quickly and understand how the virus spread around the world, and all this knowledge is important to serve society. Another example is climate change. Yeah. 
With these new systems, we will be able to predict the coming changes more precisely. The more compute power you have, the finer the grid resolution you can choose, and the lower the error will be in the projections. So with these systems, the big challenges we face can be addressed: climate change, energy, food supply, security. >>Who are your members? Do they come from businesses? Do they come from research, from government? All of the above? >>Yeah. Our members are public organizations: universities, research centers, compute sites and data centers, but all public institutions. And we provide these services to the research community for free, via a peer review process with excellence as the most important criterion. >>So 40 years ago, and maybe I'm getting the dates a little bit wrong, when the EU was just an idea, along with the idea of a common currency, reducing friction between borders to create a trading zone, there was a lot of focus there. Fast forward to today: would you say that these efforts in supercomputing would be possible if there were not an EU superstructure? >>No, I would say this would not be possible to this extent. European initiatives are needed, and the European Commission is supporting these initiatives very well. Before PRACE, for instance in 2008, there were research centers and data centers operating high performance computing systems, but they were not talking to each other; it was isolated. PRACE created a community of operating sites, facilitated the exchange between them, and also enabled us to align investments and get the most out of the available funding. And at that time, and still today, it is very hard for one single country in Europe to provide all the different architectures needed for all the different kinds of research communities and applications. 
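The resolution-versus-cost tradeoff Berberich describes above, finer grids, lower error, more compute, can be sketched roughly. This is an illustrative back-of-the-envelope model, not PRACE's actual accounting: it assumes an explicit 3D time-stepping code whose time step shrinks in proportion to the grid spacing (a CFL-type constraint).

```python
def relative_cost(refinement: int) -> int:
    """Relative compute cost of refining each spatial dimension by `refinement`.

    Cell count grows as refinement**3 in 3D, and a CFL-limited time step
    shrinks as 1/refinement, so total work scales roughly as refinement**4.
    """
    return refinement ** 4

# Halving the grid spacing (refinement=2) costs roughly 16x the compute,
# which is why sharper climate projections demand ever-larger machines.
print(relative_cost(2))   # 16
print(relative_cost(10))  # 10000: a 10x finer grid needs ~10,000x the work
```

Under these assumptions, each step down in grid spacing multiplies the work by a fourth power, which is why pre-exascale systems like LUMI and Leonardo matter for climate modeling.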
If you want to always offer the latest technologies, that is hardly possible for one country alone. So with this joint action, opening the resources to research groups from other countries, we were able to give different communities access to the latest technology at any given time. >>So the two systems that you mentioned are physically located in Finland and in Italy, but if you were to walk into one of those facilities and meet the people there, they're not just Finns in Finland and Italians in Italy. This is very much a European effort. >>This is true. >>So in that sense, the geography is sort of abstracted, and the issues of sovereignty that might take place in the private sector don't exist. Or are there issues? What are the requirements for a researcher to have access to a system in Finland versus a system in Italy? If you've got an EU passport, are you good to go? >>I think you are good to go, though with the EU passport it now becomes complicated and political. If we talk about the recent systems, well, first let me start with PRACE. PRACE was inclusive, and there were no constraints; we even had users from the US and Australia. We wanted just to support excellence in science, and we did not look at the nationality of the organization, of the PI, and so on. There were quotas, but these quotas were very generously interpreted. Now, with the EuroHPC Joint Undertaking, it's a question of which European funds the systems were procured from, and if a country is associated to this funding, its researchers also have access to these systems. This addresses basically the UK and Switzerland, which are not in the European Union but were associated to the Horizon 2020 research framework. 
And so they can access the systems now available, LUMI and Leonardo, and the petascale systems as well. How this will develop in the future, I don't know; it depends on which research framework they will be associated with. >>What are the outputs of your work at PRACE? Are they reference designs? Is it actual semiconductor hardware? Is it the research? What do you produce? >>So the applications and simulations we run cover all the different scientific domains. It's science, but we also have industrial-led projects with more application-oriented targets: aerodynamics, for instance, for cars or planes. But also fundamental science, like elementary particle physics, climate change, biology, drug design, protein folding, all these >>Things. Can businesses be involved in what you do? Can they purchase your research? Do they contribute to it? I'm sure there are many technology firms in Europe that would like to be involved. >>So, on involving industry: our calls are open, and if they want to do open R&D, they are invited to submit proposals as well. They will be evaluated, and if a proposal qualifies, they will get access and can run their jobs and simulations. It's a little bit more tricky if it's production use, if they use these resources for their business and do not publish the results. There are some sites who are able to deal with these requests, some more than others, but this is on a smaller scale, definitely. >>What does the future hold? Are there other countries who will be joining the effort, other institutions? Do you plan to expand your scope? >>Well, I think the EuroHPC Joint Undertaking, with 36 member states, already covers even more than Europe. And clearly, if there are other states interested in joining, there is no limitation. 
Although the focus lies on the European area and on the Union. >>When you interact with colleagues from North America, do you feel that there is a sort of European flavor to supercomputing that is different, or are we so globally entwined? >>No. Research is not national, it's not European, it's international. This is very clear. We have a longstanding collaboration with our US colleagues, and also with Japan, South Africa, and Canada. When COVID hit the world, we were able within two weeks to establish regular seminars inviting US and European colleagues to talk to each other, exchange results, find new collaborations, and boost the research activities. And I have other examples as well. We already did joint calls, XSEDE in the US and PRACE in Europe, and it was a very interesting experience. We received applications from different communities, and we decided that we would review them on our side with European experts, and the US did it with their experts. And you can guess what the result was: at the meeting when we compared our results, it was matching one by one. It was exactly the same result. >>It's refreshing to hear a story of global collaboration, where people are getting along and making meaningful progress. >>I have to point out, you did not mention China as a country you were collaborating with. Is that intentional? >>Well, with China we definitely have fewer links, but collaboration does exist. There was an initiative to look at the development of the technologies, and that group meets on a regular basis, with Chinese colleagues involved. It's on a lower level. >>Yes, but the conversations are occurring. We're out of time. Florian Berberich, operations director of PRACE, the European supercomputing collaborative, thank you so much for being with us. 
I'm always impressed when people come on theCUBE and submit to an interview in a language that is not their first language. >>Absolutely. Brave to do that. >>Thank you. You're welcome. >>We'll be right back after this break from Supercomputing 22 in Dallas.
Jay Boisseau, Dell Technologies | SuperComputing 22
>>We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host Paul Gillin, with my co-host Dave Nicholson, and we've been talking to so many smart people this week, it just boggles my mind. Our next guest, Jay Boisseau, is the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas. And I'm guessing you were up watching the Artemis launch the other night? >>I wasn't. I really should have been, but I wasn't; I was in full supercomputing conference mode. So that means discussions at various venues with people into the wee hours. >>How did you make the transition from a PhD in astronomy to an HPC expert? >>It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, which is a class of stars that blow up. And it's a very important class because they blow up almost exactly the same way. So if you know how bright they are physically, not just how bright they appear in the sky, if you can determine from first principles how bright they are, then you have a standard ruler for the universe: when one goes off in a galaxy, you know from how faint it appears about how far the galaxy is. To model these, though, you had to understand equations of physics, including electron degeneracy pressure as well as normal fluid dynamics. You were solving for an explosive burning front ripping through something, and that required a supercomputer to have anywhere close to the fidelity to get a reasonable answer and, hopefully, some understanding. 
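The standard-candle idea Boisseau describes, knowing a Type Ia supernova's intrinsic brightness and comparing it with how faint it appears, reduces to the classical distance-modulus relation. A minimal sketch follows; the peak absolute magnitude of about -19.3 is a commonly quoted round number, used here only for illustration:

```python
def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Distance from the distance modulus m - M = 5 * log10(d / 10 pc).

    Because Type Ia supernovae blow up almost exactly the same way,
    their peak absolute magnitude M is nearly identical from event to
    event, so a measured apparent magnitude m yields distance directly.
    """
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A supernova peaking at apparent magnitude 24.7, assuming M = -19.3:
d_pc = distance_parsecs(24.7, -19.3)
d_mpc = d_pc / 1e6  # parsecs -> megaparsecs, roughly several thousand Mpc
```

A sanity check on the formula: when the apparent and absolute magnitudes are equal, the distance comes out at exactly 10 parsecs, which is the defining convention of absolute magnitude.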
>>That's a whole different, >>So, well I guess super computing being a natural tool that you would use. What is, what do you do in your role as a strategist? >>So I'm in the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive of what they've got, but always wanting to know what's coming next. Because if you think about it, we can't simulate the entire human body cell for cell on any supercomputer day. We can simulate parts of it, cell for cell or the whole body with macroscopic physics, but not at the, you know, atomic level, the entire organism. So we're always trying to build more powerful computers to solve larger problems with more fidelity and less approximations in it. And so I help people try to understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning about which technologies they have. Cause it feeds the first thing, right? So understanding what's coming, and Dell has a, we're very proud of our large partner ecosystem. We embrace many different partners in that with different capabilities. So understanding those helps understand what your future systems might be. That those are two of the major roles in it. Strategic customers and strategic technologies. >>So you've had four days to wander the, this massive floor here and lots of startups, lots of established companies doing interesting things. What have you seen this week that really excites you? >>So I'm gonna tell you a dirty little secret here. 
If you are working for someone who makes supercomputers, you don't get as much time to wander the floor as you would think, because you get lots of meetings with people who really want to understand, in an NDA way, not just what's public on the floor: what are you not telling us, what's coming next? So I've been in a large number of customer meetings as well as being on the floor. And while I obviously can't share everything that's a non-disclosure topic, some things we're hearing a lot about: people are really concerned with power, because they see the TDP on the roadmaps for all the silicon providers going way up. And with power comes heat as waste, and that means cooling. So power and cooling has been a big topic here. Obviously accelerators are increasing in importance in HPC, not just for AI calculations but now also for simulation calculations, and we are very proud of the three new accelerator platforms we launched here at the show that are coming out in a quarter or so. Those are two of the big topics we've seen. As you walk the floor here, you see lots of interesting storage vendors. The HPC community has been doing storage the same way for roughly 20 years, but now we see a lot of interesting players in that space. We have some great things in storage now, and some great things coming in a year or two as well, so it's interesting to see that diversity. And then there's always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical-quantum computing system here with IonQ, and I can't say what the future holds in this format, but I can say we believe strongly in the future of quantum computing, and that that future will be integrated with the kind of classical computing infrastructure that we make, which will help make quantum computing more powerful downstream. 
>>Well, let's go down that rabbit hole, because quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago; some of the major vendors were announcing quantum computers in the cloud. The excitement has kind of died down, and we don't see a lot of talk around commercial quantum computers, yet you're deep into this. How close are we to having a true quantum computer, or is a hybrid more likely? >>So there are probably more than 20, I think close to 40, companies trying different approaches to make quantum computers. Microsoft is pursuing a topological approach, others a photonics-based approach, IonQ an ion-trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use them in other technologies, we know the physics, but the engineering is very difficult. Just like it was difficult at one point to split the atom, it's very difficult to build technologies that leverage the quantum properties of nature in a consistent, reliable, and durable way. So I wouldn't want to make a prediction, but I will tell you I'm an optimist. I believe that when a tremendous capability with tremendous monetary potential lines up with another incentive, national security, engineering seems to evolve faster; when those things line up, when there's plenty of investment and plenty of incentive, things happen. So I think my friends in the office of the CTO at Dell Technologies, who are really leading this effort for us, would say a few to several years. I'm an optimist, so I believe that we will sell some of the solution we announced here in the next year to people who are trying to get their feet wet with quantum. 
And I believe we'll be selling multiple hybrid classical-quantum Dell computing systems a year within a year or two, and then of course you hope it goes to tens and hundreds by the end of the decade. >>When people talk about, and I'm talking about people writ large, leaders in supercomputing, I would say Dell's name doesn't come up in the conversations I have. What would you like them to know that they don't know? >>You know, I hope that's not true, but I guess I understand it. We are so good at making the products from which people make clusters: we're number one in servers, we're number one in enterprise storage, we're number one in so many areas of enterprise technology that I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But depending on which analyst you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest, splashiest systems. We do the Frontera system at TACC, and the HPC5 system at Eni in Europe; those are the largest academic supercomputer in the world and the largest industrial supercomputer. >>Both based on Dell? >>On Dell hardware, yep. But I think our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world, and those problems come at various scales. So we are really focused on democratizing HPC, to make it easier for more people to get in at whatever scale their budget and workloads require, and we're optimizing it to make sure that it's not just parts they're getting, but systems validated to work together with maximum scalability and performance. And we have a great HPC and AI Innovation Lab that does this engineering work. 
Because, you know, one of the myths is: oh, I can just go buy a bunch of servers from company X, a network from company Y, and a storage system from company Z, and it'll all work as an equivalent cluster, right? Not true. It'll probably work, but it won't have the highest performance, scalability, or reliability. So we spend a lot of time optimizing, and then we are doing things to try to advance the state of HPC as well. What our future systems look like in the second half of this decade might be very different from what they look like right now. >>You mentioned a great example of a limitation that we're running up against right now: an entire human body as an organism. >>Or any large system that you try to model at the atomic level, but which is a huge macro system. >>Right. So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances, as opposed to qualitative advances? Right now, as an example, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next-gen stuff, and those next-gen microprocessors, GPUs and CPUs, are gonna be plugged into next-gen motherboards: PCIe Gen 5, with Gen 6 coming, faster memory, bigger memory, faster networking, whether it's Ethernet or InfiniBand, storage controllers, all bigger, better, faster, stronger. And I suspect, I don't know, but I suspect that a lot of the systems out there are not necessarily on what we would think of as current-generation technology; maybe they're n minus one as a practical matter. >>Yeah, I mean, they have a lifetime. >>Exactly, the lifetime is longer than the evolution of the technology. >>That's normal with these technologies, yeah. 
So what some people miss is this reality: when we move forward with the latest things being talked about here, it's often a two-generation move for an individual organization. >>Yep. So now, some organizations will have multiple systems, and the systems leapfrog in technology generations: even if one is their really large system, the next one might be newer technology but smaller, and the one after that larger again with newer technology. So the biggest supercomputing sites are often running more than one HPC system, each specifically designed and configured with the latest technologies for maybe a different subset of their workloads. >>Yeah. So, to go back to the core question: in your opinion, do we need that qualitative leap, to something like quantum computing, or is it simply a question of scale and power at the individual node level, to get us to the point where we can in fact gain insight from a digital model of an entire human body, not just looking at an organ? And to your point, it's not just about the human body, but any system that we would characterize as chaotic today, a weather system, whatever. Are there any milestones you're thinking of where you're like: wow, I understand everything that's going on, and we're a compute generation away from being able to gain insight out of systems that right now we can't, simply because of scale? It's a very long question that I just asked you, but hopefully you're tracking it. What are these inflection points in your mind? >>So I'll start simple. 
Remember when we used to buy laptops and we worried about what gigahertz the clock speed was? >>Exactly. >>Everybody knew the gigahertz of it, right? There are some tasks at which we're now so good at making the hardware that the primary issues are: how great is the screen, how light is it, what's the battery life like, et cetera. Because for the set of applications on there, we have enough compute power. Most people don't need their laptop to have a twice-as-powerful processor; they'd rather have twice the battery life, or whatnot, right? We make great laptops; we design and configure for all of those parameters now, and we see some customers want more of X, some want more of Y. But the general point is that the amazing progress in microprocessors is sufficient for most of the workloads at that level. Now let's go to the HPC level, the scientific and technical level. If you're trying to model the orbit of the moon around the earth, you don't really need a supercomputer for that. You can get a highly accurate model on a workstation or a server, no problem. It won't even make it break a sweat. >>I had to do it with a slide rule. >>That might make you break a sweat. Yeah. But that's a single body orbiting with another body; I say orbiting around, but we both know they're really both orbiting the common center of mass, it's just that if one is much larger, it seems like one goes entirely around the other. So that's not a supercomputing problem. What about the stars in a galaxy, trying to understand how galaxies form spiral arms and how those spur star formation? Now you're talking a hundred billion stars, plus a massive amount of interstellar medium in there. So can you solve that on that server? Absolutely not, not even close. Can you solve it on the largest supercomputer in the world today? 
Yes and no. You can solve it with approximations on the largest supercomputer in the world today, but there are a lot of approximations that go into even that. The good news is the simulations produce things that we see through our great telescopes, so we know the approximations are sufficient to get good fidelity. But you're not really doing direct numerical simulation of every particle, which is impossible; you'd need a computer as big as the universe to do that. The approximations, and the known parts of the science, are good enough to give fidelity. So, to answer your question: there is a tremendous range of problem scales. There are problems in every field of science and study that exceed the direct numerical simulation capabilities of systems today, and so we always want more computing power. It's not macho flops, it's real: we need exaflops, and we will need zettaflops beyond that, and yottaflops beyond that. But an increasing number of problems will be solved as we keep working toward the ones that are farther out there. So in terms of qualitative steps, I do think technologies like quantum computing, to be clear, as part of a hybrid classical-quantum system, because they're really just accelerators for certain kinds of algorithms, not for general-purpose algorithms, I do think advances like that are gonna be necessary to solve some of the very hardest problems. It's easy to formulate an optimization problem that is absolutely intractable for the largest systems in the world today, but quantum systems, in theory, when they're big and stable enough, happen to be great at that kind of problem. >>That should be understood: quantum is not a cure-all for the shortage of computing power. It's very good for certain >>Problems. And as you said, at this Supercomputing we see some quantum, but it's a little bit quieter than I probably expected. 
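The jump Boisseau describes, from the Moon-Earth two-body problem to a galaxy of a hundred billion stars, is easy to quantify. A direct (brute-force) gravitational sum touches every pair of bodies each time step, which is why real galaxy simulations lean on approximations such as tree codes or particle-mesh methods; the sketch below just counts the pairs:

```python
def pairwise_interactions(n_bodies: int) -> int:
    """Distinct gravitational pairs in one step of a direct N-body sum."""
    return n_bodies * (n_bodies - 1) // 2

print(pairwise_interactions(2))        # 1: Moon-Earth fits on a workstation
print(pairwise_interactions(100_000))  # ~5e9 pairs: supercomputer territory
# 1e11 stars would need roughly 5e21 pair evaluations per step,
# far beyond direct summation on any machine, hence the approximations.
```

The quadratic growth in pair count is the core reason "more compute power" never stops mattering: each order of magnitude in body count costs two orders of magnitude in direct-sum work.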
I think we're in a period now of everybody saying: okay, there's been a lot of buzz, we know it's gonna be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those >>At the show. We have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately we only have so many minutes, and we're out of them. Jay Boisseau, HPC and AI technology strategist at Dell, thanks for a fascinating conversation. >>Thanks for having me. Happy to do it anytime. >>We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillin with Dave Nicholson. Stay with us.
Satish Iyer, Dell Technologies | SuperComputing 22
>>We're back at Supercomputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? >>Oh, it's crazy. I mean, any time you have NASA presentations going on, and steampunk iterations of cooling systems... you know, it's, it's >>The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin. With us is Satish Iyer. He is the vice president of emerging services at Dell Technologies. Satish, thanks for joining us on theCUBE. >>Thank you, Paul. >>What are emerging services? >>Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. We especially focus on all the growth vectors for the company. >>And one of the key areas that comes under your jurisdiction is called Apex. Now, I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >>Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. So this is our way of actually bringing the cloud experience to our customers, on-prem and in colo. >>But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as >>A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but, you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >>You mentioned something just ahead of us going on air, a great way to describe Apex: to contrast Apex with CapEx. There's no C; there's no cash up front necessary. Yeah, I thought that was great. Explain that a little more.
>>Well, I mean, you know, one of the main things about cloud is the consumption model, right? Customers would like to pay for what they consume; they would like to pay in a subscription. They would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud, and I think it's important for us to recognize that. Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet for how we want to design Apex. >>And among those services are high performance computing services. Now, I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>Yeah, I mean, you know, this conference is great. Like you said, there are so many HPC and high performance computing folks here. But one of the things is, you know, fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call it an Apex HPC offer, it brings a lot of the cloud economics and the cloud, you know, experience to the HPC offer. So fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes a lot of the day-to-day management of the infrastructure on our own, so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workloads, which they actually run on the HPC ecosystem. So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it, across, you know, proven designs which Dell implements across these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it HPC?
Ah, that's a great question. So, I mean, you know, this is a platform, right? We are not just selling infrastructure by the drink. Fundamentally, you know, we launched two validated designs, one for life sciences, one for manufacturing. So we actually know how these pieces work together; they're validated, tested solutions. And it's a platform, so we actually integrate the software on top. It's not just the infrastructure. We integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things, customers would have to do by themselves, right, if they buy the infrastructure. So basically we are giving a platform, an ecosystem, for our customers to run their workloads on, and making it easy for them to actually consume those. >>Now, is this available on premises for customers? >>Yeah, we make it available to customers both ways. We make it available on-prem for customers who want to, you know, take that economics. We also make it available in a colo environment if the customers want to actually, you know, extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork? >>Well, I think, you know, fundamentally it starts off with what the actual use case is, right? If you really look at the two validated designs we talked about, one for healthcare and life sciences and the other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems.
So, you know, the customers initially figure out, okay, do they require something which is going to involve a lot of memory-intensive loads, or do they require something which has a lot of compute power? So, you know, it all depends on what they would require in terms of the workloads. And then we do have t-shirt sizing. We have small, medium, large; we have, you know, multiple infrastructure options, CPU core options. Sometimes the customer would also want to say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations a customer typically makes as part of the ecosystem, right? And those are things they would talk to us about, to say, okay, what is my best option in terms of, you know, the kind of workloads I want to run? And then they can make a determination in terms of how they would actually go. >>So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of rolling thunder from various partners that you have, you know? >>Yep. >>We're all expecting that Intel is going to be rolling out new CPU sets. From a PowerEdge perspective, you have your 16th generation of servers coming out, PCIe Gen 5, and all of the components from partners like NVIDIA and Broadcom, et cetera, plugging into them. >>Yep. >>What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally, and who are likely to be on not 15th generation servers, but probably more like 14th? >>Yeah. >>You're offering a pretty huge uplift. >>Yep. >>What do those conversations look like? >>I mean, customers... so talking about partners, right?
I mean, of course, Dell, you know, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, AMD, Broadcom, right? All the chip vendors, all the way to the software layer, right? So we have cluster managers, we have Kubernetes orchestrators. So usually what we do is we bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually is awesome, right? So we actually do that. And, you know, one of the main aspects, especially when you talk about bringing these things as a service: we take a lot of the guesswork away from our customer, right? You know, one good example in HPC is capacity, right? These are very, you know, I would say very intensive systems, very complex systems, right? So customers would like to buy a certain amount of capacity, and they would like to grow and, you know, come back, right? So giving them the flexibility to actually consume more if they want, giving them the buffer, and coming back down; all of those things are very important as we actually design these things, right? Customers are given a choice, but they don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus using Apex, is there a crossover point, typically, at which it's probably a better deal for them to go on-prem? >>Yeah, I mean, specifically talking about HPC, right?
I mean, you know, a lot of customers consume high performance compute in the public cloud, right? That's not going to go away, right? But there are certain reasons why they would look at on-prem, or they would look at, for example, a colo environment, right? One of the main reasons they would like to do that purely has to do with cost, right? These are pretty expensive systems, right? There is a lot of ingress and egress; there is a lot of data going back and forth, right? In the public cloud, you know, it costs money to put data in or to actually pull data back, right? And the second one is data residency and security requirements, right? A lot of these things are probably proprietary sets of information. We talked about life sciences; there's a lot of research, right? Manufacturing: a lot of these things are just-in-time decision making, right? You are on a factory floor; you've got to be able to do that now. There is a latency requirement. So I mean, I think a lot of things play into this outside of just cost, but data residency requirements and ingress and egress are big things. And when you're talking about massive amounts of data you want to put in and pull back, they would like to kind of keep it close, keep it local, and, you know, get a good price point. >>Nevertheless, I mean, we were just talking to Ian Colle from AWS, and he was talking about how customers have the need to sort of move workloads back and forth between the cloud and on-prem. That's something that they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>Yeah,
There are, you know, customers also consume on-prem, the customers also consuming Kohler. And we also have like Dell's amazing piece of software like storage software. You know, we make some of these things available for customers to consume a software IP on their public cloud, right? So, you know, so this is our multi-cloud strategy. So we announced a project in Alpine, in Delta fold. So you know, if you look at those, basically customers are saying, I love your Dell IP on this, on this product, on the storage, can you make it available through, in this public environment, whether, you know, it's any of the hyper skill players. So if we do all of that, right? So I think it's, it shows that, you know, it's not always tied to an infrastructure, right? Customers want to consume the best thumb and if we need to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on hpc. We have, we have two container orchestrators we have to support. We, we, we have aner similarity, we also have a container options to customers. Both options. >>What kind of customers are you signing up for the, for the HPC offerings? Are they university research centers or is it tend to be smaller >>Companies? It, it's, it's, you know, the last three days, this conference has been great. We probably had like, you know, many, many customers talking to us. But HC somewhere in the range of 40, 50 customers, I would probably say lot of interest from educational institutions, universities research, to your point, a lot of interest from manufacturing, factory floor automation. A lot of customers want to do dynamic simulations on factory floor. That is also quite a bit of interest from life sciences pharmacies because you know, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. 
So yeah, quite a bit of interest, definitely, from academics, from life sciences, from manufacturing. We also have a lot of financials, big banks, you know, who want to simulate a lot of, you know, brokerage data, a lot of financial data, because we have some, you know, really optimized hardware we announced at Dell especially for financial services. So there's quite a bit of interest from financial services as well. >>That was great. We often think of Dell as the organization that democratizes all things in IT, eventually. And in that context, you know, this is Supercomputing 22; HPC is like the little sibling trailing behind the supercomputing trend. But we definitely have seen this move out of just purely academia into the business world, and Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >>Yeah, it's been less than two years. >>How are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18-month to three-year upgrade cycle of CapEx? >>Yeah, I mean, look, I think there is absolutely strong momentum for Apex, and like Paul pointed out earlier, we started with, you know, making the infrastructure and the platforms available to customers to consume as a service, right? We have options for customers, you know, where Dell can fully manage everything end to end and take a lot of the pain points away, like we talked about, because, you know, we are basically managing a cloud-scale environment for the customers. We also have options where customers would say, you know what, I actually have a pretty sophisticated IT organization; I want Dell to manage the infrastructure, but only up to this level, up to the guest operating system, and I'll take care of the rest, right?
So we are seeing customers who are coming to us with various requirements, in terms of saying, I can do up to here, but you take all of this pain away from me, or, you do everything for me. It all depends on the customer. So we do have wide interest. Our products and the portfolio set in Apex are expanding, and we are also learning, right? We are getting a lot of feedback from customers in terms of what they would like to see in some of these offers, like the example we just talked about, in terms of making some of the software IP available in a public cloud, where they'll look at Dell as a software player, right? That's also absolutely critical. So I think we are giving customers a lot of choices, and, you know, we are democratizing, like you said, expanding in terms of the customer choices. >>And I think it's... we're almost out of time, but I do want to be sure we get to Dell validated designs, which you've mentioned a couple of times. What's the purpose of these designs? How specific are they? >>They are... I mean, you know, again, we look at these industries, right? And we look at understanding exactly how they operate. I mean, we have a huge embedded base of customers utilizing HPC across our ecosystem at Dell, right? So a lot of them are CapEx customers, and we actually do have an active customer profile. So these validated designs take into account a lot of customer feedback, a lot of partner feedback, in terms of how they utilize this. And when you build these solutions, which are kind of end-to-end and integrated, you need to start anchoring on something, right? And a lot of these things have different characteristics. So these validated designs basically prove to us that, you know, they give a very good jumping-off point for customers. That's the way I look at it, right?
So a lot of them will come to the table with... they don't come with a blank sheet of paper. When they say, oh, you know what, these are the characteristics of what I want, I think this is a great point for me to start from, right? So I think that gives them that, and plus it's the power of validation, really, right? We test, validate, integrate, so they know it works, right? So all of those are absolutely critical when you talk to customers. >>And you mentioned healthcare, you mentioned manufacturing. Other designs? >>We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yep, we are expanding all those Dell Validated Designs so that we can give our customers a choice. >>We're out of time. Satish Iyer, thank you so much for joining us. You are at the center of the move to subscription, to everything as a service, where everything is on a subscription basis; you really are on the leading edge of where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you, Dave. >>Paul Gillin with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon. Stay with us; there's more to come.
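One point from the conversation above, that public cloud ingress and egress charges factor heavily into where HPC workloads land, can be made concrete with a back-of-the-envelope sketch. The per-gigabyte rate below is a hypothetical placeholder for illustration, not any provider's actual price list:

```python
# Toy egress-cost estimate for pulling HPC results back out of a
# public cloud. The $0.09/GB rate is a hypothetical placeholder,
# not an actual cloud provider price.

def egress_cost_usd(terabytes_out: float, rate_per_gb: float = 0.09) -> float:
    """Cost of moving data out of a cloud at a flat per-gigabyte rate."""
    return terabytes_out * 1000 * rate_per_gb  # 1 TB = 1000 GB (decimal)

# Repatriating 100 TB of simulation output at the hypothetical rate:
print(round(egress_cost_usd(100), 2))  # 9000.0
```

At data volumes typical of HPC, that line item alone can dominate the comparison against on-prem or colo, which is the cost argument being made in the interview.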
Ian Colle, AWS | SuperComputing 22
(lively music) >>Good morning. Welcome back to theCUBE's coverage of Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? >>It's been a fascinating morning. Three days in, and a fascinating guest: Ian from AWS. Welcome. >>Thanks, Dave. >>What are we going to talk about? Batch computing, HPC. >>We've got a lot; let's get started. Let's dive right in. >>Yeah, we've got a lot to talk about. I mean, the first thing is, we recently announced our batch support for EKS. EKS is our managed Kubernetes offering at AWS. And batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS Batch offering is that we can dynamically scale based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work, they're only paying for the instances while they're working. And then as the queue depth starts to drop, and the number of jobs waiting in the queue starts to drop, we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >>So when you have a Kubernetes cluster, does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing? Or does the nature of batch processing mean that, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here, and people are talking about lengths of cables in order to improve performance.
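The queue-depth-driven scaling Ian describes, capacity that grows while jobs wait and shrinks back to zero as the queue drains, can be sketched as a toy model. This is an illustration of the idea only, not how AWS Batch is actually implemented; the one-job-per-node assumption and the node cap are made up:

```python
# Toy model of queue-depth-driven autoscaling: desired capacity
# tracks the number of waiting jobs, capped at a maximum, and
# returns to zero when the queue is empty. Purely illustrative;
# this is not the actual AWS Batch scaling algorithm.

def desired_nodes(queue_depth: int, jobs_per_node: int = 1, max_nodes: int = 1000) -> int:
    """Scale to enough nodes for the waiting jobs (capped), zero when idle."""
    if queue_depth <= 0:
        return 0  # idle queue -> no instances running, nothing billed
    needed = -(-queue_depth // jobs_per_node)  # ceiling division
    return min(needed, max_nodes)

print(desired_nodes(0))     # 0
print(desired_nodes(2500))  # 1000  (capped at max_nodes)
print(desired_nodes(7, 4))  # 2
```

The "pay only while working" property in the interview corresponds to the idle case returning zero: when no jobs are queued, no capacity is held.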
So what does that look like when you peel back the cover and look at it physically, not just logically? AWS is everywhere, but physically, what does that look like? >>Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are entirely within a single region, where they could have a portion of, say, the traditional HPC workflow within that region, as well as the batch, and they're saving off the results, say, to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to S3 object storage for a little lower-cost storage solution. Or you can have customers that have kind of a multi-region orchestration layer, where they say, "You know what? I've got a portion of my workflow that occurs over on the other side of the country, and I replicate my data between the East Coast and the West Coast just based upon business needs, and I want to have that available to customers over there. And so I'll do a portion of it on the East Coast, a portion of it on the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >>So is the intersection of Kubernetes with HPC relatively new? I know you're just announcing it. >>It really is. I think we've seen a growing perspective. I mean, Kubernetes has been, for a long time, kind of eating everything, right, in the enterprise space? And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer to manage my HPC infrastructure and another one to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes, EKS, on AWS. >>Last month you announced general availability of Trainium, a chip that's optimized for AI training.
Talk about what's special about that chip, or how it is customized for training workloads. >>Yeah, what's unique about Trainium is you'll see 40% better price performance over any other GPU available in the AWS cloud. And so we've really geared it to be the most price-performant of options for our customers. And that's what we like about the silicon team that was part of that Annapurna acquisition: it has really enabled us to have this differentiation and to not just be innovating at the software level, but across the entire stack. That Annapurna Labs team develops our network cards, they develop our Arm chips, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiation from other vendors. And what Trainium allows you to do is perform similar workloads, just at a better price-performance point. >>And you also have a chip several years older, called Inferentia... >>Um-hmm. >>...which is for inferencing. What is the difference between them? I mean, when would a customer use one versus the other? How would you move the workload? >>What we've seen is customers traditionally have looked for a certain class of machine for the inference portion of their workload, more of a compute type that is not as accelerated or as heavy as what you would need Trainium for. So when they do the training, they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines, with Inferentia being there to support the inference portions of their workflow, and Trainium to do that kind of heavy-duty training work. >>And then you advise them on how to migrate their workloads from one to the other? And once the model is trained, would they switch to an Inferentia-based instance? >>Definitely, definitely.
We help them work through what the design of that workflow looks like. Some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for more of a professional services engagement, to say, "Hey, can you come in and help me work through how I might modify my workflow to take full advantage of these resources?" >>The HPC world has been somewhat slower than commercial computing to migrate to the cloud because... >>You're very polite. (panelists all laughing) >>Latency issues; they want to control the workload; I mean, there are even issues with moving large amounts of data back and forth. What do you say to them? I mean, what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >>Well, I mean, to be fair, I started at AWS five years ago. And I can tell you, when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing? Wait, it's Amazon Web Services. You care about the web; can you actually handle supercomputing workloads?" Now, the thing that very few people appreciated is that yes, we could. Even at that time, in 2017, we had customers that were performing HPC workloads. Now, that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers migrating their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our Elastic Fabric Adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP, and for their highly demanding Open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could.
We didn't have Amazon FSx for Lustre, our managed Lustre offering for a high-performance, POSIX-compliant file system, which is kind of the key to a large portion of HPC workloads: you have to have a high-performance file system. We had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those frictions to adoption. I mean, one of the key ones: we had an open-source toolkit that was jointly developed by Intel and AWS called CfnCluster that customers were using to even instantiate their clusters. And now we've migrated that all the way to a fully functional, supported service at AWS called AWS ParallelCluster. And so you've seen, over those past five years, we've had to develop, we've had to grow, we've had to earn the trust of these customers and say, come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, "what are you doing walking around the show," to "Okay, I'm not sure I get it. I need to look at it. Okay, now, oh, it needs to be a part of my architecture, but the standard questions: is it secure? Is it price performant? How does it compare to my on-prem?" And really, culturally, a lot of it is just getting IT administrators used to the fact that we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware to now learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures.
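The ParallelCluster workflow mentioned here is driven by a declarative cluster configuration. As a rough, hedged sketch (the field names below follow the general shape of a ParallelCluster 3 config, but the exact schema lives in the AWS docs, and the subnet, instance types, and sizes are placeholders), a minimal Slurm cluster with EFA networking and an FSx for Lustre mount might look like this, modeled as a Python dict:

```python
import json

# Illustrative sketch of a ParallelCluster-style configuration; consult the
# official AWS ParallelCluster documentation for the authoritative schema.
cluster_config = {
    "Region": "us-east-1",
    "Image": {"Os": "alinux2"},
    "HeadNode": {
        "InstanceType": "c5.xlarge",
        "Networking": {"SubnetId": "subnet-PLACEHOLDER"},
    },
    "Scheduling": {
        "Scheduler": "slurm",
        "SlurmQueues": [{
            "Name": "compute",
            "ComputeResources": [{
                "Name": "hpc-nodes",
                "InstanceType": "hpc6a.48xlarge",
                "MinCount": 0,              # scale to zero when idle
                "MaxCount": 64,
                "Efa": {"Enabled": True},   # low-latency interconnect
            }],
        }],
    },
    "SharedStorage": [{
        "MountDir": "/fsx",
        "Name": "scratch",
        "StorageType": "FsxLustre",
        "FsxLustreSettings": {"StorageCapacity": 1200},
    }],
}

print(json.dumps(cluster_config, indent=2))
```

The point of the sketch is the shape of the workflow: a single declarative file stands in for the racking, cabling, and scheduler setup an on-prem administrator would do by hand.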
And so I'd say it's a little bit of a combination of a cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the market segment where we needed to, innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story, though. I mean, you have Outposts. We don't hear a lot of talk about Outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into Outposts as well? Will that essentially become the supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future holds, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. All those put together gives you a high-performance computer, right? And whether you want it to be resident in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >> So to be clear, that's not available now, but that is something that could be made available? >> Outposts are available right now that have the services that you need. >> All these capabilities? >> Often a move to cloud, an impetus behind it, comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment can be very, very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time? And then a fundamental question: are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run.
So that would sort of conjure, in the imagination, multi-tenancy. What does that look like? >> Definitely, and let me start with your second part first- >> Yeah. >> That's been a core area within AWS: we do not say, okay, we're going to carve out this supercomputer and then allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete, we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow, for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part, talking to customers and hearing, one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that still exist, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? Can you give me the information that I need, both for a price-for-performance and for an operational support model, and really help me be an internal advocate within my environment to explain how my resources can be operated proficiently within the AWS cloud?" And a lot of times it's, let's just take your application, a subset of your applications, and let's benchmark 'em.
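The dynamic, multi-tenant batch allocation described above, nodes returned to a shared pool the moment a job finishes and immediately reusable by another tenant, can be sketched in a few lines. This is a toy simulation, not AWS Batch itself; the job sizes and the greedy policy are invented for illustration:

```python
import heapq

def schedule_batch(jobs, total_nodes):
    """Toy greedy simulation of multi-tenant batch scheduling.
    jobs: list of (tenant, nodes_needed, duration), submitted in order.
    Nodes go back into the shared pool the moment a job finishes, so a
    different tenant can immediately reuse them."""
    free = total_nodes
    running = []   # min-heap of (finish_time, nodes_held)
    now = 0
    schedule = []
    for tenant, need, duration in jobs:
        # Block until enough nodes have been released back to the pool.
        while free < need:
            finish, released = heapq.heappop(running)
            now = max(now, finish)
            free += released
        heapq.heappush(running, (now + duration, need))
        free -= need
        schedule.append((tenant, now, now + duration))
    return schedule

# Three tenants share an 8-node pool; tenant C reuses B's nodes at t=5.
print(schedule_batch([("A", 4, 10), ("B", 4, 5), ("C", 4, 3)], total_nodes=8))
```

In the example run, jobs A and B start immediately, and C starts the instant B's four nodes are released, which is exactly the reuse the interview describes.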
And really, one of the key things is that AWS is a data-driven environment. And so when you take that data, you can help a customer say, "Let's not just look at hypothetical, synthetic benchmarks. Let's take the actual LS-DYNA code that you're running, perhaps. Let's take the OpenFOAM code that you're running currently in your on-premises workloads, and let's run it on the AWS cloud and let's see how it performs." And then we can take that back to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where, actually, you can capitalize a longer-term purchase at AWS, so it doesn't have to be OpEx, depending upon the accounting models you want to use. We do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true-ups and make sure that they have insight into what they're doing. I think one of the boogeymen is, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the cost visibility, the cost controls, to where you feel like, as an HPC administrator, you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of that cost visibility and those controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, what you were spending on-premises." They went, "Oh, I didn't realize that."
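The CapEx-amortization-versus-OpEx comparison being walked through here can be made concrete with a little arithmetic. All dollar figures below are invented for illustration; the shape of the calculation, not the numbers, is the point:

```python
def on_prem_cost_per_core_hour(capex, annual_opex, lifetime_years,
                               cores, utilization):
    """Amortize the up-front purchase plus operating cost over the
    core-hours the cluster actually delivers. Low utilization makes
    every delivered core-hour more expensive."""
    delivered = cores * 8760 * lifetime_years * utilization  # 8760 h/year
    return (capex + annual_opex * lifetime_years) / delivered

# Hypothetical numbers, purely illustrative:
onprem = on_prem_cost_per_core_hour(
    capex=1_000_000, annual_opex=100_000, lifetime_years=5,
    cores=1_000, utilization=0.6)
cloud = 0.05  # assumed on-demand price per core-hour; paid only when used

print(f"on-prem effective: ${onprem:.4f}/core-hour vs cloud: ${cloud:.4f}")
```

This is the sense in which "on-premises is free" breaks down: once the purchase is amortized over the hours actually used, an idle on-prem cluster raises the effective price of every job that does run.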
And so I think that's part of a cultural thing: in HPC, the question was, well, on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me. You said you were here in 2017, and people said, AWS, web, what are you even doing here? Now, in 2022, you're talking in terms of migrating to cloud. Paul mentioned Outposts. Let's say that a customer says, "Hey, I'd like you to put a thousand-node cluster in this data center that I happen to own, but from my perspective, I want to interact with it just like it's in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a colo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that is on-prem versus off-prem, what is that? What I just described, is that cloud? And in five years, are you no longer going to be talking about migrating to cloud, because people will go, "What do you mean, migrating to cloud? What are you even talking about? What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of, it doesn't matter, as long as it meets latency and sovereignty requirements? So that's your prediction; we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud, or will that be something from the past? >> In five years, I still think there will be a component.
I think the majority assumption will be that things are cloud-native and you start in the cloud, and that there is perhaps an aspect of that that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers saying, "Okay, I can see the future; I can see that I'm shrinking my footprint." And you can see them still saying, "I'm not sure how small that beachhead will be, but right now I want to at least say that I'm going to operate in that hybrid environment." And so I'd say, again, given the pace of this community, in five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that Outpost sitting in someone's data center? I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no, we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean, cloud? It's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage, with my co-host Paul Gillin. Thanks again for joining us. Stay tuned; after this short break, we'll be back with more action. (lively music)
Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22
>>Welcome back to theCUBE's coverage of Supercomputing Conference 2022, otherwise known as SC22, here in Dallas, Texas. This is day three of our coverage, the final day of coverage here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillin. How's it going, Paul? >>Hi, Dave. It's going good. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to theCUBE. >>Thanks a lot. Thanks a lot. >>Paul, I know you're chomping at the bit. >>You have incredible credentials, over 500 papers published. The impact that you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high-performance computing platform that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I mean, I started with this conference in 2001. That was the first time I came. It's very coincidental: if you remember, the InfiniBand networking technology was introduced in October of 2000. So in my group, we were working on MPI for Myrinet and Quadrics. Those are the old technologies, if you can recollect. When InfiniBand came, we were the very first ones in the world to really jump in. Nobody knew how to use InfiniBand in an HPC system. So that's how the MVAPICH project was born. And in fact, at Supercomputing 2002, on this exhibition floor in Baltimore, we had the first demonstration: the open-source MVAPICH actually running on an eight-node InfiniBand cluster. And that was a big challenge. But now, over the years, we have continuously worked with all the InfiniBand vendors and the MPI Forum.
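As an aside for readers new to the topic: MVAPICH implements the MPI standard, whose core primitive is point-to-point send/receive between ranks. Here is a toy single-machine model of that pattern, using Python threads with per-rank mailboxes in place of a real interconnect (for the real thing you would use an MPI library such as MVAPICH, typically via mpi4py in Python):

```python
import queue
import threading

def pingpong(n_msgs=3):
    """Two 'ranks' exchange messages MPI-style: put() stands in for
    MPI_Send, and get() stands in for a blocking MPI_Recv on the
    rank's own mailbox."""
    mailbox = [queue.Queue(), queue.Queue()]
    received = []  # replies observed by rank 0

    def rank0():
        for i in range(n_msgs):
            mailbox[1].put(("ping", i))        # send to rank 1
            received.append(mailbox[0].get())  # wait for the reply

    def rank1():
        for _ in range(n_msgs):
            tag, i = mailbox[1].get()          # receive from rank 0
            mailbox[0].put(("pong", i))        # reply

    threads = [threading.Thread(target=rank0), threading.Thread(target=rank1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received

print(pingpong())  # [('pong', 0), ('pong', 1), ('pong', 2)]
```

The run is deterministic because rank 0 blocks on each reply before sending the next message, which mirrors the synchronous send/recv handshake at the heart of MPI point-to-point communication.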
>>We are a member of the MPI Forum, and we also work with all the other network interconnects. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members, working nonstop, continuously bringing not only performance but scalability. If you see now, InfiniBand is being deployed in 8,000- and 10,000-node clusters, and many of these clusters actually use our software stack, MVAPICH. So we have done a lot. Our focus is: we first do research, because we are in academia. We come up with good designs, we publish, and in six to nine months we actually bring it to the open-source version, and people can just download and then use it. And that's how it's currently being used by more than 3,000 organizations in 90 countries. But the interesting thing is happening, your second part of the question. Now, as you know, the field is moving into not just HPC, but AI and big data, and we have that support. This is where we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but also all other workloads can take advantage of it. >>We have seen libraries become a critical development platform supporting AI, TensorFlow and PyTorch, and the emergence of some sort of default languages that are driving the community. How important are these frameworks to the progress being made in the HPC world? >>Yeah, those are great. I mean, PyTorch and TensorFlow, those are now the bread and butter of deep learning and machine learning. Am I right? But the challenge is that people use these frameworks, but continuously models are becoming larger. You need very fast turnaround time. So how do you train faster? How do you do inferencing faster?
So this is where HPC comes in, and exactly what we have done is we have linked PyTorch to our MVAPICH library, because now you see the MPI library is running on million-core systems. Now PyTorch and TensorFlow can also be scaled to those large numbers of cores and GPUs. So we have actually done that kind of tight coupling, and that helps the researchers to really take advantage of HPC. >>So if a high school student is thinking about computer science, looking for a place, looking for a university, the Ohio State University is world renowned, widely known, but talk about what that looks like on a day-to-day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like? And is that a good pitch for people to consider the university? >>Yes. I mean, from a university perspective, by the way, the Ohio State University is one of the largest single campuses in the US, one of the top three, top four. We have 65,000 students. >>Wow. >>It's one of the very largest campuses. And especially within computer science, where I am located, high-performance computing is a very big focus. And we are one of the, again, the top schools all over the world for high-performance computing. And we also have a very big strength in AI. So we always encourage the new students who like to really work on state-of-the-art solutions to get exposed to the concepts, principles, and also practice. So we encourage those people, and we can really bring them that kind of experience. And many of my past students and staff, they're all in top companies now, and have become big managers. >>How long did you say you've been at this? >>31 years. >>31 years. So you've had people who weren't alive when you were already doing this stuff? >>That's correct. >>They then were born. >>Yes.
They then grew up, went to university and graduate school, and now they're on- >>Now they're in many top companies, national labs, and universities all over the world. So they have been trained very well. >>Well, you've touched a lot of lives, sir. >>Yes, thank you. >>We've seen really a burgeoning of AI-specific hardware emerge over the last five years or so, and architectures going beyond just CPUs and GPUs to ASICs and FPGAs and accelerators. Does this excite you? I mean, are there innovations that you're seeing in this area that you think have great promise? >>Yeah, there is a lot of promise. I think every time in supercomputing technology you see there is sometimes a big barrier jump: some new, disruptive technology comes, and then you move to the next level. So that's what we are seeing now. A lot of these AI chips and AI systems are coming up, which take you to the next level. But the bigger challenge is whether it is cost-effective or not, and can that be sustained longer? And this is where commodity technology comes in, because commodity technology tries to take you far longer. So we might see, with all of these, like Gaudi, a lot of new chips coming up, can they really bring down the cost? If that cost can be reduced, you will see a much bigger push for AI solutions which are cost-effective.
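The tight coupling of deep-learning frameworks to MPI that Dr. Panda described earlier rests on collective operations, most importantly allreduce, which sums gradients across all ranks. Below is a toy, single-process model of the classic ring allreduce; it is illustrative only (a real library such as an MPI implementation does this with actual sends over the interconnect):

```python
def ring_allreduce(rank_vectors):
    """Each rank holds a vector; after the collective, every rank holds the
    elementwise sum. Chunks travel around a ring: a reduce-scatter phase
    accumulates partial sums, then an allgather phase circulates the
    finished chunks. Assumes len(vector) is divisible by the rank count."""
    n = len(rank_vectors)
    dim = len(rank_vectors[0])
    chunk = dim // n
    data = [list(v) for v in rank_vectors]

    def seg(r, c):
        return data[r][c * chunk:(c + 1) * chunk]

    # Reduce-scatter: in step s, rank r sends chunk (r - s) mod n to r+1.
    for s in range(n - 1):
        sends = [((r + 1) % n, (r - s) % n, list(seg(r, (r - s) % n)))
                 for r in range(n)]
        for dst, c, payload in sends:
            for i, x in enumerate(payload):
                data[dst][c * chunk + i] += x

    # Allgather: in step s, rank r forwards its finished chunk (r + 1 - s) mod n.
    for s in range(n - 1):
        sends = [((r + 1) % n, (r + 1 - s) % n, list(seg(r, (r + 1 - s) % n)))
                 for r in range(n)]
        for dst, c, payload in sends:
            data[dst][c * chunk:(c + 1) * chunk] = payload

    return data

print(ring_allreduce([[1, 2], [3, 4]]))  # [[4, 6], [4, 6]]
```

Each rank sends and receives only one chunk per step, which is why this pattern scales to the very large core and GPU counts mentioned in the interview: bandwidth use per rank is independent of the number of ranks.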
But then over the years people have been trying to see how those RDMA mechanisms can be used for ethernet. And then Rocky has been born. So Rocky has been also being deployed. But besides these, I mean now you talk about Slingshot, the gray slingshot, it is also an ethernet based systems. And a lot of those RMA principles are actually being used under the hood. Okay. So any modern networks you see, whether it is a Infin and Rocky Links art network, rock board network, you name any of these networks, they are using all the very latest principles. And of course everybody wants to make it commodity. And this is what you see on the, on the slow floor. Everybody's trying to compete against each other to give you the best performance with the lowest cost, and we'll see whoever wins over the years. >>Sort of a macroeconomic question, Japan, the US and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>Big, big thing, significantly, right? We are saying that I think for the last five to seven years, I think we lost that lead. But now with the frontier being the number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical not only for fundamental research, but for national security trying to really move the US to the leading edge. So I hope us will continue to lead the trend for the next few years until another new system comes out. >>And one of the gating factors, there is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better to, for data science careers? >>Yeah, I mean that is also very important. 
I mean, we always call it a pipeline, you know. That means, at the PhD level we are expecting this, but we want students to get exposed to many of these concepts from the high school level. And things are actually changing. I mean, these days I see a lot of high school students who know Python, how to program in Python, how to program in C, object-oriented things. They're even being exposed to AI at that level. So I think that is a very healthy sign. And in fact, from the Ohio State side, we are always engaged with K-12 in many different programs, and then we gradually try to take them to the next level. And I think we need to accelerate that in a very significant manner, because we need that kind of workforce. It is not just about building a number-one system, but how do we really utilize it? How do we utilize that science? How do we propagate that to the community? We need all of these trained personnel. So in fact, in my group we are also involved in a lot of cyber-training activities for HPC professionals. In fact, today there is a BoF at, yeah, I think 12:15 to 1:15, where we'll be talking more about that. >>About education. >>Yeah, cyber training: how do we do it for professionals? So we had funding, together with my co-PI, Dr. Karen Tomko from the Ohio Supercomputer Center. We have a grant from the National Science Foundation to really educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge; they don't get the time to learn, and the field is moving so fast. So this is how it has been. We got the initial funding, and in fact, the first time we advertised, we got 120 applications in 24 hours. 24 hours! We couldn't even take all of them. So we are trying to offer that in multiple phases. So there is a big need for those kinds of training sessions to take place. I also offer a lot of tutorials at all different conferences. We had a high-performance networking tutorial; here we have a high-performance deep learning tutorial and a high-performance big data tutorial. So I've been offering tutorials, even at this conference, since 2001. >>So in the last 31 years at the Ohio State University, as my friends remind me it is properly called, you've seen the world get a lot smaller. Yes? Because 31 years ago, Ohio, roughly in the middle of North America and the United States, was not as connected as it is now to everywhere else in the globe. It kind of boggles the mind when you think of that progression over 31 years. But globally, and we talk about the world getting smaller, we're sort of in the thick of the celebratory season, where many groups of people exchange gifts for varieties of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver the world, what would that be? What would the first thing be? This is like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>Yeah, it's very hard to answer in one way, but let me bring a little bit different context and I can answer this. I talked about the MVAPICH project and all, but recently, last year actually, we got awarded an NSF AI Institute award. It's a $20 million award. I am the overall PI, but there are 14 universities involved. >>And what is that institute? >>Oh, it's ICICLE. You can just go to icicle.ai. And that aligns with exactly what you are trying to do: how to bring a lot of AI to the masses, democratizing AI. That's the overall goal of this institute. We have three verticals we are working on; one is digital agriculture. So that will be, like, my first wish.
How do you take HPC and AI to agriculture? The world just crossed 8 billion people. >>Yeah, that's right. >>We need continuous food and food security. How do we grow food with the lowest cost and with the highest yield? >>Water consumption. >>Water consumption. Can we minimize the water consumption, or the fertilization? Don't do it blindly; the technologies are out there. Like, let's say there is a wheat field. A traditional farmer sees that, yeah, there is some disease; they will just go and spray pesticides. It is not good for the environment. Now I can fly a drone, get images of the field in real time, check them against the models, and then it'll tell me that, okay, this part of the field has disease one, this part of the field has disease two, and I indicate to the tractor or the sprayer, saying, okay, spray only pesticide one here, pesticide two there. That has a big impact. So this is what we are developing in that NSF AI institute, ICICLE. We have also chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation and climate change: how do you understand how the animals move? Can we learn from them and then see how human beings need to act in the future? And the third one is food insecurity and logistics: smart food distribution. So these are our three broad goals in that institute. How do we develop cyberinfrastructure from below, combining HPC, AI, and security? We have a large team; as I said, there are 40 PIs and 60 students. We are a hundred-member team, working together. So that will be my wish: how do we really democratize AI? >>Fantastic. I think that's a great place to wrap the conversation here on day three at Supercomputing Conference 2022 on theCUBE. It was an honor, Dr.
Panda, working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result: improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds; it's about serving humanity. Maybe a little too profound a statement? I would argue not even close. I'm Dave Nicholson with theCUBE, with my co-host Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from theCUBE at Supercomputing 2022, coming up shortly. >>Thanks a lot.
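The targeted-spraying idea Dr. Panda sketched above, fly a drone, run imagery through a model, then spray only the diseased cells with the matching pesticide, reduces to a simple targeting computation once the model's detections exist. A toy sketch (the field size and detections are invented; a real pipeline would get detections from an image model):

```python
def spray_plan(rows, cols, detections):
    """detections: {(row, col): disease_id} from a (hypothetical) image model.
    Returns the targeted plan plus the number of cells treated under
    targeted vs blanket spraying; the saving is the whole point of the
    precision approach."""
    plan = [(cell, f"pesticide-{d}") for cell, d in sorted(detections.items())]
    return plan, len(plan), rows * cols

# A 10x10 field; the model flags three diseased cells, two distinct diseases.
plan, targeted, blanket = spray_plan(10, 10, {(2, 3): 1, (2, 4): 1, (7, 7): 2})
print(f"targeted cells: {targeted} vs blanket: {blanket}")
print(plan)
```

In this hypothetical run, targeted spraying treats 3 cells instead of all 100, and each treated cell gets the pesticide matched to its detected disease, exactly the environmental and cost win described in the interview.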
SUMMARY :
Welcome back to The Cube's coverage of Supercomputing Conference 2022, And we have a wonderful guest with us this morning, Dr. Thanks a lot to But I wanted to talk to you specifically about a product project you've So in my group, we were working on NPI for So we have steadily evolved this project over the last 21 years. that are driving the community. So we have actually done that kind of a tight coupling and that helps the research And is, and is that, and is that a good pitch to for, So, so we encourage those people that wish you can really bring you those kind of experience. you were already doing this stuff? all over the world. Thank this area that you think have, have great promise? I think every time you see now supercomputing technology, with the initial standards for Infin band, you know, Intel was very, very, was really big in that, And this is what you see on the, Sort of a macroeconomic question, Japan, the US and China have been leapfrogging each other for a number the number one, starting from the June ranking, I think we are getting that leadership back. And one of the gating factors, there is a shortage of people with data science skills. And I think we need to accelerate also that in a very significant and in fact, the first time we advertised in 24 hours, we got 120 application, that's pro that's, I i it kind of boggles the mind when you think of that progression over 31 years, I am the overall pi, And that lies with what exactly what you are trying to do, to the tractor or the sprayer saying, okay, spray only pesticide one, you have pesticide two here. I think that's a great place to wrap the conversation here On
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Paul Gillum | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
October of 2000 | DATE | 0.99+ |
Paul | PERSON | 0.99+ |
NASA Science Foundation | ORGANIZATION | 0.99+ |
2001 | DATE | 0.99+ |
Baltimore | LOCATION | 0.99+ |
8,000 | QUANTITY | 0.99+ |
14 universities | QUANTITY | 0.99+ |
31 years | QUANTITY | 0.99+ |
20 million | QUANTITY | 0.99+ |
24 hours | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Karen Tom Cook | PERSON | 0.99+ |
60 students | QUANTITY | 0.99+ |
Ohio State University | ORGANIZATION | 0.99+ |
90 countries | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
Earth | LOCATION | 0.99+ |
Panda | PERSON | 0.99+ |
today | DATE | 0.99+ |
65,000 students | QUANTITY | 0.99+ |
3,200 organizations | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
Python | TITLE | 0.99+ |
United States | LOCATION | 0.99+ |
Dallas, Texas | LOCATION | 0.99+ |
over 500 papers | QUANTITY | 0.99+ |
June | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
more than 32 organ | QUANTITY | 0.99+ |
120 application | QUANTITY | 0.99+ |
Ohio | LOCATION | 0.99+ |
more than 3000 orange | QUANTITY | 0.99+ |
first ways | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
40 PIs | QUANTITY | 0.99+ |
Asics | ORGANIZATION | 0.99+ |
MPI Forum | ORGANIZATION | 0.98+ |
China | ORGANIZATION | 0.98+ |
Two | QUANTITY | 0.98+ |
Ohio State University | ORGANIZATION | 0.98+ |
8 billion people | QUANTITY | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
HP | ORGANIZATION | 0.97+ |
Dr. | PERSON | 0.97+ |
over 20 years | QUANTITY | 0.97+ |
US | ORGANIZATION | 0.97+ |
Finman | ORGANIZATION | 0.97+ |
Rocky | PERSON | 0.97+ |
Japan | ORGANIZATION | 0.97+ |
first time | QUANTITY | 0.97+ |
first demonstration | QUANTITY | 0.96+ |
31 years ago | DATE | 0.96+ |
Ohio Super Center | ORGANIZATION | 0.96+ |
three broad goals | QUANTITY | 0.96+ |
one wish | QUANTITY | 0.96+ |
second part | QUANTITY | 0.96+ |
31 | QUANTITY | 0.96+ |
Cube | ORGANIZATION | 0.95+ |
eight | QUANTITY | 0.95+ |
over 31 years | QUANTITY | 0.95+ |
10,000 node clusters | QUANTITY | 0.95+ |
day three | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
INFIN | EVENT | 0.94+ |
seven years | QUANTITY | 0.94+ |
Dhabaleswar “DK” Panda | PERSON | 0.94+ |
three | QUANTITY | 0.93+ |
S f I institute | TITLE | 0.93+ |
first thing | QUANTITY | 0.93+ |
David Schmidt, Dell Technologies and Scott Clark, Intel | SuperComputing 22
(techno music intro) >> Welcome back to theCube's coverage of SuperComputing Conference 2022. We are here at day three covering the amazing events that are occurring here. I'm Dave Nicholson, with my co-host Paul Gillin. How's it goin', Paul? >> Fine, Dave. Winding down here, but still plenty of action. >> Interesting stuff. We've got a full day of coverage, and we're having really, really interesting conversations. As we sort of wrap things up at Supercomputing 22 here in Dallas, I've got two very special guests with me, Scott from Intel and David from Dell, to talk about, yes, supercomputing, but guess what? We've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome, gentlemen. >> Thanks so much for having us. >> Thanks for having us. >> So, let's start with you, David. First of all, explain the relationship in general between Dell and Intel. >> Sure, so obviously Intel's been an outstanding partner. We built some great solutions over the years. I think the market reflects that. Our customers tell us that. The feedback's strong. The products you see out here this week at Supercompute, you know, put that on display for everybody to see. And then as we think about AI and machine learning, there's so many different directions we need to go to help our customers deliver AI outcomes. Right, so we recognize that AI has kind of spread outside of just the confines of everything we've seen here this week. And now we've got really accessible AI use cases that we can explain to friends and family. We can talk about going into retail environments and how AI is being used to track inventory, to monitor traffic, et cetera. But really what that means to us as a bunch of hardware folks is we have to deliver the right platforms and the right designs for a variety of environments, both inside and outside the data center.
And so if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever that's designed to go into Edge environments, but is also built for those Edge AI use cases that support GPUs. It supports AI on the CPU as well. And so there's a lot of really compelling platforms that we're starting to talk about, have already been talking about, and it's going to really enable our customers to deliver AI in a variety of ways. >> You mentioned AI on the CPU. Maybe this is a question for Scott. What does that mean, AI on the CPU? >> Well, as David was talking about, we're just seeing this explosion of different use cases. And some of those are on the Edge, some of them in the Cloud, some of them on-prem. But within those individual deployments, there's often different ways that you can do AI, whether that's training or inference. And what we're seeing is that a lot of times the memory locality matters quite a bit. You don't want to have to pay necessarily a cost going across the PCI Express bus, especially with some of our newer products like the CPU Max series, where you can have a huge amount of high bandwidth memory just sitting right on the CPU. Things that traditionally would have been accelerator-only can now live on a CPU, and that includes the inference side. We're seeing some really great things with images, where you might have a giant medical image that you need to be able to do extremely high resolution inference on, or even text, where you might have a huge corpus of extremely sparse text that you need to be able to randomly sample very efficiently. >> So how are these needs influencing the evolution of Intel CPU architectures? >> So, we're talking to our customers. We're talking to our partners. This presents both an opportunity and a challenge with all of these different places that you can put these great products, as well as applications.
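Scott's memory-locality point can be made concrete with a back-of-envelope sketch. The link rates below are illustrative assumptions, not Intel or Dell figures:

```python
# Back-of-envelope: cost of moving a large inference payload across PCIe
# versus reading it from on-package high bandwidth memory.
# All rates here are illustrative assumptions, not measured values.

def transfer_seconds(bytes_moved: float, rate_gbytes: float) -> float:
    """Time to move a payload at a given rate (rate in GB/s)."""
    return bytes_moved / (rate_gbytes * 1e9)

# A large 3D medical volume: 512 x 512 x 512 voxels, 4 bytes each (~0.5 GB).
volume_bytes = 512 * 512 * 512 * 4

pcie_delivered = 28.0    # GB/s, assumed delivered rate for a fast PCIe link
hbm_on_cpu = 800.0       # GB/s, assumed on-package HBM read rate

t_pcie = transfer_seconds(volume_bytes, pcie_delivered)
t_hbm = transfer_seconds(volume_bytes, hbm_on_cpu)

print(f"PCIe copy: {t_pcie * 1e3:.1f} ms, HBM read: {t_hbm * 1e3:.1f} ms")
```

At these assumed rates, just shipping the volume over PCIe costs roughly 30x what reading it out of on-package memory does, which is the argument for keeping inference on the CPU when the data already lives there.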
And so we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is we're finding where those choke points are, and how we can make the biggest difference. Sometimes there are generational leaps, and I know David can speak to this, that can be huge from one system to the next, just because everything's accelerated on the software side, the hardware side, and the platforms themselves. >> That's right, and we're really excited about that leap. If you take what Scott just described, we've been writing white papers, our team with Scott's team, we've been talking about those types of use cases, doing large image analysis and leveraging system memory, leveraging the CPU to do that, we've been talking about that for several generations now. Right, going back to Cascade Lake, going back to what we would call 14th-generation PowerEdge. And so now as we prepare and continue to unveil, kind of we're in launch season, right, you and I were talking about how we're in launch season. As we continue to unveil and launch more products, the performance improvements are just going to be outstanding and we'll continue that evolution that Scott described. >> Yeah, I'd like to applaud Dell just for a moment for its restraint. Because I know you could've come in and taken all of the space in the convention center to show everything that you do. >> Would have loved to. >> In the HPC space. Now, among the worst kept secrets on earth at this point, vying for number one place, is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I know, I think allegedly we're getting close. What can you share with us on that front? And I appreciate it if you can't share a ton of specifics, but where are we going? David just alluded to it.
>> Yeah, as David talked about, we've been working on some of these things for many years. And it's just, this momentum is continuing to build, both with respect to some of our hardware investments. We've unveiled some things both here, both on the CPU side and the accelerator side, but also on the software side. oneAPI is gathering more and more traction and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's this type of thing where it's not linear. It all builds on itself. And we've seen some of these investments that we've made for the better part of a decade starting to bear fruit, but it's not just a one-time thing. It's just going to continue to roll out, and we're going to be seeing more and more of this. >> So I want to follow up on something that you mentioned. I don't know if you've ever heard the Charlie Brown saying that sometimes the most discouraging thing can be to have immense potential. Because between Dell and Intel, you offer so many different versions of things from a fit-for-function perspective. As a practical matter, how do you work with customers, and maybe this is a question for you, David. How do you work with customers to figure out what the right fit is? >> I'll give you a great example. Just this week, customer conversations, and we can put it in terms of kilowatts per rack, right. How many kilowatts are you delivering at a rack level inside your data center? I've had answers anywhere from five all the way up to 90. There are some that have been a bit higher, but those are probably cases they don't want to talk about, the kind of customers we're meeting with very privately. But the range is really, really large, right, and there's a variety of environments. Customers might be ready for liquid today. They may not be ready for it. They may want to maximize air cooling.
Those are the conversations, and then of course it all maps back to the workloads they wish to enable. AI is an extremely overloaded term. We don't have enough time to talk about all the different things that tuck under that umbrella, but for the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today and where they need to go. And I just love that five-to-90 example, because not every customer has an identical cookie-cutter environment, so we've got to have the right platforms, the right solutions, for the right workloads, for the right environments. >> So, I'd like to dive in on this power issue, to give people who are watching an idea. Because we say five kilowatts, 90 kilowatts, and people are like, oh wow, hmm, what does that mean? 90 kilowatts is more than 100 horsepower if you want to translate it over. It's a massive amount of power, if you think in EV terms. You know, a hairdryer's around a kilowatt, 1,000 watts, right, so five kilowatts is about five hairdryers. But the point is, 90 kilowatts in a rack, that's insane. That's absolutely insane. The heat that that generates has got to be insane, and so it's important. >> Several houses in the size of a closet. >> Exactly, exactly. Yeah, in a rack, I explain to people, you know, it's like a refrigerator. But, so in the arena of thermals, I mean, is that something during the development of next gen architectures, is that something that's been taken into consideration? Or is it just a race to die size? >> Well, you definitely have to take thermals into account, as well as just the power consumption itself. I mean, people are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem. There are many paths up that mountain, and it's about choosing that right path.
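As a quick sanity check on that horsepower translation (assuming 1 hp ≈ 745.7 W and a roughly 1 kW hairdryer):

```python
# Sanity-checking the rack power comparisons quoted above.
# Assumptions: 1 mechanical horsepower = 745.7 W; a hairdryer draws ~1 kW.

WATTS_PER_HP = 745.7
HAIRDRYER_KW = 1.0

def kw_to_hp(kw: float) -> float:
    """Convert kilowatts to mechanical horsepower."""
    return kw * 1000 / WATTS_PER_HP

rack_kw = 90  # top of the 5-to-90 kW range mentioned in the conversation
print(f"{rack_kw} kW ≈ {kw_to_hp(rack_kw):.0f} hp "
      f"≈ {rack_kw / HAIRDRYER_KW:.0f} hairdryers")
```

That works out to about 121 horsepower per rack at the high end, so "more than 100 horsepower" checks out.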
We've talked about this before, having extremely thoughtful partners; we're just not going to combinatorially try every single solution. We're going to try to find the ones that fit that right mold for that customer. And we're seeing more and more people, excuse me, care about this, more and more people wanting to say, how do I do this in the most sustainable way? How do I do this in the most reliable way, given maybe different fluctuations in their power consumption or their power pricing? We're developing more software tools and obviously partnering with great partners to make sure we do this in the most thoughtful way possible. >> Intel put a lot of, made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel. You're based on the West Coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? >> Good question. I guess I can kick this off. So Habana is part of the Intel family now. They've been integrated in. It's been a great journey with them, as some of their products have launched on AWS, and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job, right. Not every problem is a nail, so you need more than just a hammer. And so we have the Xeon series, which is incredibly flexible, can do so many different things. It's what we've come to know and love. On the other end of the spectrum, we obviously have some of these more deep-learning-focused accelerators. And if that's your problem, then you can solve that problem in incredibly efficient ways. The accelerators themselves are somewhere in the middle, so you get that kind of Goldilocks zone of flexibility and power. And depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you.
A combination might work better for you. Hybrid compute starts to become really interesting. Maybe you have something that you need 24/7, but then you only need a burst for certain things. There's a lot of different options out there. >> The portfolio approach. >> Exactly. >> And then what I love about the work that Scott's team is doing, customers have told us this week in our meetings, they do not want to spend developers' time porting code from one stack to the next. They want that flexibility of choice. Everyone does. We want it in our lives, in our everyday lives. They need that flexibility of choice, but there's also an opportunity cost when their developers have to choose between porting some code over from one stack to another or spending time improving algorithms and doing things that actually generate, you know, meaningful outcomes for their business or their research. And so they are, you know, desperately searching, I would say, for that solution and for help in that area, and that's what we're working to enable soon. >> And this is what I love about oneAPI, our software stack, it's open first, heterogeneous first. You can take SYCL code, it can run on competitors' hardware. It can run on Intel hardware. It's one of these things that you have to believe long term, the future is open. Walled gardens, the walls eventually crumble. And we're just trying to continue to invest in that ecosystem to make sure that the developer at the end of the day really gets what they need to do, which is solving their business problem, not tinkering with our drivers. >> Yeah, I actually saw an interesting announcement that I hadn't been tracking. I hadn't been tracking this area. Chiplets, and the idea of an open standard where competitors of Intel from a silicon perspective can have their chips integrated via a universal standard. And basically you had the top three silicon vendors saying, yeah, absolutely, let's work together. Cats and dogs.
>> Exactly, but at the end of the day, it's whatever menagerie solves the problem. >> Right, right, exactly. And of course Dell can solve it from any angle. >> Yeah, we need strong partners to build the platforms to actually do it. At the end of the day, silicon without software is just sand. Sand with silicon is poorly written prose. But without an actual platform to put it on, it's nothing, it's a box that sits in the corner. >> David, you mentioned that 90% of PowerEdge servers now support GPUs. So how is the growth of high performance computing, the demand, influencing the evolution of your server architecture? >> Great question, a couple of ways. You know, I would say 90% of our platforms support GPUs. 100% of our platforms support AI use cases. And it goes back to the CPU compute stack. As we look at how we deliver different form factors for customers, we go back to that range, that power range I mentioned this week, of how do we enable the right air cooling solutions? How do we deliver the right liquid cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it? That's something you'll see as we go into kind of the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling, but liquid cooling as well. >> You want to be more specific? >> We can't unveil everything at Supercompute. We have a lot of great stuff coming up here in the next few months, so. >> It's kind of like being at a great restaurant when they offer you dessert, and you're like, yeah, dessert would be great, but I just can't take any more. >> It's a multi-course meal. >> At this point. Well, as we wrap, I've got one more question for each of you. Same question for each of you.
When you think about high performance computing, supercomputing, all of the things that you're doing in your partnership, driving artificial intelligence, at that tip of the spear, what kind of insights are you looking forward to us being able to gain from this technology? In other words, what cool thing, what do you think is cool out there from an AI perspective? What problem do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? 'Cause it's not the little, it's not the bits and the bobs and the speeds and the feeds, it's what we're going to do with them, so what do you think, David? >> I'll give you an example. And I think, I saw some of my colleagues talk about this earlier in the week, but for me, what we could do in the past two years to enable our customers in a quarantine pandemic environment, we were delivering platforms and solutions to help them do their jobs, help them carry on in their lives. And that's just one example, and if I were to map that forward, it's about enabling that human progress. And it's, you know, you ask the version of me from 20 years ago, you know, if you could imagine some of these things, I don't know what kind of answer you would get. And so mapping forward a decade, two decades, I can go back to that example of hey, we did great things in the past couple of years to enable our customers. Just imagine what we're going to be able to do going forward to enable that human progress. You know, there's great use cases, there's great image analysis. We talked about some. The images that Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That is stuff that feels good when you get out of bed in the morning, to know that you're enabling that type of progress. >> Scott, quick thoughts? >> Yeah, and I'll echo that.
It's not one specific use case, but it's really this wave front of all of these use cases, from the very micro of developing the next drug to finding the next battery technology, all the way up to the macro of trying to have an impact on climate change or even the origins of the universe itself. All of these fields are seeing these massive gains, both from the software, the hardware, the platforms that we're bringing to bear to these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. >> Fantastic, fantastic. Thank you, gentlemen. You heard it here first, Intel and Dell just committed to solving the secrets of the universe by New Year's Eve 2023. >> Well, next Supercompute, let's give us a little time. >> The next Supercompute Convention. >> Yeah, next year. >> Yeah, SC 2023, we'll come back and see what problems have been solved. You heard it here first on theCube, folks. By SC 23, Dell and Intel are going to reveal the secrets of the universe. From here, at SC 22, I'd like to thank you for joining our conversation. I'm Dave Nicholson, with my co-host Paul Gillin. Stay tuned to theCube's coverage of Supercomputing Conference 22. We'll be back after a short break. (techno music)
SUMMARY :
covering the amazing events Winding down here, but So not all of the holiday gifts First of all, explain the and the right designs for What does that mean, AI on the CPU? that you need to be able to and it's no longer the case leveraging the CPU to do that, all of the space in the convention center And I appreciate it if you and the ecosystem is something that you mentioned. And I just love that five to 90 example But the point is, 90 kilowatts to people, you know, And at the end of the day, You're based on the west coast. So Habana is part of the Intel family now. and for help in that area, in that ecosystem to make Chiplets, and the idea of an open standard Exactly, but at the end of the day, And of course Dell can that sits in the corner. the growth of high performance And it goes back to the CPU compute stack. in the next few months, so. when they offer you dessert, and the speeds and the feats, in the morning, to know And at the end of the day, of the universe by New Years Eve 2023. Well, next Supercompute, From here, at SC 22, I'd like to thank you
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Maribel | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
Matt Link | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Indianapolis | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Scott | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Tim Minahan | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Stephanie Cox | PERSON | 0.99+ |
Akanshka | PERSON | 0.99+ |
Budapest | LOCATION | 0.99+ |
Indiana | LOCATION | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
October | DATE | 0.99+ |
India | LOCATION | 0.99+ |
Stephanie | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Chris Lavilla | PERSON | 0.99+ |
2006 | DATE | 0.99+ |
Tanuja Randery | PERSON | 0.99+ |
Cuba | LOCATION | 0.99+ |
Israel | LOCATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Akanksha | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Akanksha Mehrotra | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
September 2020 | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
David Schmidt | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
$45 billion | QUANTITY | 0.99+ |
October 2020 | DATE | 0.99+ |
Africa | LOCATION | 0.99+ |
Kim Leyenaar, Broadcom | SuperComputing 22
(Intro music) >> Welcome back. We're LIVE here from SuperComputing 22 in Dallas. Paul Gillin, for SiliconANGLE in theCUBE, with my guest host Dave... excuse me. And our guest today, this segment, is Kim Leyenaar, who is a storage performance architect at Broadcom. And the topic of this conversation is networking, it's connectivity. I guess, how does that relate to the work of a storage performance architect? >> Well, that's a really good question. So yeah, I have been focused on storage performance for about 22 years. But even if we're talking about just storage, all the components have a really big impact on ultimately how quickly you can access your data. So, you know, the switches, the memory bandwidth, the expanders, just the different protocols that you're using. And a big part of it is actually Ethernet, because as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, so you're telling me that we're just not living in a CPU-centric world now? >> Ha ha ha >> Because it is sort of interesting. When we talk about supercomputing and high performance computing, we're always talking about clustering systems. So how do you connect those systems? Isn't that, isn't that kind of your, your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom.
So what we're seeing is these silos really of, 'hey here's our compute, here's your networking, here's your storage.' And so, how do you put those all together? The thing is interconnectivity. So, that's really what we specialize in. I'm really, you know, I'm really happy to be here to talk about some of the things that that we do to enable high performance computing. >> Paul: Now we're seeing, you know, new breed of AI computers being built with multiple GPUs very large amounts of data being transferred between them. And the internet really has become a, a bottleneck. The interconnect has become a bottle, a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. So we work with a lot of different, there's there's a lot of different standards that we work with to define so that we can make sure that we work everywhere. So even if you're just a dentist's office that's deploying one server, or we're talking about these hyperscalers that are, you know that have thousands or, you know tens of thousands of servers, you know, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we found that, you know with these siloed things, if, if you add more storage but that means we're going to eat up six cores using that it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. So we're offloading it from, you know data security, data protection, you know, we're we do packet sniffing ourselves and things like that. So no longer do we rely on the CPU to do that kind of processing for us but we become very smart devices all on our own so that they work very well in these kind of environments. >> Dave: So how about, give, give us an example. I know a lot of the discussion here has been around using ethernet as the connectivity layer. >> Yes. >> You know, in in, in the past, people would think about supercomputing as exclusively being InfiniBand based. 
>> Ha ha ha. >> But give, give us an idea of what Broadcom is doing in the Ethernet space. What, you know, what are the advantages of using Ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk 5 Ethernet switch. So it's a 400 gig Ethernet switch. And the other thing we announced too was our Thor. So these are our network controllers that also support up to 400 gig each as well. So, those two alone, it just, it's amazing to me how much data we're able to transfer with those. But not only that, they're super, super intelligent controllers too. And then we realized, you know, hey, we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the RoCE standard. So that's one of the things that puts us above InfiniBand, is that Ethernet is ubiquitous, it's everywhere. And InfiniBand is primarily just owned by one or two companies. And, and so, and it's also a lot more expensive. So Ethernet is just, it's everywhere. And now with the, with the RoCE standard, we're working along with, it's, it does what you're talking about much better than, you know, predecessors. >> Tell us about the RoCE standard. I'm not familiar with it. I'm sure some of our listeners are not. What is the RoCE standard? >> Kim: Ha ha ha. So it's RDMA over Converged Ethernet. I'm not a RoCE expert myself, but I am an expert on how to offload the CPU. And so one of the things it does is, instead of using the CPU to transfer the data from, you know, the user space over to the next, you know, server when you're transferring it, we actually will do it ourselves. So we'll handle it ourselves. We will take it, we will move it across the wire, and we will put it in that remote computer. And we don't have to ask the CPU to do anything to get involved in that. So, you know, it's a big savings.
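The offload Kim describes, the NIC placing data directly into a remote application's memory without the host CPU copying anything, can be modeled in miniature. This is purely an illustrative Python toy, not the verbs API; real RoCE NICs do the placement in hardware, and the class and method names here are invented for the sketch:

```python
# Toy model of a CPU-mediated receive versus a one-sided RDMA write.
# In the RDMA case the "NIC" places bytes straight into pre-registered
# application memory and the host CPU never makes a copy. This only
# illustrates the semantics -- real RoCE hardware does this in silicon.

class ToyNic:
    def __init__(self):
        self.cpu_copies = 0  # copies the host CPU had to perform

    def cpu_mediated_recv(self, wire_bytes: bytes, app_buffer: bytearray):
        kernel_buffer = bytes(wire_bytes)                 # DMA into kernel memory
        app_buffer[:len(kernel_buffer)] = kernel_buffer   # CPU copies to user space
        self.cpu_copies += 1

    def rdma_write(self, wire_bytes: bytes, registered_buffer: bytearray):
        # One-sided: the NIC writes straight into the registered region;
        # no host-CPU copy is recorded.
        registered_buffer[:len(wire_bytes)] = wire_bytes

nic = ToyNic()
buf_a, buf_b = bytearray(16), bytearray(16)

nic.cpu_mediated_recv(b"hello via kernel", buf_a)
nic.rdma_write(b"hello via rdma!!", buf_b)

print(bytes(buf_a), bytes(buf_b), "cpu copies:", nic.cpu_copies)
```

Both buffers end up with the payload, but only the kernel-mediated path cost a CPU copy, which is the cycle count RoCE gives back to the application.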
>> Yeah, I mean in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over Converged Ethernet. So... >> Right. >> So if you can leverage kind of the best of both worlds, but have it in an Ethernet environment which is already ubiquitous, it seems like it's kind of democratizing supercomputing and HPC. And I know you guys are big partners with Dell as an example, you guys work with all sorts of other people. >> Kim: Yeah. >> But let's say, let's say somebody is going to be doing Ethernet for connectivity, you also offer switches? >> Kim: We do, actually. >> So is that, I mean that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our, our Atlas 2 switch. It is a PCIe Gen 5 switch. And... >> Dave: What does that mean? What does Gen 5, what does that mean? >> Oh, Gen 5 PCIe, it's the magic connectivity right now. So, you know, we talk about the Sapphire Rapids release as well as the Genoa release. I know that those, you know, those have been talked about a lot here. I've been walking around and everybody's talking about it. Well, those enable the Gen 5 PCIe interfaces. So we've been able to double the bandwidth from Gen 4 up to Gen 5. So, in order to support that, we do now have our Atlas 2 PCIe Gen 5 switch. And it allows you to connect, especially around here we're talking about, you know, artificial intelligence and machine learning. A lot of these are relying on the GPU and the DPU that you see, you know, a lot of people talking about enabling. So by, you know, putting these switches in the servers, you can connect multitudes of not only NVMe devices but also these GPUs and these, these CPUs. So besides that, we also have the storage component of it too. So to support that, we just recently have released our 9500 series HBAs, which support 24 gig SAS.
And you know, this is kind of a, this is kind of a big deal for some of our hyperscalers that say, hey, look, our next generation, we're putting a hundred hard drives in. So, you know, a lot of it is maybe for cold storage, but by giving them that 24 gig bandwidth, and by having these massive 24 gig SAS expanders, that allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're doing the interconnectivity really for them. You know, you can have as much compute power as you want, but these are very data-hungry applications, and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country or in some other city, or just in the box next door. So to be able to move that data around, you know, there's a new concept where they say, you know, do the compute where the data is, and then the other way is to move the data around, which is a lot easier sometimes. So we're allowing them to move that data around. So for that, you know, we do have our Tomahawk switches, we've got our Thor NICs, and of course we've got, you know, the really wide pipe. So with our new 9500 series HBA and RAID controllers, we're doing 28 gigabytes a second that we can transfer through the one controller, and that's on protected data. So we can actually have the high availability protected data of RAID 5 or RAID 6 or RAID 10 in the box, giving you 27 gigabytes a second. The latency that we're seeing off of this is unheard of too; we have a write cache latency that is sub 8 microseconds, which is lower than most of the NVMe drives that you see, you know, that are available today. So, you know, we're able to support these applications that require really low latency as well as data protection.
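Those throughput numbers can be sanity-checked against the raw PCIe arithmetic (a back-of-envelope sketch; delivered bandwidth always sits somewhat below the line-rate ceiling because of protocol overhead):

```python
# Sanity check: can ~28 GB/s really come through one controller?
# PCIe Gen 4 runs at 16 GT/s per lane with 128b/130b line encoding.

GT_PER_LANE_GEN4 = 16        # gigatransfers per second per lane
ENCODING = 128 / 130         # 128b/130b line code efficiency

lane_gbytes = GT_PER_LANE_GEN4 * ENCODING / 8   # GB/s per lane
for lanes in (8, 16):
    print(f"x{lanes} Gen 4 raw ceiling: {lanes * lane_gbytes:.1f} GB/s")
```

An x8 Gen 4 link tops out around 15.8 GB/s, so sustaining nearly 28 GB/s through one controller implies a full x16 attachment, sitting sensibly below the roughly 31.5 GB/s raw ceiling.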
>> Dave: So often when we talk about the underlying hardware, it's a game of whack-a-mole: chase the bottleneck. And you've mentioned PCIe Gen 5. A lot of folks who will be implementing Gen 5 PCIe are coming off of Gen 3, not even Gen 4. >> Kim: I know. >> So they're not just getting a last-generation-to-this-generation bump, they're getting a two-generation bump. >> Kim: They are. >> How does that, is it the case that it would never make sense to use a next-gen or current-gen card in an older generation bus because of the mismatch in performance? Are these things all designed to work together? >> Uh... that's a really tough question. I want to say no, it doesn't make sense. It really makes sense just to move things forward and buy a card that's made for the bus it's in. However, that's not always the case. So for instance, our 9500 controller is Gen 4 PCIe, but what we did, we doubled the PCIe width, so it's a x16. Even though it's Gen 4, it's a x16. So we're getting really, really good bandwidth out of it. As I said before, we're getting 27.8, almost 28 gigabytes a second of bandwidth out of that by doubling the PCIe bus. >> Dave: But they work together, it all works together? >> All works together. You can put our Gen 4 in a Gen 5 all day long and they work beautifully. Yeah, we do work to validate that. >> We're almost out of time, but I want to ask you a more nuts-and-bolts question about storage. We've heard for years that the areal density limit of the hard disk has been reached, and there's really no way to make the disk any denser. What does the future of the hard disk look like as a storage medium? >> Kim: Multi-actuator, actually. We're seeing a lot of multi-actuator. I was surprised to see it come across my desk, because our 9500 actually does support multi-actuator.
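The width-versus-generation trade-off in the 9500 controller discussion above can be sketched with simple arithmetic: each PCIe generation doubles the per-lane rate, so a x16 Gen 4 link carries the same raw bandwidth as a x8 Gen 5 link. Rates below are the published per-lane figures; encoding overhead is deliberately ignored:

```python
# Sketch of the width-vs-generation trade-off: doubling lanes on Gen 4
# matches halving lanes on Gen 5, because each generation doubles the
# per-lane transfer rate. Encoding/protocol overheads are ignored here.
PER_LANE_GT_S = {3: 8.0, 4: 16.0, 5: 32.0}

def raw_gbit_per_s(gen: int, lanes: int) -> float:
    return PER_LANE_GT_S[gen] * lanes

assert raw_gbit_per_s(4, 16) == raw_gbit_per_s(5, 8)  # 256 Gbit/s either way
print(raw_gbit_per_s(4, 16) / 8)  # 32.0 GB/s raw, before overheads
```

A raw ceiling of about 32 GB/s on a x16 Gen 4 link is consistent with the roughly 28 gigabytes a second of delivered bandwidth quoted in the conversation, once encoding and protocol overheads are subtracted.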
And so it was really neat. I've been working with hard drives for 22 years, and I remember when they could do 30 megabytes a second, and that was amazing. That was like, wow, 30 megabytes a second. And then about 15 years ago they hit around 200 to 250 megabytes a second, and they stayed there. They haven't gone anywhere. What they have done is increase the density so that you can have more storage. So you can easily go out and buy a 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is add multiple actuators. So each one of these can do its own streaming, and each one of these can actually do its own seeking. So you can get two and four. And I've even seen talk about eight actuators per disk. I think that's still theory, but they could implement those. So that's one of the things that we're seeing. >> Paul: Old technology somehow finds a way to remain current. >> It does. >> It does, even in the face of new alternatives. Kim Leyenaar, Storage Performance Architect at Broadcom, thanks so much for being here with us today. >> Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)
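The multi-actuator scaling Kim outlines can be put in a toy model: per-actuator streaming speed has been stuck near the plateau she mentions, so drives scale throughput by adding independent actuators. The numbers below are illustrative, not vendor specs:

```python
# Toy model of multi-actuator hard drive throughput: single-actuator
# streaming speed plateaued around 250 MB/s, so drives add independent
# actuators that stream (and seek) in parallel. Numbers are illustrative.
PLATEAU_MB_S = 250   # roughly where single-actuator drives topped out

def drive_mb_per_s(actuators: int) -> int:
    # Aggregate streaming bandwidth scales roughly linearly, since each
    # actuator serves its own group of platter surfaces independently.
    return PLATEAU_MB_S * actuators

for n in (1, 2, 4, 8):   # dual and quad exist today; eight is still theory
    print(n, drive_mb_per_s(n))
```

The linear scaling is the whole point: capacity kept growing while per-drive bandwidth stood still, and parallel actuators are the lever that moves bandwidth again.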
Justin Emerson, Pure Storage | SuperComputing 22
(soft music) >> Hello, fellow hardware nerds, and welcome back to Dallas, Texas, where we're reporting live from Supercomputing 2022. My name is Savannah Peterson, joined by John Furrier on my left. >> Looking good today. >> Thank you, John, so are you. It's been a great show so far. >> We've had more hosts, more guests coming than ever before. >> I know. >> Amazing, super- >> We've got a whole thing going on. >> It's been a super computing performance. >> It, wow. And we'll see how many times we can say super on this segment. Speaking of super things, I am in a very unique position right now. I am flanked on both sides by people who have been doing content on theCUBE for 12 years. Yes, you heard me right, our next guest was on theCUBE 12 years ago, the third event, was that right, John? >> Man: First ever VM World. >> Yeah, the first ever VM World, the third event theCUBE ever did. We are about to have a lot of fun. Please join me in welcoming Justin Emerson of Pure Storage. Justin, welcome back. >> It's a pleasure to be here. It's been too long, you never call, you don't write. (Savannah laughs) >> Great to see you. >> Yeah, likewise. >> How fun is this? Has the set evolved? Is everything looking good? >> I mean, I can barely remember what happened last week, so. (everyone laughs) >> Well, I remember a lot changed after that VM World. You know, Paul Maritz was the CEO at that time, if you remember. His vision actually happened, though not the way he expected for VMware, but for the industry, the cloud: he called it the software mainframe. We were kind of riffing- >> It was quite the decade. >> Unbelievable where we are now, how we got here, but not where we're going to be. And you're with Pure Storage now, which we've been covering as well. Where's the connection into supercomputing? Obviously storage performance is a big part of this show. >> Right, right. >> What's the take? >> Well, first of all, it's great to be back at events in person.
We were talking before we went on about how great it is to be back at live events. It's been such a drought over the last several years, so I'm very glad that we're doing in-person events again. For Pure, this is an incredibly important show. You know, the product that I work with, FlashBlade, one of our key areas is specifically this high performance computing, AI, machine learning kind of space. And so we're really glad to be here. We've met a lot of customers, met a lot of other folks, had a lot of really great conversations. So it's been a really great show for me. And also just seeing all the really amazing stuff that's around here; I mean, if you want to see the most cutting edge data center stuff that's going to be coming down the pipe, this is the place to do it. >> So one of the big themes of the show for us, and probably, well, a big theme of your life, is balancing power efficiency. You have a product in this category, DirectFlash. Can you tell us a little bit more about that? >> Yeah, so Pure as a storage company, right, what do we do differently from everybody else? And if I had to pick one thing, I would talk about, as the name implies, we're purely flash, we're an all-flash company. We've always been, and we don't plan to be anything else. And part of that innovation with DirectFlash is the idea that rather than treating a solid state disk like a hard drive, treat it as what it actually is, and that's a very different kind of thing. And so DirectFlash is all about bringing native flash interfaces to our product portfolio. And what's really exciting for me as a FlashBlade person is that's now also part of our FlashBlade S portfolio, which just launched in June. And so the benefits of that are myriad.
But, you know, talking about efficiency, the biggest difference is that we can use about 90% less DRAM in our drives. Everything that you put in a drive uses power and adds cost, and so that really gives us an efficiency edge over everybody else. And at a show like this, where you walk the aisles and there are people doing liquid cooling and so much immersion stuff, the reason they're doing that is because power is just increasing everywhere, right? So if you can figure out how to use less power in some areas, that means you can shift that budget to other places. So if you can talk to a customer and say, well, if I could shrink your power budget for storage by two-thirds, or even save you two-thirds of the power, how many more accelerators, how many more CPUs, how much more work could you actually get done? So, really exciting. >> I mean, less power consumption, more power in compute. >> Right. >> Kind of a power center. So talk about the AI implications, where the use cases are. What are you seeing here? A lot of simulations, a lot of students; again, dorm room to the boardroom, we've been saying here on theCUBE, this is a great broad area. Where's the action in the ML and the AI for you guys? >> So I think, not necessarily storage related, but I think that right now there's this enormous explosion of custom silicon around AI and machine learning, which, as a... you said welcome hardware nerds at the beginning and I was like, ah, my people. >> We're all here, we're all here in Dallas. >> So wonderful. You know, as hardware nerds we're talking about conferences, right? Who has ever attended Hot Chips? There's so much really amazing engineering work going on in the silicon space. It's probably the most exciting time for CPU and accelerator innovation since the days before x86 was the de facto standard, right? When you could go out and buy a different workstation with 16 different ISAs.
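The power trade-off Justin sketches above ("shrink your power budget for storage by two-thirds... how many more accelerators could you get done?") can be put into rough numbers. Every figure below is hypothetical, invented purely to show the shape of the arithmetic; none come from Pure or the interview:

```python
# Hypothetical rack-level arithmetic: power saved on storage can be
# re-spent on accelerators. All figures are made up for illustration.
storage_power_w = 6_000          # hypothetical storage share of a rack
accelerator_power_w = 400        # hypothetical draw per accelerator board

saved_w = storage_power_w * 2 / 3        # "shrink ... by two-thirds"
extra_accelerators = int(saved_w // accelerator_power_w)
print(saved_w, extra_accelerators)       # 4000.0 W freed -> 10 more boards
```

The point is not the specific numbers but that a fixed facility power budget makes storage efficiency directly fungible with compute capacity.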
That's really the most exciting thing. I walked past so many different places; you know, our booth is right next to Habana Labs with their Gaudi accelerator, and they're doing this cute thing with one of the AI image generators in their booth, which is really cute. >> Woman: We're going to have to go check that out. >> Yeah, but that to me is one of the more exciting things around innovation, especially at a show like this where it's all about how do we move forward the state of the art. >> What's different now than just a few years ago in terms of what's opening up the creativity for people to look at things that they could do with some of the scale that's available now? >> Yeah, well, every time the state of the art moves forward, what it means is that the entry level gets better, right? So if the high end is going faster, that means that the mid-range is going faster, and that means the entry level is going faster. So every time it pushes the boundary forward, it's a rising tide that floats all boats. And so now, the kind of stuff that's possible to do, whether you're a student in a dorm room or an enterprise, the possible just keeps expanding dramatically, almost geometrically, like the amount of data that we have. As a storage guy, I keep coming back to data, but the amount of data that we have and the amount of compute that we have, and it's not just about the raw compute, but also the advances in all sorts of other things in terms of algorithms and transfer learning and all these other things. There's so much amazing work going on in this area, and it's just kind of this Cambrian explosion of innovation in the area. >> I love that you touched on the user experience for the community, no matter the level that you're at. >> Yeah. >> And it's been something that's come up a lot here.
Everyone wants to do more, faster, always, but it's not just that; it's about making the experience and the point of entry into this industry more approachable and digestible for folks who may not be familiar. I mean, we have every end of the ecosystem here on the show floor. Where does Pure Storage sit in the whole game? >> Right, so as a storage company: AI is all about deriving insights from data, right? And so everyone remembers that magazine cover, data is the new oil, right? And it's kind of like, okay, so what do you do with it? How do you derive value from all of that data? AI, machine learning, and all of this supercomputing stuff is about how do we take all this data and innovate with it. And so if you want data to innovate with, you need storage. And so, you know, our philosophy is, how do we make the best storage platforms that we can, using the best technology, for our customers, to enable them to do really amazing things with AI and machine learning. We've got different products, but at the show here, what we're specifically showing off is our new FlashBlade S product. I know we've had Pure folks on theCUBE before talking about FlashBlade, but for viewers out there, FlashBlade is our scale-out unstructured data platform, and AI and machine learning and supercomputing are all about unstructured data. It's about sensor data, it's about imaging, it's about photogrammetry, all these other kinds of amazing stuff. But you've got to land all that somewhere. You've got to process all that somewhere. And so really high performance, high throughput, highly scalable storage solutions are really essential. It's an enabler for all of the amazing other kinds of engineering work that goes on at a place like Supercomputing. >> It's interesting you mentioned data as the new oil.
Remember in 2010, our first year of theCUBE, Hadoop World; Hadoop had just come on the scene, which then, you know, kind of went away, and now you've got Spark and Databricks and Snowflake- >> Justin: And it didn't go away, it just changed, right? >> It just got refactored and right-sized, I think, into what people wanted: easy to use. But there's more data coming. How is data driving innovation as people see clearly that more data is coming? How is data driving innovation as you guys look at your products, your roadmap and your customer base? How is data driving innovation for your customers? >> Well, I think every customer who has been collecting all of this data is trying to figure out, now what do I do with it? And a lot of times people collect data and then it ends up on lower, slower tiers, and then suddenly they want to do something with it. And it's like, well, now what do I do, right? And so there are all these people that are reevaluating. When we developed FlashBlade, we sort of made this bet that unstructured data was going to become the new tier one data. It used to be that we thought unstructured data was emails and home directories and all that stuff, the kind of stuff that you didn't really need a really good DR plan for. Now of course, as soon as email goes down, you realize how important email is. But the perspectives that people had on- >> Yeah, exactly. (all laughing) >> The perspectives that people had on unstructured data and its value to the business were very different, and so now- >> Good bet, by the way. >> Yeah, thank you. So now unstructured data is considered where companies are going to derive their value from. So whether they use the data they have to build better products, or use the data they have to develop improvements in processes, all those kinds of things are data driven.
And so all of the new big advancements in industry and in business are all about, how do I derive insights from data? Machine learning and AI have something to do with that, but it all comes back to having data that's available. And so we're working very hard on building platforms that customers can use to enable all of this really- >> Yeah, it's interesting, Savannah. You know, the top three areas we're covering for reinventing all the hyperscale events: data, how it drives innovation, and then specialized solutions to make customers' lives easier. >> Yeah. >> It's become a big category. How do you compose stuff, and then obviously compute, more and more compute and services to make the performance go. So those seem to be the three hot areas. So, okay, data is the new oil refinery. You've got good solutions. What specialized solutions do you see coming out? Because once people have all this data, they might have either large scale or maybe some edge use cases. Do you see specialized solutions emerging? I mean, obviously you've got the DPU emerging, which is great, but do you see anything else coming out that people are- >> Like from a hardware standpoint? >> Or from a customer standpoint, making the customer's life easier. So, I've got a lot of data flowing in. >> Yeah. >> It's never stopping, it keeps pouring in. >> Yeah. >> Are there things coming out that make their life easier? Have you seen anything coming out? >> Yeah, I think where we are as an industry right now with all of this new technology is, we're really in this phase where the standards aren't quite there yet. Everybody is sort of figuring out what works and what doesn't. You know, there was this big revolution in software development, right? Moving towards agile development and all that kind of stuff. The way people build software changed fundamentally, and this is kind of like another wave like that.
I like to tell people that AI and machine learning is just a different way of writing software. What is the output of a training run? It's a model, and a model is just code. And so I think that as all of these different parts of the business figure out how to leverage these technologies, what it is, is a different way of writing software. It's not necessarily going to replace traditional software development, but it's going to augment it, it's going to let you do other interesting things. And so, where are things going? I think we're going to continue to coalesce around the right ways to do things. Right now we talk about MLOps and how development and the frameworks and all of this innovation... there's so much innovation, which means the industry is moving so quickly that it's hard to settle on things like standards, or at least best practices. And the best practices are changing every three months, so are they really best practices, right? So I think that as we progress and coalesce around the right ways to do things, that's really going to make customers' lives easier. Because today, if you're a software developer, you know, we build a lot of software at Pure Storage, right? And if you have people and developers who are familiar with how the process, how the factory, functions, then their skills become portable and it becomes easier to onboard people. And AI is still nothing like that right now. It's just so, so fast moving and it's so- >> Wild West, kind of. >> It's not standardized. It's not industrialized, right? And so the next big frontier in all of this amazing stuff is, how do we industrialize this and really make it easy to implement for organizations? >> Oil refineries, the Industrial Revolution. I mean, it's on that same trajectory. >> Yeah. >> Yeah, absolutely. >> Or industrial revolution.
(John laughs) >> Well, we've talked a lot about the chaos, and we are very much at this early stage. Stepping way back, and this can be your personal, not Pure Storage, opinion if you want. >> Okay. >> What in HPC, or AI/ML, I guess it all falls under the same umbrella, has you most excited? >> Ooh. >> I feel like you're someone who sees a lot of different things. You've got a lot of customers, you're out talking to people. >> I think that there is a lot of advancement in the area of natural language processing, and we're starting to take things like natural language processing and turn them into vision processing and all these other areas. You know, I think the most exciting thing for me about AI is that there are a lot of people who are looking to use these kinds of technologies to make technology more inclusive. And so- >> I love it. >> You know, the ability for us to do things like automated captioning, or automated descriptive audio for video streams, or things like that. I think that those are really great in terms of bringing the benefits of technology to more people in an automated way, because the challenge has always been the bandwidth of how much a human can do, and those things were so difficult to automate. What AI is really allowing us to do is build systems, whether that's text to speech, or translation, or captioning, or all these other things. I think the way that AI interfaces with humans is really the most interesting part. And I think about the benefits that it can bring there, because there's a lot of talk about all of the things that it does that people don't like, or that people are concerned about. But I think it's important to think about all the really great things that maybe don't necessarily personally impact you, but to the person who's not sighted, or to the person who is hearing impaired.
You know, that's an enormously valuable thing. And the fact that those are becoming easier to do, they're becoming better, the quality is getting better. I think those are really important for everybody. >> I love that you brought that up. I think it's a really important note to close on, and you know, there's always the kind of Terminator dark side that we obsess over, but that's actually not the truth. I mean, when we think about even just captioning, it's a tool we use on theCUBE. We see it on our Instagram stories and everything else, and it opens the door for so many more people to be able to learn. >> Right. >> And the more we all learn, like you said, the water level rises together and everything is magical. Justin, it has been a pleasure to have you on board. Last question: any more bourbon tasting today? >> Not that I'm aware of, but if you want to come by I'm sure we can find something somewhere. (all laughing) >> That's the spirit, that is the spirit of an innovator right there. Justin, thank you so much for joining us from Pure Storage. John Furrier, always a pleasure to interview with you. >> I'm glad I can contribute. >> Hey, hey, that's the understatement of the century. >> It's good to be back. >> Yeah. >> Hopefully I'll see you guys in 2034. >> No. (all laughing) No, you've got the Pure Accelerate conference. We'll be there. >> That's right. >> We'll be there. >> Yeah, we have our Pure Accelerate conference next year and- >> Great. >> Yeah. >> I love that. I mean, feel free to hype that. That's awesome. >> Great company, great run, stayed true to the mission from day one, all flash, continue to innovate, congratulations. >> Yep, thank you so much, it's a pleasure being here. >> It's a fun ride, you are a joy to talk to, and it's clear you're just as excited as we are about hardware, so thanks a lot, Justin. >> My pleasure.
>> And thank all of you for tuning in to this wonderfully nerdy hardware edition of theCUBE, live from Dallas, Texas, where we're at Supercomputing. My name's Savannah Peterson, and I hope you have a wonderful night. (soft music)
Lucas Snyder, Indiana University and Karl Oversteyns, Purdue University | SuperComputing 22
(upbeat music) >> Hello, beautiful humans, and welcome back to Supercomputing. We're here in Dallas, Texas, giving you live coverage with theCUBE. I'm joined by David Nicholson. Thank you for being my left arm today. >> Thank you, Savannah. >> It's a nice little moral. Very excited about this segment. We've talked a lot about how the fusion between academia and the private sector is a big theme at this show. You can see multiple universities all over the show floor, as well as many of the biggest companies on Earth. We were very curious to learn a little bit more about this from people actually in the trenches. And we are lucky to be joined today by two Purdue students. We have Lucas and Karl. Thank you both so much for being here. >> One Purdue, one IU, I think. >> Savannah: Oh. >> Yeah, yeah, yeah. >> I'm sorry. Well then wait, let's give Indiana University their fair due. That's where Lucas is. And Karl is at Purdue. Sorry, folks, I apparently need to go back to school to learn how to read. (chuckles) In the meantime, I know you're in the middle of a competition. Thank you so much for taking the time out. Karl, why don't you tell us what's going on? What is this competition? What brought you all here? And then let's dive into some deeper stuff. >> Yeah, this competition. So we're a joint team between Purdue and IU. We've overcome our age-old rivalries to compete at the competition. It's a multi-part competition where we're going head to head against other teams from all across the world, benchmarking our supercomputing cluster that we designed. >> Was there a moment of rift at all when you came together? Or was everyone peaceful? >> We came together actually pretty nicely. Our two advisors were very encouraging, and so we overcame that. No hostility, basically. >> I love that. So what are you working on, and how long have you guys been collaborating on it? You can go ahead and start, Lucas.
So we've been prepping for this since the summer, and some of us even before that. >> Savannah: Wow. >> And so currently we're working on the application phase of the competition. Everybody has different specialties, and basically the competition gives you a set of rules, and you have to accomplish what they tell you to do in the allotted timeframe and run things very quickly. >> And so we saw, when we first met you, that there are lights and sirens and a monitor looking at the power consumption involved. So part of this is how much power is being consumed. >> Karl: That's right. >> Explain exactly, what are the rules that you have to live within? >> Yeah, so the main constraints are the time, as we mentioned, and the power consumption. For the benchmarking phase, which was one or two days ago, there was a hard cap of 3000 watts to be consumed. You can't go over that, otherwise you would be penalized. You have to rerun, start from scratch, basically. Now there's a dynamic one for the application section, where it modulates at random times. We don't know when it's going to go down or when it's going to go back up. So we have to adapt to that in real time. >> David: Oh, interesting. >> Dealing with a little bit of real-world complexity, I guess, is the simulation here. I think that's pretty fascinating. I want to know, because I am going to just confess, when I was your age last week, I did not understand the power of supercomputing and high performance computing. Lucas, let's start with you. How did you know this was the path you wanted to go down in your academic career? >> David: Yeah, what's your background? >> Yeah, give us some. >> So my background is intelligent systems engineering, which is kind of a fusion: I'm doing bioengineering and also more classical computer engineering. My background is biology, actually. But I decided to go down this path kind of on a whim.
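The dynamic power cap Karl describes, a limit that moves at random times and has to be tracked in real time, is essentially a small control loop. Here is a toy sketch of that idea; none of this is the team's actual tooling, and the thresholds and step sizes are invented for illustration:

```python
# Toy control loop for a moving competition power cap: measure draw,
# compare against the current cap, and nudge a performance limit.
# Thresholds and step sizes are invented, purely illustrative.
def adjust_limit(draw_w: float, cap_w: float, limit_pct: int) -> int:
    """Return a new CPU power/frequency limit as a percentage."""
    headroom = cap_w - draw_w
    if headroom < 0:                 # over the cap: back off hard
        return max(40, limit_pct - 10)
    if headroom > 0.1 * cap_w:       # lots of margin: claw speed back
        return min(100, limit_pct + 5)
    return limit_pct                 # near the cap: hold steady

limit = 100
for draw, cap in [(2900, 3000), (3100, 3000), (3050, 3000), (2400, 3000)]:
    limit = adjust_limit(draw, cap, limit)
print(limit)   # 85 after this sample trace
```

Real teams would drive something like CPU frequency governors or GPU power limits with a loop of this shape, tuned against how fast the cap can move.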
My professor suggested it and I've kind of fallen in love with it. I did my summer internship doing HPC and I haven't looked back. >> When did you think you wanted to go into this field? I mean, in high school, did you have a special teacher that sparked it? What was it? >> Lucas: That's funny that you say that. >> What was in your background? >> Yes, I mean, in high school towards the end I just knew that, I saw this program at IU and it's pretty new, and I just thought this would be a great opportunity for me, and I'm loving it so far. >> Do you have family in tech or is this a different path for you? >> Yeah, this is a different path for me, but my family is so encouraging and they're very happy for me. They text me all the time. So I couldn't be happier. >> Savannah: Just felt that in my heart. >> I know. I was going to say, for the parents out there, get the tissue out. >> Yeah, yeah, yeah. (chuckles) >> These guys, they don't understand. But, so Karl, what's your story? What's your background? >> My background, I'm a major in unmanned aerial systems. So this is drones, commercial applications, not immediately connected as you might imagine, although there's actually more overlap than one might think. So a lot of unmanned systems today, a lot of it's remote sensing, which means that there's a lot of image processing that takes place. Mapping of a field, what have you, or some sort of object, like a silo. So a lot of it actually leverages high performance computing in order to map, to visualize, much of it replacing either manual mapping that used to be done by humans in the field or helicopters. So a lot of cost reduction there and efficiency increases. >> And when did you get this spark that said, I want to go to Purdue? You mentioned off camera that you're from Belgium. >> Karl: That's right. >> Did you come from Belgium to Purdue, or were you already in the States? >> No, so I have family that lives in the States, but I grew up in Belgium. >> David: Okay.
>> I knew I wanted to study in the States. >> But at what age did you think that science and technology was something you'd be interested in? >> Well, I've always loved computers from a young age. I've been breaking computers since before I can remember. (chuckles) Much to my parents' dismay. But yeah, so I've always had a knack for technology, and that sort of has always been a hobby of mine. >> And then I want to ask you this question, and then Lucas, and then Savannah will get some time. >> Savannah: It's cool, I'll just sit here and look pretty. >> Dream job. >> Karl: Dream job. >> Okay. So you're undergrad, both of you. >> Savannah: You're stealing one of my questions. Kind of, it's adjacent though. >> Okay. You're undergrad now? Is there grad school in your future, do you feel that's necessary? Is that something you want to pursue? >> I think so. Entrepreneurship is something that's been in the back of my head for a while as well. So maybe, or something. >> So when I say dream job, understand it could be for yourself. >> Savannah: So just piggyback. >> Dream thing after academia, or stay in academia. What do you think at this point? >> That's a tough question you're asking. >> You'll be able to review this video in 10 years. >> Oh boy. >> This is, give us your five-year plan, and then we'll have you back on theCUBE in 2027 and see. >> What's the dream? There's people out here watching this. I'm like, go, hey, interesting. >> So as I mentioned, entrepreneurship, I'm thinking I'll start a company at some point. >> David: Okay. >> Yeah. In what? I don't know yet. We'll see. >> David: Lucas, any thoughts? >> So after graduation, I am planning to go to grad school. IU has a great accelerated master's degree program, so I'll stay an extra year and get my master's. Dream job is, boy, that's impossible to answer, but I remember telling my dad earlier this year that I was so interested in what NASA was doing. They're sending a probe to one of the moons of Jupiter. >> That's awesome.
From a parent's perspective, the dream often is, let's get the kids off the payroll. So I'm sure that your families are happy to hear what you have planned. >> I think these two will be all right in that department. >> I think they're going to be okay. >> Yeah, I love that. I was curious, I want to piggyback on that, because I think what NASA's doing is amazing; we have them on the show. Who doesn't love space? >> Yeah. >> I'm also an entrepreneur though, so I very much empathize with that. I was going to ask, to your dream job point, what companies here do you find the most impressive? I'll rephrase. Because I was going to say, who would you want to work with? >> David: Anything you think is interesting? >> But yeah. Have you even had a chance to walk the floor? I know you've been busy competing. >> Karl: Very little. >> Yeah, I was going to say, very little. Unfortunately I haven't been able to roam around very much. But I look around and I see names that I'm like, I can't even, it's crazy to see them. Like, these are people who are so impressive in the space. These are people who are extremely smart. I'm surrounded by geniuses everywhere I look, I feel like, so. >> Savannah: That includes us. >> Yeah. >> He wasn't talking about us. Yeah. (laughs) >> I mean, it's hard to say, any of these companies I would feel very, very lucky to be a part of, I think. >> Well, there's a reason why both of you were invited to the party, so keep that in mind. Yeah. But so, not a lot of time because of the competition. >> Yeah. Tomorrow's our day. >> Here to work. >> Oh yes. Tomorrow we get to play and go talk to everybody. >> Yes. >> And let them recruit you, because I'm sure that's what a lot of these companies are going to be doing. >> Yeah. Hopefully, that's the plan. >> Have you had a second at all to look around, Karl? >> A little bit more. I've been going to the bathroom once in a while. (laughs) >> That's allowed. I mean, I can imagine that's a vital part of the journey.
>> I've roamed my gaze a little bit to what's around, all kinds of stuff. Higher education seems to be very important in terms of their presence here. I find that very, very impressive. Purdue has a big stand, IU as well, but also others, from Europe as well and Asia. I think higher education has a lot of potential in this field. >> David: Absolutely. >> And it really is that union between academia and the private sector. We've seen a lot of it. But also, one of the things that's cool about HPC is it's really not ageist. It hasn't been around for that long. So, I mean, well, at this scale, obviously this show's been going on since 1988, before you guys were even probably a thought. But I think it's interesting. It's so fun to get to meet you both. Thank you for sharing about what you're doing and what your dreams are, Lucas and Karl. >> David: Thanks for taking the time. >> I hope you win, and we're going to get you off the show here as quickly as possible so you can get back to your teams and back to competing. David, great questions as always, thanks for being here. And thank you all for tuning in to theCUBE Live from Dallas, Texas, where we are at Supercomputing. My name's Savannah Peterson and I hope you're having a beautiful day. (gentle upbeat music)
Kirk Bresniker, HPE | SuperComputing 22
>> Welcome back, everyone, live here at Supercomputing 22 in Dallas, Texas. I'm John Furrier, host of theCUBE, here with Paul Gillin, editor of SiliconANGLE, getting all the stories, bringing it to you live. Supercomputing TV is theCUBE right now. And bringing all the action, Kirk Bresniker, chief architect of Hewlett Packard Labs at HPE, a Cube alumnus, here to talk about supercomputing's road to quantum. Kirk, great to see you. Thanks for coming on. >> Thanks for having me, guys. Great to be here. >> So Paul and I were talking, and we've been covering, you know, computing as we get into the large-scale cloud; now, on-premises compute has been one of those things that just never stops. I never heard someone say, I wanna run my application or workload on slower hardware or processors or horsepower. Computing continues to go, but we're at a step function. It feels like we're at a level where we're gonna unleash new creativity, new use cases. You've been kind of working on this for many, many years at HP, Hewlett Packard Labs. I remember The Machine and all the predecessor R&D. Where are we right now from your standpoint, HPE standpoint? Where are we in the computing? It's as a service, everything's changing. What's your view? >> So I think, you know, you captured it so well. You think of the capabilities that you create. You create these systems and you engineer these amazing products, and then you think, whew, it doesn't get any better than that. And then you remind yourself as an engineer, but wait, actually it has to, right? It has to, because we need to continuously provide that next generation of scientist and engineer and artist and leader with the tools that can do more, and do more frankly with less. Because while we don't wanna run the programs slower, we sure do wanna run them for less energy. And figuring out how we accomplish all of those things, I think, is really where it's gonna be fascinating.
And it's also, we think about that now, the exascale data center, a billion billion operations per second, the new science, arts and engineering that we'll create. And yet it's also what's beyond that data center. How do we hook it up to those fantastic scientific instruments that are capable of generating so much information? We need to understand how we couple all of those things together. So I agree, we are at an amazing opportunity to raise the aspirations of the next generation. At the same time, we have to think about what's coming next in terms of the technology. Is silicon the only answer for us to continue to advance? >> You know, one of the big conversations is like refactoring, replatforming; we have a booth behind us that's doing energy you can build in data centers for compute. There's all kinds of new things. Is there anything in the paradigm of computing, and now on the road to quantum, which I know you're involved in, I saw on LinkedIn you have an open rec for that. What paradigm elements are changing that weren't in play a few years ago, that you're looking at right now as you look at the 20-mile stare into quantum? >> So I think for us it's fascinating, because we've had a tailwind at our backs my whole career, 33 years at HP. And what I could count on was transistors: at first they got cheaper, faster, and they used less energy. And then, you know, that slowed down a little bit. Now they're still cheaper and faster. As we look at that, and as Moore's law continues to flatten out, there has to be something better to do than yet another copy of the prior design, opening up that diversity of approach. And whether that is the amazing wafer-scale accelerators, we see these application-specific silicon, and then broadening out even farther, next to the silicon: here's the analog computational accelerator, here is now the emergence of a potential quantum accelerator.
So seeing that diversity of approaches, what has to happen is we need to harness all of those efficiencies, and yet we still have to realize that there are human beings that need to create the application. So how do we bridge, how do we accommodate the physics of new kinds of accelerators? How do we imagine the cyber-physical connection to the rest of the supercomputer? And then finally, how do we bridge that productivity gap? Especially not for people like me who have been around for a long time; we wanna think about that next generation, cuz they're the ones that need to solve the problems and write the code that will do it. >> You mentioned what exists beyond silicon. In fact, are you looking at different kinds of materials that computers in the future will be built upon? >> Oh, absolutely. When we look at the quantum modalities, then, you know, whether it is a trapped ion, or a superconducting piece of silicon, or a neutral atom, there's about half a dozen of these novel systems, because really what we're doing when we're using a quantum mechanical computer, we're creating a tiny universe. We're putting a little bit of material in there and we're manipulating it at the subatomic level, harnessing the power of quantum physics. That's an incredible challenge. And it will take novel materials, novel capabilities that we aren't just used to seeing. Not many people have a helium supplier in their data center today, but some of them might tomorrow. And understanding again, how do we incorporate, industrialize, and then scale all of these technologies. >> I wanna talk turkey about quantum, because we've been talking for five years. We've heard a lot of hyperbole about quantum. We've seen some of your competitors announcing quantum computers in the cloud. I don't know who's using these computers, what kind of work they're being used for. How much of the, how real is quantum today?
How close are we to having workable, true quantum computers, and can you point to any examples of how that technology is being used in the field? >> So it remains nascent. We'll put it that way. I think part of the challenge is we see this low-level technology, and of course it was, you know, Professor Richard Feynman who first pointed us in this direction, you know, more than 30 years ago. And you know, I trust his judgment. Yes, there's probably some there there, especially for what he was doing, which is, how do we understand and engineer systems at the quantum mechanical level? Well, he said a quantum mechanical system is probably the way to go. So understanding that. But still, part of the challenge we see is that people have been working on the low-level technology, and they're reaching up, wondering, will I eventually have a problem that I can solve? And the challenge is, you can improve something every single day, and if you don't know where the bar is, then you don't ever know if you'll be good enough. >> I think part of the approach that we like to understand is, can we start with the problem, the thing that we actually want to solve, and then figure out what is the bespoke combination of classical supercomputing, advanced AI accelerators, and novel quantum capabilities? Can we simulate and design that? And we think there's probably nothing better to do that than an exascale supercomputer. Yeah. Can we simulate and design that bespoke environment, create that digital twin of this environment? And if we've simulated it, we've designed it, we can analyze it, see, is it actually advantageous? Cuz if it's not, then we probably should go back to the drawing board. And then finally, that then becomes the way in which we actually run the quantum mechanical system in this hybrid environment.
>> So it's nascent, and you guys are feeling your way through. You get some moonshots, you work backwards from use cases, as more of a discovery, navigational kind of mission piece. I get that. And exascale has been a great role for you guys. Congratulations. Have there been strides though in quantum this year? Can you point to, has the needle moved a little bit, a lot? I mean, it's moving, I guess; there's been some talk, but we haven't really been able to put our finger on what's moving, like where's the needle moved, I guess, in quantum. >> And I think that's part of the conversation that we need to have: how do we measure ourselves? I know at the World Economic Forum Quantum Development Network, we had one of our global future councils on the future of quantum computing. And I brought in an IEEE fellow, Paolo Gargini, who, you know, created the International Technology Roadmap for Semiconductors. And I said, Paolo, could you come in and give us examples: how was the semiconductor community so effective, not only at developing the technology but at predicting the development of technology, so that whether it's an individual deciding if they should change careers, or it's a nation state deciding if they should spend a couple billion dollars, we have that tool to predict the rate of change and improvement. And so I think that's part of what we're hoping, by participating, will bring some of that roadmapping skill and technology and understanding, so we can make those better-reasoned investments. >> Well, it's also fun to see supercomputing this year look at the bigger picture: obviously software, cloud natives running modern applications, infrastructure as code, that's happening. You're starting to see the integration of environments, almost like a global distributed operating system, that's the way I call it. Silicon and advancements have been a big part of what we see now. Merchant silicon, but also DPUs are on the scene.
So the role of silicon is there. And also we have supply chain problems. So how do you look at that as the chief architect of Hewlett Packard Labs? Because not only do you have to invent the future and dream it up, but you gotta deal with the realities, and the realities are: silicon's great, we need more of that; quantum's around the corner; but supply chain, how do you solve that? What's your thoughts, and how is HPE looking at silicon innovation and supply chain? >> And so for us, it is really understanding that partnership model, and understanding and contributing. And so I will do things like, I happen to be the systems and architectures chapter editor for the IEEE International Roadmap for Devices and Systems, that community that wants to come together and provide that guidance. You know, so I'm all about telling the semiconductor and the post-semiconductor community, okay, this is where we need to compute. I have a partner in the applications and benchmarks that says, this is what we need to compute. And when you can predict in the future about where you need to compute, what you need to compute, you can have a much richer set of conversations, because you described it so well. And I think of our senior fellow Nick Dubey; he's coined the term "internet of workflows," where, you know, you need to harness everything from the edge device all the way through the exascale computer and beyond. And it's not just one sort of static thing. It is a very interesting fluid topology. I'll use this compute at the edge, I'll do this information in the cloud, I want to have this in my exascale data center, and I still need to provide the tools so that an individual who's making that decision can craft that workflow across all of those different resources. >> And those workflows, by the way, are complicated. Now you got services being turned on and off. Observability is a hot area. You got a lot more data in cycle, in flow.
I mean, a lot more action. >> And I think you just hit on another key point for us, and part of our research at Labs. As part of my other assignments, I helped draft our AI ethics global policies and principles, and not only did that give us advice about how we should live our lives, it also became the basis for our AI research lab at Hewlett Packard Labs, because they saw, here's a challenge, and here's something where I can't actually maintain my ethical compliance; I need to engineer new ways of achieving artificial intelligence. And so much of that comes back to governance over that data, and how can we actually create those governance systems and do that out in the open. >> That's a can of worms. We're gonna do a whole segment on that one. >> On that technology, on that one piece, I wanna ask you, I mean, where rubber meets the road is where you're putting your dollars. So you've talked about a lot of areas of progress right now; where are you putting your dollars right now at Hewlett Packard Labs? >> Yeah, so I think when I draw my 2030 vision slide, you know, for me the first column is about heterogeneous, right? How do we bring all of these novel computational approaches to be able to demonstrate their effectiveness, their sustainability, and also the productivity that we can drive from them? So that's my first column. My second column is that edge-to-exascale workflow: I need to be able to harness all of those computational and data resources, I need to be aware of the energy consequence of moving data, of doing computation, and find all of that while still maintaining and solving for security and privacy. But the last thing, and, that's, one was a how, one was a where. The last thing is a who, right? And it's, how do we take that subject matter expert? I think of a young engineer starting their career at HPE. It'll be very different than my 33 years.
And part of it, you know, they will be undaunted by any scale. They will be cloud natives, maybe they'll be metaverse natives. They will demand to design in an open, cooperative environment. So for me it's thinking about that individual, and how do I take those capabilities, heterogeneous, edge-to-exascale workflows, and then make them productive. And for me, that's where we're putting our emphasis: on those three. When, where, and who. >> Yeah. And making it compatible for the next generation. We see the student cluster competition going on over there. This is the only show that we cover that we've been to that goes from the dorm room to the boardroom, and that's cuz Supercomputing now is elevating up into that workflow, into integration, multiple environments, cloud, premise, edge, metaverse. This is like a whole nother world. >> And, but I think it's the way that, regardless of which human pursuit you're in, you know, everyone is going to be demanding simulation and modeling, AI, ML, and massive data analytics. That's gonna be at the heart of everything. And that's what you see. That's what I love about coming here. This isn't just the way we're gonna do science. This is the way we're gonna do everything. >> We're gonna come by your booth, check it out. We've talked to some of the folks. HPE, obviously, HPE Discover this year, GreenLake was center stage; it's now consumption as a service for technology. Whole nother ballgame. Congratulations on all this. I would say the massive, I won't say pivot, but you know, a change. >> It is. >> And how you guys operate.
>> And you know, it's funny, sometimes you think about the pivot to as-a-service as benefiting the customer, but as someone who has supported designs over decades, you know, that ability to operate at peak efficiency, to always keep in perfect operating order and to continuously change while still meeting the customer expectations, that actually allows us to deliver innovation to our customers faster than when we were delivering warrantied, individually packaged products. >> Kirk, thanks for coming on. Paul, great conversation here. You know, the road to quantum's gonna be paved through computing, supercomputing, software, integrated workflows, from the dorm room to the boardroom to theCUBE, bringing all the action here at Supercomputing 22. I'm John Furrier with Paul Gillin. Thanks for watching. We'll be right back.
Anthony Dina, Dell Technologies and Bob Crovella, NVIDIA | SuperComputing 22
>> Howdy, y'all, and welcome back to Supercomputing 2022. We're theCUBE, and we are live from Dallas, Texas. I'm joined by my co-host, David Nicholson. David, hello. >> Hello. >> We are gonna be talking about data and enterprise AI at scale during this segment. And we have the pleasure of being joined by both Dell and NVIDIA. Anthony and Bob, welcome to the show. How are you both doing? >> Doing good. >> Great. Great show so far. >> Love that enthusiasm, especially in the afternoon on day two. I think we all, what, what's in that cup? Is there something exciting in there that maybe we should all be sharing with you? >> Let's just say it's just still water, yeah. >> Yeah. Yeah. I love that. So I wanna make sure, cause we haven't talked about this at all during the show yet on theCUBE, I wanna make sure that everyone's on the same page when we're talking about unstructured versus structured data. It's in your title, Anthony; tell me, what's the difference? >> Well, look, the world has been based in analytics around rows and columns, spreadsheets, data warehouses, and we've made predictions around the forecast of sales, maintenance issues. But when we take computers and we give them eyes, ears, and fingers, cameras, microphones, and temperature and vibration sensors, we now translate that into more human experience. But that kind of data, the sensor data, that video camera, is unstructured or semi-structured. That's what that means. >> We live in a world of unstructured data; structure is something we add later, after the fact. But the world that we see and the world that we experience is unstructured data. And one of the promises of AI is to be able to take advantage of everything that's going on around us and augment that, improve that, solve problems based on that. And so if we're gonna do that job effectively, we can't just depend on structured data to get the problem done.
We have to be able to incorporate everything that we can see, hear, taste, smell, touch, and use that as part of the problem solving. >> We want the chaos, bring it. >> Chaos has been a little bit of a theme of our show. >> It has been, yeah. And chaos is in the eye of the beholder. You think about the reason for structuring data to a degree: we had limited processing horsepower back when everything was being structured, as a way to allow us to reason over it and gain insights. So it made sense to put things into rows and tables. I'm curious, diving right into where NVIDIA fits into this puzzle: how does NVIDIA accelerate or enhance our ability to glean insight from, or reason over, unstructured data in particular? >> Yeah, great question. It's really all about, I would say it's all about AI, and NVIDIA is a leader in the AI space. We've been investing and focusing on AI since at least 2012, if not before; the accelerated computing that we do at NVIDIA is an important part of it, really. We believe that AI is gonna revolutionize nearly every aspect of computing, really nearly every aspect of problem solving, even nearly every aspect of programming. And one of the reasons is what we're talking about now: being able to incorporate unstructured data into problem solving is really critical to being able to solve the next generation of problems. AI unlocks tools and methodologies that we can realistically do that with. It's not realistic to write procedural code that's gonna look at a picture and solve all the problems that we need to solve, if we're talking about a complex problem like autonomous driving. But with AI and its ability to naturally absorb unstructured data and make intelligent, reasoned decisions based on it, it's really a breakthrough. And that's what NVIDIA's been focusing on for at least a decade or more. >> And how does NVIDIA fit into Dell's strategy?
>>Well, I mean, look, we've been partners for many, many years, delivering beautiful experiences on workstations and laptops. But as we see the transition away from taking something that was designed to make something pretty on screen to being useful in solving problems in life sciences, manufacturing, and other places, we work together to provide integrated solutions. So take, for example, the DGX A100 platform: brilliant design, revolutionary bus technologies, but the rocket ship can't go to Mars without the fuel. And so you need a tank that can scale in performance at the same rate as you throw GPUs at it. And so that's where the relationship really comes alive. We enable people to curate the data, organize it, and then feed those algorithms that get the answers that Bob's been talking about. >>So, as a gamer, I must say that was a little shot at making things pretty on a screen. Come on. That was a low blow. >>That was a low blow. >>Sassy. >>Now what's in your cup? That's what I wanna know, Dave. >>I apparently have the most boring cup of anyone here today. I don't know what happened. We're gonna have to talk to the production team. I'm looking at all of you. We're gonna have to make that better. One of the themes that's been on this show, and I love that you all embrace the chaos, is that we're seeing a lot of trend toward the experimentation stage. We're in an academic zone of it with AI. Companies are excited to adopt, but most companies haven't really rolled out their strategy. What is necessary for us to move from this kind of science experiment, science fiction in our heads, to practical application at scale? >>Well, let me take this, Bob. So I've noticed there's a pattern of three levels of maturity. The first level is just what you described. It's about having an experience, proof of value, getting stakeholders on board, and then just picking out: what technology, what algorithm do I need? What's my data source?
That's all fun, but it is chaos. Over time, people start actually making decisions based on it. This moves us into production, and what's important there is normality, predictability, commonality across the board. But hidden and embedded in that is a center of excellence: the community of data scientists and business intelligence professionals sharing a common platform. In the last stage, we get hungry to replicate those results to other use cases, throwing even more information at it to get better accuracy and precision, but to do this on a budget you can afford. And so how do you figure out all the knobs and dials to turn in order to take billions of parameters and process that? That's where... >>What's that casual decision matrix there with billions of parameters? >>Yeah. Oh, I mean... >>But you're right. >>That's exactly it. We're on this continuum, and this is where I think the partnership does really well: it marries high-performance, enterprise-grade scalability that provides the consistency, the audit trail, all of the things you need to make sure you don't get in trouble, plus all of the horsepower to get to the results. Bob, what would you add there? >>I think the thing that we've been talking about here is complexity. And there's complexity in the AI problem-solving space; there's complexity everywhere you look. And we talked about the idea that Nvidia can help with some of that complexity from the architecture and the software development side of it. And Dell helps with that in a whole range of ways, not the least of which is the infrastructure and the server design and everything that goes into unlocking the performance of the technology that we have available to us today. So even the center of excellence is an example of: how do I take this incredibly complex problem and simplify it down so that the real world can absorb and use this? And that's really what Dell and Nvidia are partnering together to do.
And that's really what the center of excellence is. It's an idea to help us say, let's take this extremely complex problem and extract some good value out of it. >>So what is Nvidia's superpower in this realm? I mean, look, we are in a season of microprocessor manufacturers one-upping one another with their latest announcements. There's been an ebb and a flow in our industry between doing everything via the CPU versus offloading processes. Nvidia comes up and says, hey, hold on a second: the GPU, which again was focused on graphics processing originally, doing something very, very specific. How does that translate today? What's the superpower? Because people will say, well, hey, I've got a CPU, why do I need you? >>I think our superpower is accelerated computing, and that's really a hardware and software thing. I think your question is slanted towards the hardware side, which is, yes, we do make great processors. But the graphics processor that you talked about from 10 or 20 years ago was designed to solve a very complex task, and it was exquisitely designed to solve that task with the resources that we had available at that time. Now, fast forward 10 or 15 years, and we're talking about a new class of problems called AI. And it requires both exquisite processor design and very complex, exquisite software design sitting on top of it, as well as the systems and infrastructure knowledge, high-performance storage, and everything else that we're talking about in the solution today. So Nvidia's superpower is really about that accelerated computing stack at the bottom.
You've got hardware; above that, you've got systems; above that, you have middleware and libraries; and above that, you have what we call application SDKs that enable the simplification of this really complex problem for this domain or that domain, while still allowing you to take advantage of that processing horsepower that we put in that exquisitely designed thing called the GPU. >>Decreasing complexity and increasing speed: two very key themes of the show. Shocking no one, you all wanna do more, faster. Speaking of that, and I'm curious because you both serve a lot of different unique customers, verticals, and use cases: is there a specific project that you're allowed to talk about? Or, I mean, if you wanna give us the scoop, that's totally cool too. We're here for the scoop on the Cube. But is there a specific project or use case that has you personally excited, Anthony? We'll start with that. >>Look, I've always been a big fan of natural language processing. I don't know why, but deriving intent based on word choices is very interesting to me. I think what complements that is natural language generation. So now we're having AI programs actually discover and describe what's inside of a package. It wouldn't surprise me if over time we move from doing the typical summary on the economics of the day, or what happened in football, toward more of the creative advertising and marketing arts, where you are no longer needed because the AI is gonna spit out the result. I don't think we're gonna get there, but I really love this idea of human language and computational linguistics. >>What a marriage. I agree, I think it's fascinating. What about you, Bob? What's got you pumped? >>The thing that really excites me is the problem solving, sort of the tip of the spear in problem solving. The stuff that you've never seen before, the stuff that, you know, in a geeky way kind of takes your breath away.
And I'm gonna jump, or pivot, off of what Anthony said. Large language models are really one of those areas that I think are amazing, and they're just kind of surprising everyone with what they can do. Here on the show floor, I was looking at a demonstration from a large language model startup, and they were showing that you could ask a question about some obscure news piece that was reported only in a German newspaper. It was about a little shipwreck that happened in a harbor. And I could type a query into this system, and it would immediately know where to find that information, as if it had read the article. It summarized it for you, and it could even answer questions that you could only answer by looking at the pictures in that article. Just amazing stuff that's going on. Just phenomenal stuff. >>That's huge for accessibility. >>That's right. And I geek out when I see stuff like that. And that's where I feel like all this work that Dell and Nvidia and many others are putting into this space is really starting to show potential in ways that we wouldn't have dreamed of really five years ago. Just really amazing. >>And we see this in media and entertainment. So in broadcasting, you have a sudden event: someone leaves this planet, or they discover something new, or they get a divorce and they're a major quarterback. You wanna go back somewhere in all of your archives to find that footage. That's a very laborious project. But if you can use AI technology to categorize that and provide the metadata tags so it's searchable, then we're off to better productions, more interesting content, and a much richer viewer experience. >>And a much more dynamic picture of what's really going on, factoring all of that in. I love that. I mean, David and I are both nerds, and I know we've had take-your-breath-away moments, so I appreciate that you just brought that up. Don't worry, you're in good company.
In terms of the geek squad over here. >>I think actually maybe this entire show qualifies. >>Yes, exactly. I mean, we were talking about how steampunk some of the liquid cooling stuff is, and, you know, this is the only place on earth, really, the only show, where you would come and see it at this level and scale. It's very exciting. How important for the future of innovation in HPC are partnerships like the one that Nvidia and Dell have? >>You wanna start? >>Sure. I'm gonna be bold and brash and arrogant and say they're essential. You do not want to try and roll this on your own. Even if we just zoomed in to one little piece of the technology: the software stack that does modern accelerated deep learning is incredibly complicated. There can easily be 20 or 30 components that all have to be the right version, with the right buttons pushed, built the right way, assembled the right way. And we've got lots of technologies to help with that, but you do not want to be trying to pull that off on your own. That's just one little piece of the complexity that we talked about. And as technology providers in this space, we really need to do as much as we do to try to unlock the potential. We have to do a lot to make it usable and capable as well. >>I got a question for Anthony. >>All right. >>So in your role, and I'm sort of projecting here, I think your superpower personally is likely in the realm of being able to connect the dots between technology and the value that that technology holds in a variety of contexts. That's right. Whether it's business or whatever. Okay. Now, it's critical to have people like you to connect those dots. Today, in the era of pervasive AI, how important will it be to have AI explain its answer?
In other words, should I trust the information the AI is giving me? If I am a decision maker, should I just trust it at face value? Or am I going to want to demand of the AI the kind of thing you deliver today, which is: no, no, no, you need to explain this to me. How did you arrive at that conclusion? How important will that be for people to move forward and trust the results? We can all say, oh hey, just trust us, it's AI, it's great, it's got Nvidia acceleration and it's Dell, you can trust us. But come on, there are so many variables in the background. >>It's an interesting one. And explainability is a big function of AI. People want to know how the black box works, right? Because if you have an AI engine that's looking for potential maladies in an X-ray, but it misses one, do you sue the hospital, the doctor, or the software company? And so that accountability element is huge. I think as we progress and we trust it to be part of our everyday decision making, it's as simple as a recommendation engine: it isn't actually making all of the decisions, it's supporting us. We still can't, after decades of advanced technology and proven algorithms, predict what the market price of any object is gonna be tomorrow. And you know why? Human beings. We are so unpredictable; how we feel in the moment is radically different. And whereas we can extrapolate for a population, for an individual choice we can't do that. So humans and computers will not be separated; it's a joint partnership. But I wanna get back to your point, and I think this is very fundamental to the philosophy of both companies. Yeah, it's about a community. It's always about the people sharing ideas, getting the best.
And anytime you have a center of excellence, an algorithm that works for sales forecasting may actually be really interesting for churn analysis, to make sure the employees or students don't leave the institution. So it's that community of interest that I think is unparalleled at other conferences. This is the place where a lot of that happens. >>I totally agree with that. We felt that on the show. I think that's a beautiful note to close on. Anthony, Bob, thank you so much for being here. I'm sure everyone feels more educated and perhaps more at peace with the chaos. David, thanks for sitting next to me, asking the best questions of any host on the Cube. And thank you all for being a part of our community. Speaking of community, here on the Cube, we're live from Dallas, Texas. It's Supercomputing all week. My name is Savannah Peterson, and I'm grateful you're here.
Kelly Gaither, University of Texas | SuperComputing 22
>>Good afternoon, everyone, and thank you so much for joining us. My name is Savannah Peterson, joined by my co-host Paul for the afternoon. Very excited. Oh, Savannah. Hello. I'm pumped for this. This is our first bit together. Exactly. >>It's gonna be fun. Yes. We have a great guest to kick off with. >>We absolutely do. We're at Supercomputing 2022 today, and very excited to talk to our next guest. We're gonna be talking about data at scale, and data that really matters to us. Joining us is Kelly Gaither. Thank you so much for being here, and you are with TACC. Tell everyone what TACC is. >>TACC is the Texas Advanced Computing Center at the University of Texas at Austin. And thank you so much for having me here. >>It is wonderful to have you. Your smile's contagious. And one of the themes that's come up a lot with all of our guests, and we just talked about it, is how good it is to be back in person, how good it is to be around our hardware community. TACC, you did some very interesting research during the pandemic. Can you tell us about that? >>I can. I did. So around mid-March, we realized that these were really not normal times, and that the pandemic was really gonna touch everyone. I think a lot of us at the center, and me personally, dropped everything to plug in, and that's what we do. So UT's tagline is "what starts here changes the world," and TACC's tagline is "powering discoveries that change the world." So we're all about impact. I plugged in with the research group of Dr. Lauren Myers, an epidemiologist at UT Austin, and we figured out how to plug in and compute so that we could predict the spread of COVID-19. >>And you did that through the use of mobility data, cell phone signals. Tell us more about what exactly you were choreographing. >>Yeah, so that was really interesting. SafeGraph, during the pandemic, made their mobility data available.
Typically it was used for marketing purposes, to know who was going into Walmart and other businesses. >>For advertising. >>Absolutely, yeah. They made all of their mobility data available for free to people who were doing research and plugging in, trying to understand COVID-19. I picked that data up, and we used it as a proxy for human behavior. We got weekly mobility updates, but it was really mobility all day long, anonymized; I didn't know who they were. It was by cell phones across the US, by census block group, or by zip code if we wanted to look at it that way. And we could see how people were moving around. We knew what their home neighborhoods were. We knew how they were traveling or not traveling. We knew where people were congregating, and we could get some idea of how people were behaving. Were they really locking down, were they moving in their neighborhoods, or were they going outside of their neighborhoods? >>What a fascinating window into our pandemic lives. So now that you were able to do this for this pandemic, as we look forward, what have you learned? How quickly could we forecast? What's the prognosis? >>Yeah, so we learned a tremendous amount. I think during the pandemic we were reacting; we were really trying. It was an interesting time as a scientist. We were reacting to things almost as if the earth was moving underneath us every single day, so it was something new every day. And I've told people since, I haven't worked that hard since I was a graduate student. It was really daylight to dark, 24/7, for a long period of time, because it was so important. And we knew we were being a part of history and affecting something that was gonna make a difference for a really long time. And I think what we've learned is that indeed there is a lot of data being collected that we can use for good.
We can really understand, if we get organized and we get set up, how to use this data as a means of perhaps predicting our next pandemic or our next outbreak of whatever it is, almost like using it as a canary in the coal mine. There's a lot in human behavior we can use. >>Given all the politicization of this last pandemic, knowing what we know now, making us better prepared in theory for the next one: how confident are you that, at least in the US, we will respond proactively and effectively when the next one comes around? >>Yeah, I mean, that's a great question, and I certainly understand why you ask. I think, in my experience as a scientist, certainly at TACC, the more transparent you are with what you do, the more you explain things. Again, during the pandemic, things were shifting so rapidly, we were reacting and doing the best that we could. And I think one thing we did right was we admitted where we felt uncertain. And that's important. You have to really be transparent with the general public. I don't know how well people are gonna react, but I think if we have time to prepare, to communicate, and we're always really transparent about it, those are three factors that go into really increasing people's trust. >>I think you nailed it. And especially during times of chaos and disaster, you don't know who to trust or what to believe. And it sounds like providing a transparent source of truth is so critical. How do you protect the sensitive data that you're working with? I know it's a top priority for you and the team. >>It is. And we've adopted the medical mantra: do no harm. So we feel a great responsibility there. There are two things that you have to really keep in mind when you've got sensitive data. One is the physical protection of it, and that's governed by federal rules: HIPAA, FERPA, whatever applies to the kind of data that you have.
So we certainly focus on the physical protection of it, but there's also sort of the ethical protection of it. What is the quote? "There's lies, damn lies, and statistics." >>Yes, Twain. >>Yeah. So you really have to be responsible with what you're doing with the data and how you're portraying the results. And again, I think it comes back to transparency: if people are gonna reproduce what I did, I have to be really transparent with what I did. >>I think that's super important. And one of the themes with HPC that we've been talking about a lot, too, is, you know, do people trust AI? Do they trust all the data that's going into these systems? And I love that you just talked about the storytelling aspect of that, because there is a duty there; you can cut data kind of however you want. I mean, I come from a marketing background, and we can massage it to do whatever we want. So in addition to being the deputy director at TACC, you are also the DEI officer, and diversity I know is important to you, probably both as an individual and in the work that you're doing. Talk to us about that. >>Yeah, I'm very passionate about diversity, equity, and inclusion, and a sense of belongingness. I think that's one of the key aspects of it. >>Core of community, too. >>I got a computer science degree back in the eighties. I was akin to a unicorn in an engineering and computer science department. But I was really lucky in a couple of respects. I had a father that was into science, who told me I could do anything I wanted to set my mind to do. So that was my whole life, really having that support system. >>Cheers to dad. >>Yeah. Oh yeah. And my mom as well, actually. You know, they were educators. I grew up, in that respect, very privileged, but it was still really hard to make it. And I couldn't have told you back in that time why I made it and others didn't, why they dropped out.
But I made it a mission, probably back, gosh, maybe 10 or 15 years ago, that I was really gonna do all that I could to change the needle. And it turns out that there are a number of things that you can do grassroots. There are certainly best practices, there are rules and things to follow to make people feel more included in an organization, to feel like they belong to a shared mission. But there are also clever things that you can do with programming to really engage students, to meet people and students where they are interested and where they are engaged. And I think that's what we've done with our programming since about 2016. We have built a lot of programming at TACC that really focuses on that as well, because I'm determined the needle is gonna change before it's all said and done. It just really has to. >>So what progress have you made, and what goals have you set in this area? >>Yeah, that's a great question. So, you know, at first I was a little bit reluctant to set concrete goals, because I really didn't know what we could accomplish. I really wasn't sure what grassroots efforts were gonna be able to do. >>You're so honest. You can tell how transparent you are with the data as well. That's great. >>Yeah, I mean, most of the successful work that I've done, both as a scientist and in the education and outreach space, is really built on trust relationships. If I break that trust, I'm done; I'm no longer effective. So yeah, I am really transparent about it. But what we did was, you know, the first thing we did was we counted, to the extent that we could: what does the current picture look like? Let's be honest about it. Start where we are. Yep. It was not a pretty picture. I mean, we knew anecdotally that it was not gonna be a great picture, but we put it out there and we leaned into it. We said, this is what it is.
You know, I hesitated to say we're gonna look 10% better next year, because, I'm gonna be honest, I don't always know; we're gonna do our best. >>The thing that I think we did really well was that we stopped to take time to talk and find out what people were interested in. It's almost like being present and listening. My grandmother had a saying: you have two ears and one mouth for a reason, just respect the ratio. >>Oh, I love that. >>Yeah. And I think it's just been building relationships, building trust, really focusing on making a difference, making it a priority. And I think now, where we've been successful in pockets of people in the center, we are getting everybody on board. There's something everyone can do. >>But the problem you're addressing doesn't begin in college. It begins much, much earlier. That's right. And there's been a lot of talk about STEM education, particularly for girls, how they're pushed out of the system early on, and also for people of color. Do you see meaningful progress being made there now, after years of lip service? >>I do. I do. But it is, again, grassroots. We do have a researcher who was a former teacher at the center, Carol Fletcher, who is doing research in CS for All. If you work backwards from the current workforce and the projected workforce, we know that digital skills of some kind are gonna be needed. We also know we have a shortage. There's debate on how large that shortage is, but roughly 1 million unfilled jobs were projected in 2020, and it hasn't gotten a lot better. We can work that problem backwards. So what we do there is a little like a scattershot approach. We know that people come in all forms, all shapes, all sizes, and they get interested for all different kinds of reasons. We expanded our set of pathways so that we can get them where they can get onto the path, all the way back to K through 12; that's Carol's work.
Rosie Gomez at the center is doing sort of the undergraduate space, and we've got Don Hunter doing the middle school and high school space. So we are working all parts of the problem. I am pretty passionate about what we consider opportunity youth: people who never had the opportunity to go to college. Is there a way that we can skill them up, get them engaged in some aspect, and perhaps get them into this workforce? >>I love that you're starting off so young. So give us an example of one of those programs. What are you talking to kindergartners about when it comes to CS education? >>You know, I mean, gaming. Yes, right. It's what everybody can wrap their head around. Most kids have had some sort of gaming device, so you talk in the context of something they understand. I'm not gonna talk to them about high performance computing; it would go right over their heads. And, you know, I'll go back to something that you said, Paul, about girls being pushed out. I don't know that girls are being pushed out. I think girls aren't interested in the things that are being presented, and I think they... >>I think you're being generous. >>Yeah. I mean, I was a young girl, and I don't know why I stayed. Well, I do know why I stayed with it: because I had a father that saw something in me, and I had people at critical points in my life that saw something in me that I didn't see. But I think if we change the way we teach it, maybe, in your words, they don't get pushed out, or they won't lose interest. There's some sort of computing in everything we do. >>Absolutely. There's also the bro culture, which begins at a very early age. >>Yeah, that's a different problem. That's just having boys in the classroom. >>Absolutely. You got it. That's a whole other thing. >>Last question for you. Well, actually I've got two; it's a two-parter, let's put it that way.
Is there a tool, or something where you wish you could flick a magic wand, that would make your job easier? Can you identify the linchpin in the DEI challenge, or is it all still prototyping and iterating to figure out the best fit? >>Yeah, that's a wonderful question. I can tell you what I get frustrated with... >>That counts. >>...is that I feel like a lot of people don't fully understand the level of effort and engagement it takes to do something meaningful. >>The commitment to a program. >>The commitment to a program, totally agree. There is no one-and-done. No. And in fact, if I do that, I will lose them forever; they will be lost in the space forever. Rather, the engagement is really time intensive. It's relationship intensive, and there's a lot of follow-up too. And the amount of funding that goes into this space really is not equal to the amount of time and effort that it takes. And I think, when you work in this space, you realize that what you gain is really that it feels good to make a difference in somebody's life, but it's really hard to do on a shoestring budget. So if I could wave a magic wand, yes, I would increase understanding. I would get people to understand that it's all of our responsibility, that everybody is needed to make the difference, and I would increase the funding that goes to the programs. >>I think that's awesome. Kelly, thank you for that. You all heard that: more funding for diversity, equity, and inclusion, please. Paul, thank you for a fantastic interview. Kelly, hopefully everyone is now inspired to check out TACC, perhaps become a Longhorn, hook 'em, and come deal with some of the most important data that we have going through our systems, predicting the future of our pandemics. Ladies and gentlemen, thank you for joining us online.
We are here in Dallas, Texas at Supercomputing. My name is Savannah Peterson and I look forward to seeing you for our next segment.
Brian Payne, Dell Technologies and Raghu Nambiar, AMD | SuperComputing 22
(upbeat music) >> We're back at SC22, the SuperComputing Conference in Dallas. My name's Paul Gillan, my co-host, John Furrier, SiliconANGLE founder. A huge exhibit floor here, so much activity, so much going on in HPC, and much of it around the chips from AMD, which has been on a roll lately. And in partnership with Dell, our guests are Brian Payne, Dell Technologies, VP of Product Management for ISG mid-range technical solutions, and Raghu Nambiar, corporate vice president of data center ecosystem and application engineering, that's quite a mouthful, at AMD. Gentlemen, welcome. Thank you. >> Thanks for having us. >> This has been an evolving relationship between you two companies, obviously a growing one, and Dell was part of the big general rollout of AMD's new chips last week. Talk about how that relationship has evolved over the last five years. >> Yeah, sure. Well, it goes back to the advent of the EPYC architecture. So we were there from the beginning, partnering well before the launch five years ago, thinking about, "Hey, how can we come up with a way to solve customer problems? Address workloads in unique ways?" And that was kind of the origin of the relationship. We came out with some really disruptive and capable platforms, and it's continued since then, all the way to the launch of last week, where we've introduced four of the most capable platforms we've ever had in the PowerEdge portfolio. >> Yeah, I'm really excited about the partnership with Dell. As Brian said, we have been partnering very closely for the last five years, since we introduced the first generation of EPYC. So we collaborate on system design, validation, performance benchmarks, and more importantly on software optimizations and solutions, to offer an out-of-the-box experience to our customers, whether it is HPC or databases, big data analytics or AI. 
You know, you guys have been on theCUBE, you guys are veterans, 2012, 2014, back in the day. So much has changed over the years. Raghu, you were the founding chair of the TPC for AI. We've talked about the different iterations of PowerEdge servers. So much has changed. Why the focus on these workloads now? What's the inflection point that we're seeing here at SuperComputing? It feels like we've been in this run-the-ball, gain-a-yard, move-the-chains mode, but I feel like there's a moment where there's going to be an unleashing of innovation around new use cases. Where are the workloads? Why the performance? What are some of those use cases right now that are front and center? >> Yeah, I mean, if you look at today, the enterprise ecosystem has become extremely complex, okay? People are running traditional workloads like relational database management systems, and also a new generation of workloads with AI and HPC, actually HPC augmented with some of the AI technologies. So what customers are looking for is, as I said, an out-of-the-box experience; time to value is extremely critical. Unlike in the past, customers don't have the time and resources to run months-long POCs, okay? So that's one area we are focusing on, working closely with Dell to give an out-of-the-box experience. Again, the enterprise application ecosystem is really becoming complex, and as you mentioned, the industry standard benchmarks are designed to give a fair comparison of performance and price-performance for our end customers. Brian's team and my team have been working closely to demonstrate our joint capabilities in the AI space with a set of TPCx-AI benchmark results; it was a major highlight of our launch last week. >> Brian, you're showing the demo in the booth at Dell here. Not a demo, the product, it's available. 
What are you seeing for your use cases that customers are kind of rallying around now, and what are they doubling down on? >> Yeah, so Raghu I think teed it up well. Really, data is the currency of business and all organizations today, and that's what's pushing people to figure out both traditional workloads as well as new workloads. In the traditional workload space, you still have ERP systems like SAP, et cetera, and we've announced world records there: a hundred-plus percent improvement in our single-socket system, 70% in dual. We actually posted a 40% advantage over the best Genoa result just this week. So we're excited about that in the traditional space. But what's exciting, like why are we here? Why are people thinking about HPC and AI? It's about how do we make use of that data, that data being the currency, and how do we push in that space? So Raghu mentioned the TPCx-AI benchmark. We announced, in collaboration, you talk about how we work together, nine world records in that space. In one case it's a 3x improvement over prior generations. So the workloads that people care about are: how can I process this data more effectively? How can I store it and secure it more effectively? And ultimately, how do I make decisions about where we're going, whether it's a scientific breakthrough or a commercial application? That's what's really driving the use cases and the demand from our customers today. >> I think one of the interesting trends we've seen over the last couple of years is a resurgence of interest in task-specific hardware for AI. In fact, venture capital companies invested $1.8 billion last year in AI hardware startups. And these companies are not necessarily doing CPUs or GPUs; they're doing accelerators, FPGAs, ASICs. You have to be looking at that activity and what these companies are doing. What are you taking away from that? 
How does that affect your own product development plans, both on the chip side and on the system side? >> I think the future of computing is going to be heterogeneous. CPUs solving certain types of problems, like general-purpose computing, databases, big data analytics; GPUs solving problems in AI and visualization; and DPUs and FPGA accelerators offloading some of the tasks from the CPU and providing real-time performance. And of course, the software optimizations are going to be critical to stitch everything together, whether it is HPC or AI or other workloads. Again, as I said, heterogeneous computing is going to be the future. >> And for us as a platform provider, heterogeneous solutions mean we have to design systems that are capable of supporting that. So as you think about the compute power, whether it's a GPU or a CPU, continuing to push the envelope in terms of the computations, power consumption, things like that, how do we design a system that can be incredibly efficient and also able to support the scaling to solve those complex problems? That gets into challenges around both liquid cooling and making the most out of air cooling. And so not only are we driving up the capability of these systems, we're actually improving the energy efficiency. The most recent systems that we launched around the CPU, which is still kind of at the heart of everything today, are seeing 50% improvement, gen to gen, in terms of performance-per-watt capabilities. So it's about how do we package these systems in effective ways and make sure that our customers can get the advertised benefits, so to speak, of the new chip technologies. >> Yeah. 
To add to that, performance, scalability, and total cost of ownership are the key considerations, but now energy efficiency has become more important than ever, given our commitment to sustainability. One of the things we demonstrated last week was that with our new generation of EPYC Genoa based systems, we can do a five-to-one consolidation, significantly reducing the energy requirement. >> Power's huge, costs are going up. It's a global issue. >> Raghu: Yeah, it is. >> How do you squeeze more performance out of it at the same time? I mean, smaller, faster, cheaper. Paul, you wrote a story this weekend about hardware and AI making hardware so much more important. You've got more power requirements, you've got sustainability, but you need more horsepower, more compute. What's different in the architecture, if you guys could share, today versus years ago? What's different as these generations step-function value increases? >> So one of the major drivers, from the processor perspective: if you look at the latest generation of processors, the five nanometer technology brings efficiency and density. We are able to pack 96 processor cores per socket; in a two-socket system, we are talking about 192 processor cores. And of course, other enhancements like IPC uplift, bringing DDR5 to the market, PCIe (indistinct) to the market, offering an overall performance uplift of more than 2.5x for certain workloads, and of course significantly reducing the power footprint. >> Also, I was just going to cut in, I mean, architecturally speaking, how do we take the 96 cores and surround them, deliver a balanced ecosystem, to make sure that we can get the IO out of the system and make sure we've got the right data storage? I mean, you'll see 60% improvements in total storage in the system. I think in 2012 we were talking about 10 gig Ethernet. 
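Raghu's five-to-one consolidation figure above can be sanity-checked with simple throughput arithmetic. The sketch below is a hypothetical back-of-the-envelope helper, not Dell's or AMD's sizing methodology; the legacy server's core count and the per-core uplift are assumed purely for illustration.

```python
def consolidation_ratio(old_cores: int, new_cores: int,
                        per_core_uplift: float = 1.0) -> float:
    """How many older servers one new server can replace, assuming
    throughput scales with core count times per-core performance."""
    return new_cores * per_core_uplift / old_cores

# Hypothetical comparison: a dual-socket, 96-core-per-socket system
# (192 cores) replacing older dual-socket, 20-core-per-socket servers
# (40 cores), with per-core performance treated as roughly equal.
ratio = consolidation_ratio(old_cores=40, new_cores=192)  # ~4.8, near 5:1
```

A real consolidation study would also weigh memory bandwidth, I/O, and per-workload scaling, which this one-liner ignores.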
Well, you know, now we're on to 100 and 400 on the forefront. So it's like, how do we keep up with this increased power, by having both offload and core computing capabilities, and make sure we've got a system that can deliver the desired (indistinct). >> So the little things like the bus, the PCI cards, the NICs, the connectors have to be rethought. Is that what you're getting at? >> Yeah, absolutely. >> Paul: And the GPUs, which are huge power consumers. >> Yeah, absolutely. So, cooling: we introduced what we call smart cooling as part of our latest generation of servers. The thermal design inside of a server is a complex system, right? And you have to do it efficiently, because of course fans consume power. Those are the kinds of considerations we have to work through to make sure that you're not throttling performance because you're not keeping the chips at the right temperature. And ultimately, when that happens, you're hurting the productivity of the investment. So it's our responsibility to think that through and deliver those systems that are (indistinct). >> You mention data too. If you bring in the data, one of the big discussions going into the big Amazon show coming up, re:Invent, is egress costs. So now you've got compute, and you've got to design for data latency and processing. It's not just contained in a machine. You've got to think about outside that machine, talking to other machines. Is there an intelligent (chuckles) network developing? I mean, what does the future look like? >> Well, this is an area that's fun, and Dell's in a unique position to work on this problem, right? We house 70% of the mission-critical data that exists in the world. How do we bring that closer to compute? How do we deliver system-level solutions? 
So server compute: recently we announced innovations around NVMe over Fabrics. So now you've got the NVMe technology in the SAN. How do we connect that more efficiently across the servers, and then guide our customers to make use of it? Those are the kinds of challenges where we're trying to unlock the value of the data by making sure we're (indistinct). >> There are a lot of lessons learned from classic HPC and some of the big data analytics, like the Hadoops of the world, you know, distributed processing for crunching large amounts of data. 
>> You know, one of the things I want to get into on the cloud you mentioned that Paul, is that we see the rise of graph databases. And so is that on the radar for the AI? Because a lot of more graph data is being brought in, the database market's incredibly robust. It's one of the key areas that people want performance out of. And as cloud native becomes the modern application development, a lot more infrastructure as code's happening, which means that the internet and the networks and the process should be programmable. So graph database has been one of those things. Have you guys done any work there? What's some data there you can share on that? >> Yeah, actually, you know, we have worked closely with a company called TigerGraph, there in the graph database space. And we have done a couple of case studies, one on the healthcare side, and the other one on the financial side for fraud detection. Yeah, I think they have a, this is an emerging area, and we are able to demonstrate industry leading performance for graph databases. Very excited about it. >> Yeah, it's interesting. It brings up the vertical versus horizontal applications. Where is the AI HPC kind of shining? Is it like horizontal and vertical solutions or what's, what's your vision there. >> Yeah, well, I mean, so this is a case where I'm also a user. So I own our analytics platform internally. We actually, we have a chat box for our product development organization to figure out, hey, what trends are going on with the systems that we sell, whether it's how they're being consumed or what we've sold. And we actually use graph database technology in order to power that chat box. So I'm actually in a position where I'm like, I want to get these new systems into our environment so we can deliver. >> Paul: Graphs under underlie most machine learning models. >> Yeah, Yeah. >> So we could talk about, so much to talk about in this space, so little time. And unfortunately we're out of that. 
So, fascinating discussion. Brian Payne, Dell Technologies, Raghu Nambiar, AMD. Congratulations on the successful launch of your new chips and the growth in your relationship over these past years. Thanks so much for being with us here on theCUBE. >> Super. >> Thank you much. >> It's great to be back. >> We'll be right back from SuperComputing 22 in Dallas. (upbeat music)
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial in their enterprise customers. And not everybody wants to be in the top 500, what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like, ATM, Sonic, Fifty, and pretty much everything is now kind of converged toward Ethernet. I mean, there's still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. 
And you see also the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, so it is very easy to integrate HPC-based systems into those systems. As you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now, and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. This is what is in production; it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. If you look at, say, InfiniBand, the highest they have right now, just starting to get into production, is 25.6 T. So state of the art right now is what we introduced, what we announced in August: this is Tomahawk 5, 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is, when you double the bandwidth, you don't just double the efficiency; it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not? >> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T5, with some Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a Terminator. >> Well, so this is from a switching perspective. >> Yeah. 
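Pete doesn't break the "factor of six" down on air, so the sketch below shows just one contributor: in a non-blocking two-tier leaf-spine fabric, doubling the switch bandwidth (and therefore the port count at a fixed speed) roughly halves the number of switches for the same cluster, with much larger savings when it lets you avoid a third tier entirely. This is a generic Clos sizing exercise under assumed topology rules, not Broadcom's published math.

```python
import math

def two_tier_switch_count(n_hosts: int, switch_tbps: float, port_gbps: int) -> int:
    """Switches needed for a non-blocking two-tier leaf-spine fabric.

    Radix follows from chip bandwidth: 51.2 Tb/s at 400 GbE gives
    128 ports.  Each leaf splits its ports half down (to hosts) and
    half up (to spines); spines are sized to match uplink bandwidth.
    """
    radix = int(switch_tbps * 1000 // port_gbps)
    down = radix // 2
    leaves = math.ceil(n_hosts / down)
    if leaves > radix:
        raise ValueError("cluster needs a third tier at this radix")
    spines = math.ceil(leaves * down / radix)  # equals ceil(leaves / 2)
    return leaves + spines

# Same 2,048-node cluster, one chip generation apart:
prev_gen = two_tier_switch_count(2048, 25.6, 400)  # radix 64  -> 96 switches
this_gen = two_tier_switch_count(2048, 51.2, 400)  # radix 128 -> 48 switches
```

The remaining multiples in a real "factor of six" presumably come from eliminated tiers, optics, cabling, and power, which this port-count model does not attempt to capture.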
>> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of the NICs that are going in there? What speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second, >> David: Okay. >> moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs at the 200 gig Ethernet port speed. So that would be four lanes of 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. So state of the art right now for the end node tends to be 200 gig E, based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, it's actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and what's interesting to us is customers are coming to us and saying, hey, we want flexibility and choice: let's look at Ethernet and let's look at InfiniBand. And what is interesting about this is that we're working with Broadcom; we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it simple to configure the network for, essentially, MPI. So the goal here with our validated designs is really to simplify this. If you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there's going to be some learning curve there. What we want to do is really simplify that, so that it's easy to install, get the cluster up and running, and they can actually get some value out of the cluster. 
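The lane arithmetic Pete walks through (four 50 gig lanes making a 200 gig port) is straightforward multiplication; a minimal sketch, where the helper name is mine rather than anything from a real networking API:

```python
def port_speed_gbps(lanes: int, lane_gbps: int) -> int:
    """Ethernet port speed as lane count times per-lane SerDes rate."""
    return lanes * lane_gbps

assert port_speed_gbps(4, 50) == 200    # 200 GbE NIC over 50G PAM-4 lanes
assert port_speed_gbps(4, 100) == 400   # 400 GbE over 100G PAM-4 lanes
assert port_speed_gbps(8, 100) == 800   # 800 GbE OSFP port
```

The same math explains the breakout cabling mentioned later in the conversation: an 8-lane 800 gig port splits cleanly into two 4-lane 400 gig ports.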
>> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say, what would be cool is we'll put this in the T6? >> No, we've had a very long partnership, both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we actually have three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. That way, when it comes to market, Dell can take it and deliver the exact features they have in the current generation to their customers, to have that continuity. And they also give us feedback on the next-gen features they'd like to see, again, in both the hardware and the software. >> So I'm fascinated by... I always like to know, like... yeah, exactly. Look, you start talking about the largest, most powerful supercomputers that exist today, and you start looking at the specs, and there might be two million CPU cores, an exaflop of performance. What are the outward limits of T5 in switches, building out a fabric? What does that look like? What are the increments in terms of how many... And I know it's an "it depends" answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. 
So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what does the form factor look like for where that T5 sits? Is there just one in a chassis, or... what does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high-end systems, more towards pizza boxes. And you can have composable systems where, in the past, you would have line cards and the fabric cards that the line cards plug into or interface to. These days, what tends to happen is you have a pizza box, and if you wanted to build up a virtual chassis, you would use one of those pizza boxes as the fabric card and one of them as the line card. >> David: Okay. >> So the most common form factor for this, I'd say for North America, would be a 2RU with 64 OSFP ports. And often each of those OSFPs, which is an 800 gig port, is broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air-cooled, in 2RU you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy 4RU just so they have the faceplate density, so they can plug in 128, say, QSFP112. It really depends on which optics, if you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet, in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But so what are we talking about here in terms of the actual protocol that's running over this? 
Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over Converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially, HPC workloads, you have the MPI protocol, the Message Passing Interface, right? And so what you need to do is make sure that that MPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet. If you look at MPI, officially it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're going to be next year, or 10 years from now? >> You want to go first, or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So I mean, what I see, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, is the trajectory, the roadmap we've had. I mean, you don't see that in any other networking technology. >> David: More who?
(all laughing) >> So I see that that trajectory is going to continue, as far as the switches doubling in bandwidth. I think that there are evolving protocols, especially, again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time, evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time, depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like, one thing we've been focusing on is co-packaged optics. So right now, this chip is, all the balls on the back here, those are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically based, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, the radix is increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear.
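The "double every 18 to 24 months" cadence Pete describes compounds quickly. Here is a small illustrative projection; the starting point and horizons are assumptions for the sketch, not a roadmap:

```python
# Rough projection of switch bandwidth under a fixed doubling cadence.
def projected_tbps(start_tbps: float, years: float, months_per_doubling: float) -> float:
    """Bandwidth after `years`, doubling every `months_per_doubling` months."""
    doublings = (years * 12) / months_per_doubling
    return start_tbps * 2 ** doublings

# Starting from 51.2 Tb/s, doubling every 24 months:
print(projected_tbps(51.2, 2, 24))   # 102.4 Tb/s in two years
print(projected_tbps(51.2, 10, 24))  # 1638.4 Tb/s in a decade
```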
>> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 channels coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels, and it would wind up being on 128 actual fiber pairs, because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive, as AI/ML does. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think one of the biggest things that Ethernet has, again, is that the data centers, the networks within enterprises, within clouds right now, are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is to drop in clusters that are connected with the same networking technology.
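One way to reconcile the co-packaged-optics numbers above: FR4 optics carry four wavelengths per fiber pair, which is how 512 channels collapse onto 128 fiber pairs. A minimal sketch (the constant names are ours):

```python
# Channel-to-fiber mapping implied by the FR4 figures in the interview.
CHANNELS = 512
WAVELENGTHS_PER_FIBER_PAIR = 4  # FR4 multiplexes four wavelengths per pair

fiber_pairs = CHANNELS // WAVELENGTHS_PER_FIBER_PAIR
print(fiber_pairs)  # 128 fiber pairs carrying 512 channels
```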
So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your sysadmins, on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet. It's going to give you the same performance, and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you look at training a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is, essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that the model is trained over, you can't essentially train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And essentially, you might do it on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's the benefit of speed. You want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that.
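Armando's big-pipes point can be made concrete with a toy calculation. The dataset size and link speeds below are made-up illustrations, not benchmarks:

```python
# Illustrative time to move a training dataset from storage to accelerators.
def transfer_seconds(dataset_gb: float, pipe_gbps: float) -> float:
    """Ideal line-rate transfer time: gigabytes -> gigabits -> seconds."""
    bits = dataset_gb * 8  # 8 bits per byte
    return bits / pipe_gbps

dataset_gb = 10_000  # a hypothetical 10 TB training set
print(transfer_seconds(dataset_gb, 100))  # 800 s over 100 GbE
print(transfer_seconds(dataset_gb, 400))  # 200 s over 400 GbE
```

Quadrupling the pipe cuts the ideal feed time by four, which is the "faster, faster, faster" argument in one line of arithmetic.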
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking, we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of like the bigger and better missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you let your engineers... You get to name it? >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
Andrea Booker, Dell Technologies | SuperComputing 22
>> Hello everyone, and welcome back to theCUBE, where we're live from Dallas, Texas, here at Supercomputing 2022. I am joined by my cohost David Nicholson. Thank you so much for being here with me and putting up with my trashy jokes all day. >> David: Thanks for having me. >> Yeah. Yes, we are going to be talking about AI this morning, and I'm very excited that our guest has set the stage for us here quite well. Please welcome Andrea Booker. Andrea, thank you so much for being here with us. >> Absolutely. Really excited to be here. >> Savannah: How's your show going so far? >> It's been really cool. I think being able to actually see people in person, but also be able to see the latest technologies, and have the live dialogue that connects us in a different way than we have been able to virtually. >> Savannah: Oh yeah. No, it's all about that human connection, and that is driving toward our first question. So as we were just chit-chatting, you said you are excited about making AI real and humanizing that. >> Andrea: Absolutely. >> What does that mean to you? >> So I think when it comes down to artificial intelligence, it means so many different things to different people. >> Savannah: Absolutely. >> I was talking to my father the other day, for context, he's in his late seventies, right? And I'm like, oh, artificial intelligence, this or that, and he is like, machines taking over the world, right? >> Savannah: Very much the dark side. >> A little bit Terminator. And I'm like, well, not so much. So that was a fun discussion. And then you flip it to the other side, and I'm talking to my 11 year old daughter, and she's like, Alexa, make sure you know my song preferences, right? And that's the other very real way in which it's kind of impacting our lives. >> Savannah: Yeah. >> Right. There's so many different use cases that I don't think everyone understands how that resonates, right?
It's the simple things, from, you know, recommendation engines when you're on Amazon that suggest just a little bit more. >> Oh yeah. >> I'll admit to that one, right? >> To stuff that's more impactful, in regards to getting faster diagnoses from your doctors, right? Such peace of mind, being able to actually hear that answer faster and know how to go tackle something. >> Savannah: Great point, yeah. >> You know, and what's even more interesting is, from a business perspective, you know, the projections are that over the next five years about 90% of customers are going to use AI applications in some fashion, right? >> Savannah: Wow. >> And the reason why that's interesting is because if you look at it today, only about 15% of them are doing so, right? So we're early. So when we're talking growth and the opportunity, it's amazing. >> Yeah. I can imagine. So when you're talking to customers, what are they? Excited? Are they nervous? Are you educating them on how to apply Dell technology to advance their AI? Where are they at, because we're so early? >> Yeah, well, I think they're figuring out what it means to them, right? >> Yeah. Because there's so many different customer applications of it, right? You have those on the highest end, which our new XE products are targeting, when they think of it. You know, I like to break it down in this fashion, in which artificial intelligence can actually save human lives, right? And this is those extreme workloads that I'm talking about. We actually can develop a Covid vaccine faster, right? Pandemic tracking. You know, with global warming that's going on, and we have these extreme weather events, with hurricanes and tsunamis and all these things, to be able to get advance notice to people to evacuate, to move. I mean, that's a pretty profound thing. And it is, you know, so it could be used in that way to save lives, right? >> Absolutely.
>> Which is the natural outgrowth of the speeds and feeds discussions that we might have internally. It's like, oh, speed doubled. Okay. Didn't it double last year? Yeah. Doubled last year too. So it's four x now. What does that mean, to your point? >> Andrea: Yeah, yeah. >> Savannah: Yeah. >> Being able to deliver faster insights that are meaningful within a timeframe, when otherwise they wouldn't be meaningful. >> Andrea: Yeah. >> If I tell you within a two month window whether it's going to rain this weekend, that doesn't help you. In hindsight, we did the calculation and we figured out it was going to be 40 degrees at night last Thursday. >> Knowing it was going to completely freeze here in Dallas, at least by our definition in Texas, we could have prepared better and packed the right clothes. >> We were talking to NASA about that yesterday too. I mean, I think it must be fascinating for you to see your technology deployed in so many of these different use cases as well. >> Andrea: Absolutely, absolutely. >> It's got to be a part of one of the more >> Andrea: Not all of them are extreme, right? >> Savannah: Yeah. >> There's also examples of, you know, natural language processing and what it does for us, you know, the fact that it can break down communication barriers, because we're global, right? We're all in a global environment. So if you think about conference calls in which we can actually clearly understand each other, and what the intent is, the messaging brings us closer in different ways as well. Which is huge, right? You don't want things lost in translation, right? So it helps on so many fronts. >> You're familiar with the Turing test idea, whether or not... you know, the test is, if you can't discern within a certain number of questions that you're interacting with an AI versus a real human, then it passes the Turing test.
I think there should be a natural language processing test where basically I say, fine >> Andrea: You see if people were mad or not. >> You tell me, you tell me. >> I love this idea, David. >> You know? >> Yeah. This is great. >> Okay. AI lady, >> You tell me what I meant. >> Yeah, am I actually okay? >> How far... that's a silly example, but how far do you think we are from that? I mean, what do you see out there in terms of things where you're kind of like, whoa, they did this with technology I'm responsible for, that was impressive? Or have you heard of things that are on the horizon? You know, again, those are the big issues. >> Yeah. >> But anything kind of interesting and little >> I think we're seeing it perfected and tweaked, right? >> Yeah. >> You know, I think going back to my daughter, it goes from her screaming at Alexa 'cause she didn't hear her right the first time, to now, oh, she understands and modifies, right? Because we're constantly tweaking that technology to have a better experience with it. And it's a continuum, right? The voice to text capabilities, right? You know, I'd say early on it got most of those words right. Right now it's getting pretty dialed in, right? >> Savannah: That's a great example. >> So, you know, little things, little things. >> Yeah. I think I love this thought of your daughter as the example of training AI. You get to look into the future quite a bit, I'm sure, with your role. >> Andrea: Absolutely. >> Where, what is she going to be controlling next? >> The world. >> The world. >> No, I mean, if you think about it just from a generational front, you know, technology when I was her age versus what she's experiencing, she lives and breathes it. I mean, that's the generational change. So as these are coming out, you have new folks growing with it, and it's so natural that they are so open to adopting it in their common everyday behaviors, right?
>> Savannah: Yeah. >> But then, over time, they learn, oh well, how it got there is 'cause of everything we're doing now, right? >> Savannah: Yeah. >> You know, one fun example, you know, as my dad was like, machines are taking over the world... it's not quite right. Even when you look at manufacturing, there's a difference in using AI to go build a digital simulation of a factory, to be able to optimize it and design it right before you're laying the foundation. That saves cost, time and money. That's not taking people's jobs in that extreme event. >> Right. >> It's really optimizing for faster outcomes, and helping our customers get there, which is better for everyone. >> Savannah: Yeah, and safer too. I mean, using the factory example, >> Totally safer. >> You're able to model out what a workplace injury might be, or what could happen. Or even the ergonomics of how people are working. >> Andrea: Yeah, should it be higher so they don't have to bend over? Right. >> Exactly. >> There's so many fantastic positive ways. >> Yeah, so for your dad, you know, I mean, it's going to help us, it's going to take away... Well, I'm curious what you think, David. When I think about AI, I think it's going to take out a lot of the boring things in life that we don't like >> Andrea: Absolutely. >> Doing. The monotony and the repetitive, and let us optimize our creative selves, maybe. >> However, some of the boring things are people's jobs. So it will push a transition in our economy, in the global economy, in my opinion, that would be painful for some, for some period of time. But overall beneficial, >> Savannah: Yes. >> But definitely, you know, there will be people who will be disrupted, and, you know. >> Savannah: Tech's always kind of done that.
>> No, but I think we need to make sure that the digital divide doesn't get so wide that people end up negatively affected. But I know that with organizations like Dell, I believe what you actually see is, >> Andrea: Yeah. >> No, it's elevating people. It's actually taking away >> Andrea: Easier. >> Yeah. It's allowing people to spend their focus on things that are higher level, more interesting tasks. >> Absolutely. >> David: So a net, a net good. But definitely some people disrupted. >> Yes. >> I feel, I feel disrupted. >> I was going to say, are we speaking for a friend or for ourselves here today on stage? >> I'm tired of software updates. So maybe if you could just standardize. So AI and ML. >> Andrea: Yeah. >> People talk about machine learning and artificial intelligence. How would you differentiate the two? >> Savannah: Good question. >> It's just the different applications and the different workloads of it, right? Because you have artificial intelligence, you have machine learning, in which it's learning from itself. And then you have, like, the deep learning, in which it's diving deeper in its execution and modeling. And it really depends on the workload applications, as well as how large the data set is that's feeding into it for those applications, right? And that really leads into... we have to make sure we have the versatility in our offerings to be able to meet every dimension of that, right? You know, our XE products that we announced are really targeted for those extreme AI HPC workloads, right? Versus we also have our entire portfolio of products in which we make sure we have GPU diversity throughout, for the other applications that may be more edge centric or telco centric, right? Because AI isn't just these extreme situations; it's also at the edge.
It's in the cloud, it's in the data center, right? So we want to make sure we have, you know, versatility in our offerings, and we're really meeting customers where they're at in regards to the implementation and the AI workloads that they have. >> Savannah: Let's dig in a little bit there. So what should customers expect with the next generation acceleration trends that Dell's addressing in your team? You had three exciting product announcements here >> Andrea: We did, we did. >> Which is very exciting. So you can talk about that a little bit and give us a little peek. >> Sure. So, you know, for the most extreme applications we have the XE portfolio that we built upon, right? We already had the XE8545, and we've expanded that out in a couple ways. The first of which is our very first XE9680, an eight way offering in which we have Nvidia's H100 as well as the A100, 'cause we want choice, right? A choice between performance, power, what really are your needs? >> Savannah: Is that the first time you've combined? >> Andrea: It's the first time we've had an eight way offering. >> Yeah. >> Andrea: But we did so mindful that the technology is emerging so much, from a thermal perspective as well as price and other influencers, that we wanted that choice baked into our next generation of product as we entered the space. >> Savannah: Yeah, yeah. >> The other two products we have were both in the four way SXM and OAM implementation, and we really focused on diversifying, and not only from vendor partnerships, right? The XE9640 is based off the Intel Data Center GPU Max. We have the XE8640 that is going to be on Nvidia's NVLink, their latest H100. But the key differentiator is we have air cooled and we have liquid cooled, right? So depending on where you are in that data center journey... I mean, I think one of the common themes you've heard is thermals are going up, performance is going up, TDPs are going up, power, right? >> Savannah: Yeah.
>> So how do we kind of meet in the middle to be able to accommodate for that? >> Savannah: I think it's incredible how many different types of customers you're able to accommodate. I mean, it's really impressive. I feel lucky we've gotten to see these products you're describing. They're here on the show floor. There's millions of dollars of hardware literally sitting in your booth. >> Andrea: Oh yes. >> Which is casual only >> Pies for you. Yeah. >> Yeah. We were chatting over there yesterday, and, oh, which, you know, which one of these is more expensive? And the response was, they're both expensive. It was like, okay, perfect. >> But assume the big one is more. >> David: You mentioned thermals. One of the things I've been fascinated by walking around is all of the different liquid cooling solutions. >> Andrea: Yeah. >> And it's almost hysterical. You look inside, and it looks like, what is this, a radiator system for a 19th century building? >> Savannah: Super industrial? >> Yeah, exactly. Exactly, exactly. It's exactly the way to describe it. But just the idea that you're pumping all of this liquid over this very, very valuable circuitry... A lot of the pitches have to do with, you know, this is how we prevent disasters from happening, based on the cooling methods. >> Savannah: Quite literally. >> I mean, you look at the power requirements of a single rack in a data center, and it's staggering. We've talked about this a lot. >> Savannah: Yeah. >> People who aren't, you know, electric vehicle nerds don't appreciate just how much power 90 kilowatts of power is for an individual rack, and how much heat that can generate. >> Andrea: Absolutely. >> So Dell's view on this is air cooled, water cooled, figure it out, fit for function. >> Andrea: Optionality, optionality, right? Because our customers are a complete diverse set, right?
You have those in which they're in a data center with 10 to 15 kilowatt racks, right? You're not going to plumb a liquid cooled, power hungry thing in there, right? You might get one of these systems into that kind of rack architecture, but then you have the middle ground, the 50 to 60, where there's a little bit of choice. And then the super extreme, and that's where liquid cooling makes sense, to really get optimized and have the best density and the most servers in that solution. So that's why it really depends, and that's why we're taking that approach of diversity, of not only vendors and choice, but also implementation and ways to be able to address that. >> So I think, again, I'm, you know, an electric vehicle nerd. >> Yeah. >> It's hysterical when you mention a 15 kilowatt rack kind of flippantly. People don't realize that's way more power than the average house is consuming. >> Andrea: Yeah, yeah. >> So it's like, your entire house is likely more like five kilowatts on a given day, you know, air conditioning. >> Andrea: Maybe you still have a solar panel. >> In Austin, I'm sorry >> California, Austin >> But yeah, it's staggering amounts of power, staggering amounts of heat. There are very real problems that you guys are solving for, to drive all of these top line value >> Andrea: Yeah. >> Propositions. It's super interesting. >> Savannah: It is super interesting. All right, Andrea, last question. >> Yes. Yes. >> Dell has been lucky to have you for the last decade. What is the most exciting part for you of the next decade of your Dell career, given the exciting stuff that you get to work on? >> I think, you know, really working on what's coming our way, and working with my team on that, is just amazing. You know, I can't say it enough: from a Dell perspective, I have the best team. I work with the smartest people, which creates such a fun environment, right?
So then when we're looking at all this optionality, and the different technologies, and, you know, the partners we work with, it's that coming together and figuring out what's the best solution, and then bringing our customers along that journey. That kind of makes it a fun dynamic, and over the next 10 years, I think you're going to see fantastic things. >> David: So before we close, I have to say that's awesome, because this event is also a recruiting event, where some of these really, really smart students are surrounding us. There were some sirens going off. They're having competitions back here. >> Savannah: Yeah, yeah, yeah. >> So when they hear that. >> Andrea: Where you want to be. >> David: That's exactly right. That's exactly right. >> Savannah: Well played. >> David: That's exactly right. >> Savannah: Well played. >> Have fun. Come on over. >> Well, you've certainly proven that to us. Andrea, thank you so much for being with us. This was such a treat. David Nicholson, thank you for being here with me, and thank you for tuning in to theCUBE, live from Dallas, Texas. We are all things HPC and supercomputing this week. My name's Savannah Peterson, and we'll see you soon. >> Andrea: Awesome.
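The rack-versus-house power comparison from this segment can be put in one line of arithmetic. The 5 kW household figure is the round number used in the conversation, not a measurement:

```python
# Back-of-the-envelope comparison of rack power draw to average home draw.
def rack_in_homes(rack_kw: float, home_kw: float = 5.0) -> float:
    """How many average homes one rack's draw is roughly equivalent to."""
    return rack_kw / home_kw

print(rack_in_homes(15))  # a 15 kW rack draws like ~3 homes
print(rack_in_homes(90))  # a 90 kW rack draws like ~18 homes
```

The same watts become heat, which is the thermal argument for liquid cooling at the high end of those rack densities.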
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andrea | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Dallas | LOCATION | 0.99+ |
Austin | LOCATION | 0.99+ |
40 degrees | QUANTITY | 0.99+ |
Texas | LOCATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Andrea Booker | PERSON | 0.99+ |
XE8640 | COMMERCIAL_ITEM | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
15 kilowatt | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
first question | QUANTITY | 0.99+ |
XE8545 | COMMERCIAL_ITEM | 0.99+ |
90 kilowatts | QUANTITY | 0.99+ |
XE9640 | COMMERCIAL_ITEM | 0.99+ |
10 | QUANTITY | 0.99+ |
Dallas, Texas | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
H100 | COMMERCIAL_ITEM | 0.99+ |
two month | QUANTITY | 0.99+ |
50 | QUANTITY | 0.99+ |
19th century | DATE | 0.99+ |
Dave Jent, Indiana University and Aaron Neal, Indiana University | SuperComputing 22
(upbeat music) >> Welcome back. We're here at Supercomputing 22 in Dallas. My name's Paul Gill, I'm your host. With me, Dave Nicholson, my co-host. And one thing that struck me about this conference arriving here, was the number of universities that are exhibiting here. I mean, big, big exhibits from universities. Never seen that at a conference before. And one of those universities is Indiana University. Our two guests, Dave Jent, who's the AVP of Networks at Indiana University, Aaron Neal, Deputy CIO at Indiana University. Welcome, thanks for joining us. >> Thank you for having us. >> Thank you. >> I've always thought that the CIO job at a university has got to be the toughest CIO job there is, because you're managing this sprawling network, people are doing all kinds of different things on it. You've got to secure it. You've got to make it performant. And it just seems to be a big challenge. Talk about the network at Indiana University and what you have done particularly since the pandemic, how that has affected the architecture of your network. And what you do to maintain the levels of performance and security that you need. >> On the network side one of the things we've done is, kept in close contact with what the incoming students are looking for. It's a different environment than it was then 10 years ago when a student would come, maybe they had a phone, maybe they had one laptop. Today they're coming with multiple phones, multiple laptops, gaming devices. And the expectation that they have to come on a campus and plug all that stuff in causes lots of problems for us, in managing just the security aspect of it, the capacity, the IP space required to manage six, seven devices per student when you have 35,000 students on campus, has always been a challenge. And keeping ahead of that knowing what students are going to come in with, has been interesting. During the pandemic the campus was closed for a bit of time. 
What we found was our biggest challenge was keeping up with the number of people who wanted to VPN to campus. We had to buy additional VPN licenses so they could do their work, authenticate to the network. We doubled, maybe even tripled our VPN license count. And that has settled down now that we're back on campus. But again, they came back with a vengeance. More gaming devices, more things to be connected, and into an environment that was a couple years old, that we hadn't done much with. We had gone through a pretty good size network deployment of new hardware to try to get ready for them. And it's worked well, but it's always challenging to keep up with students. >> Aaron, I want to ask you about security because that really is one of your key areas of focus. And you're collaborating with counties, local municipalities, as well as other educational institutions. How's your security strategy evolving in light of some of the vulnerabilities of VPNs that became obvious during the pandemic, and this kind of profusion of new devices that Dave was talking about? >> Yeah, so one of the things that we did several years ago was establish what we call OmniSOC, which is a shared security operations center in collaboration with other institutions as well as research centers across the United States and in Indiana. And really what that is, is we took the lessons that we've learned and the capabilities that we've had within the institution and looked to partner with those key institutions to bring that data in-house, utilize our staff such that we can look for security threats and share that information across the other institutions so that we can give each of those areas a heads up and work with those institutions to address any kind of vulnerabilities that might be out there.
One of the other things that you mentioned is, we're partnering with Purdue and the Indiana Office of Technology on a grant to actually work with municipalities, county governments, to really assess their posture as it relates to security in those areas. It's a great opportunity for us to work together as institutions as well as work with the state in general to increase our posture as it relates to security. >> Dave, what brings IU to Supercomputing 2022? >> We've been here for a long time. And I think one of the things that we're always interested in is, what's next? What's new? There's so many, there's network vendors, software vendors, hardware vendors, high performance computing suppliers. What is out there that we're interested in? IU runs a large Cray system in Indiana called Big Red 200. And with any system, you procure it, you get it running, you operate it, and your next goal is to upgrade it. What's out there that we might be interested in? That, I think, is why we come. We also like to showcase what we do at IU. If you come by the booth you'll see the OmniSOC, there's some video on that. The GlobalNOC, which I manage, which supports a lot of the R&E institutions in the country. We talk about that. Being able to have a place for people to come and see us. If you stand by the booth long enough people come and find you, and want to talk about a project they have, or a collaboration they'd like to partner on. We had a guy come by a while ago wanting a job. Those are all good things having a big booth can do for you. >> Well, so on that subject, in each of your areas of expertise and your purview, are you kind of interleaved with the academic side of things on campus? Do you include students? I mean, I would think it would be a great source of cheap labor for you at least. Or is there kind of a wall between what you guys are responsible for and what students do? >> Absolutely, we try to support faculty and students as much as we can.
And just to go back a little bit on the OmniSOC discussion. One of the things that we provide is internships for each of the universities that we work with. They have to sponsor at least three students every year and make that financial commitment. We bring them on site for three weeks. They learn alongside our other information security analysts, and work in a real world environment and gain those skills to be able to go back to their institutions and do additional work there. So it's a great program for us to work with students. I think the other thing that we do is we provide obviously the infrastructure that enables our faculty members to do the research that they need to do. Whether that's through Big Red 200, our supercomputer, or just kind of the everyday infrastructure that allows them to do what they need to do. We have an environment on premise called our Intelligent Infrastructure, where we provide managed access to hardware and storage resources in a way that we know is secure, and they can utilize that environment to do virtually anything that they need in a server environment. >> Dave, I want to get back to the GigaPOP, which you mentioned earlier; you're the managing director of the Indiana GigaPOP. What exactly is it? >> Well, the GigaPOP, and there are a number of GigaPOPs around the country, was really the aggregation facility for Indiana and all of the universities in Indiana to connect to outside resources. The GigaPOP has connections to Internet2, the commodity internet, ESnet, the Big Ten or BTAA network in Chicago. It's a way for all universities in Indiana to connect to a single source to allow them to connect nationally to research organizations. >> And what are the benefits of having this collaboration of universities?
>> If you think of a researcher at Indiana who wants to do something with a researcher in Wisconsin, they both connect to their research networks in Wisconsin and Indiana, and they have essentially a direct connection. There's no commodity internet, there's no throttling of capacity. Both networks and the interconnects, because we use Internet2, are essentially unthrottled access for the researchers to do anything they need to do. It's secure, it's fast, easy to use; in fact, so easy they don't even know that they're using it. We just manage the networks and configure them in a way that's the path of least resistance, and that's the path traffic will take. And that's nationally. There are lots of these that are interconnected in various ways. I do want to get back to the labor point, just for a moment. (laughs) Because... >> You're here to claim you're not violating any labor laws. Is that what you're going to be? >> I'm here to hopefully hire, get more people to be interested in coming to IU. >> Stop by the booth. >> It's a great place to work. >> Exactly. >> We hire lots of interns, and in the network space hiring really experienced network engineers is really hard to do; it's hard to attract people. And these days when you can work from anywhere, you don't have to be any place to work for anybody. We try to attract as many students as we can. And really we're exposing 'em to an environment that exists in very few places. Tens of thousands of wireless access points, big fast networks, interconnections to national and international networks. We support the NOAA network, which supports satellite systems and secure traffic. It really is a very unique experience and you can come to IU, spend lots of years there and never see the same thing twice.
We think we have an environment that's really a good way for people to come out of college, graduate school, work for some number of years and hopefully stay at IU, but if not, leave and get a good job and talk well about IU. In fact, the wireless network today here at SC was installed and is managed by a person who manages our campus wireless network, James Dickerson. That's the kind of opportunity we can provide people at IU. >> Aaron, I'd like to ask, you hear a lot about everything moving to the cloud these days, but in the HPC world I don't think that move is happening as quickly as it is in some areas. In fact, there's a good argument some workloads should never move to the cloud. You're having to balance these decisions. Where are you on the thinking of what belongs in the data center and what belongs in the cloud? >> I think our approach has really been specific to what the needs are. As an institution, we've not pushed all our chips in on the cloud, whether it be for high performance computing or otherwise. It's really looking at what the specific need is and addressing it with the proper solution. We made an investment several years ago in a data center internally, and we're leveraging that through the Intelligent Infrastructure that I spoke about. But really it's addressing what the specific need is and finding the specific solution, rather than going all in in one direction or another. I dunno if Jetstream is something that you would like to bring up as well. >> By having our own data center and having our own facilities, we're able to compete for NSF grants and work on projects that provide shared resources for the research community. Jetstream is a project that does that. Without a data center and without the ability to work on large projects, we don't have any of that. If you don't have that, then you're dependent on someone else. We like to say that what we are proud of is that people come to IU and ask us if they can partner on our projects.
Without a data center and those resources, we are the ones who have to go out and say, can we partner on your project? We'd like to be the leaders in that space. >> I wanted to kind of double click on something you mentioned. Couple of things. Historically IU has been, I'm sure, closely associated with Chicago. You think of what students are thinking of doing when they graduate? Maybe they're going to go home, but the sort of center of gravity is Chicago. You mentioned, especially post pandemic, the idea that you can live anywhere. Not everybody wants to live in Manhattan or Santa Clara. And of course, technology over decades has given us the ability to do things remotely, and IU is plugged into the globe, doesn't matter where you are. But have you seen, either during or post pandemic, 'cause we're really in the early stages of this. Are you seeing that? Are you seeing people say, hey, thinking about their family, where do I want to live? Where do I want to raise my family? I'm in academia and no, I don't want to live in Manhattan. Hey, we can go to IU and we're plugged into the globe. And then students in California, we see this; there's some schools on the central coast where people loved living there when they were in college but there was no economic opportunity there. Are you seeing a shift? Are basically houses in Bloomington becoming unaffordable because people are saying, you know what, I'm going to stay here? What does that look like? >> I mean, for our group there are a lot of people who do work from home, have chosen to stay in Bloomington. We have had some people who for various reasons want to leave. We want to retain them, so we allow them to work remotely. And that has turned into a tool for recruiting. The kid that graduates from Caltech doesn't want to stay at Caltech in California; we have an opportunity now, he can move to wherever between here and there and we can hire him to do work. We love to have people come to Indiana.
We think it is a unique experience, Bloomington, Indianapolis are great places. But I think the reality is, we're not going to get everybody to come live here and be a Hoosier, so how do we get them to come and work at IU? In some ways it's disappointing when we don't have buildings full of people, but 40 panes in a Zoom or Teams window is not quite the same thing. But I think this is what we're going to have to figure out, how do we make this kind of environment work. >> Last question here, give you a chance to put in a plug for Indiana University. For those data scientists, those researchers who may be open to working somewhere else, why would they come to Indiana University? What's different about what you do from what every other academic institution does, Aaron? >> Yeah, I think a lot of what we just talked about today; from a networks perspective, we're plugged in globally. And if you look beyond the networks, I think there are tremendous opportunities for folks to come to Bloomington and experience some bleeding edge technology and work with some very talented people. I've been amazed; I've been at IU for 20 years and as I look at our peers across higher ed, well, I don't want to say they're not doing as well, but I do want to brag about how well we're doing in terms of organizationally addressing things like security in a centralized way that really puts us in a better position. We're just doing a lot of things that I think some of our peers are catching up to and have been catching up to over the last 10, 12 years. >> And I think the sheer scale of IU goes unnoticed at times. IU has the largest medical school in the country. One of the largest nursing schools in the country. And people just kind of overlook some of that. Maybe we need to do a better job of talking about it. But for those who are aware, there are a lot of opportunities in life sciences, healthcare, the social sciences. IU has the largest logistics program in the world.
We teach more languages than anybody else in the world. The varying kinds of things you can get involved with at IU, including networks, are I think pretty unparalleled. >> Well, making the case for high performance computing in the Hoosier State. Aaron, Dave, thanks very much for joining us; you make a great case. >> Thank you. >> Thank you. >> We'll be back right after this short message. This is theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Aaron | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
IU | ORGANIZATION | 0.99+ |
Indiana | LOCATION | 0.99+ |
Dave Jent | PERSON | 0.99+ |
Aaron Neal | PERSON | 0.99+ |
Wisconsin | LOCATION | 0.99+ |
Chicago | LOCATION | 0.99+ |
Paul Gill | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Manhattan | LOCATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
Bloomington | LOCATION | 0.99+ |
Dallas | LOCATION | 0.99+ |
James Dickerson | PERSON | 0.99+ |
three weeks | QUANTITY | 0.99+ |
35,000 students | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
Indiana University | ORGANIZATION | 0.99+ |
Caltech | ORGANIZATION | 0.99+ |
Santa Clara | LOCATION | 0.99+ |
each | QUANTITY | 0.99+ |
IU | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
NSF | ORGANIZATION | 0.99+ |
twice | QUANTITY | 0.99+ |
40 | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
Hoosier State | LOCATION | 0.99+ |
BTAA | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
pandemic | EVENT | 0.98+ |
both | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
OmniSOC | ORGANIZATION | 0.98+ |
10 years ago | DATE | 0.98+ |
Indiana Office of Technology | ORGANIZATION | 0.98+ |
one laptop | QUANTITY | 0.97+ |
Esnet | ORGANIZATION | 0.97+ |
six, seven devices | QUANTITY | 0.97+ |
GlobalNOC | ORGANIZATION | 0.96+ |
Big Ten | ORGANIZATION | 0.96+ |
single source | QUANTITY | 0.95+ |
one direction | QUANTITY | 0.93+ |
Jet Stream | ORGANIZATION | 0.93+ |
several years ago | DATE | 0.92+ |
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
>>You can put this in a conference. >>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with theCUBE, live from, from Supercomputing 2022. David, my cohost, how you doing? Exciting. Day two. Feeling good. >>Very exciting. Ready to start off the >>Day. Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >>Having us, >>For having us. I'm excited that you're starting off the day because we've been hearing a lot of rumors about ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. Y'all seem all in on ethernet. Tell us about that. Armando, why don't you start? >>Yeah. I mean, when you look at ethernet, customers are asking for flexibility and choice. So when you look at HPC, you know, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really the commercial and the enterprise customers. And not everybody wants to be in the top 500. What they want to do is improve their job time and improve their latency over the network. And when you look at ethernet, you kinda look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for ethernet in that space and, and those types of jobs. >>I love that. Pete, you wanna elaborate? Yeah, yeah, >>Yeah, sure. I mean, I think, you know, one of the biggest things you find with Ethernet for HPC is that, you know, if you look at where the different technologies have gone over time, you know, you've had old technologies like, you know, ATM, SONET, FDDI, you know, and pretty much everything has now kind of converged toward ethernet. I mean, there's still some technologies such as, you know, InfiniBand, Omni-Path that are out there. Yeah. But basically those are single-source at this point. So, you know, what you see is that there is a huge ecosystem behind ethernet.
And you see also that, because ethernet is used in the rest of the enterprise and in the cloud data centers, it is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia, you know, into, you know, into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >>So, so what's this, what is, what's the state of the art for ethernet right now? What, you know, what's, what's the leading edge, what's shipping now and what and what's in the near future? You, you're with Broadcom, you guys design this stuff. >>Yeah, yeah. Right. Yeah. So leading edge right now, I got a couple, you know, right here
on theCUBE stage. Yeah. >>So this is Tomahawk 4. So this is what is in production, is shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. Okay. Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6T. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk 5. So this is 51.2 terabits per second. So double the bandwidth of, you know, any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six in efficiency. Wow. Cause if you want, I can go into that, but why >>Not? Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T5 with, with some like Terminator kind of character. Cause that would be cool if it's not true. Don't just don't say anything. I just want, I can actually shift visual >>It into a terminator. So.
And so what we wanna do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >>Yeah. Peter, what, talk about that partnership. What, what, what does that look like? Is it, is it, I mean, are you, you working with Dell before the, you know, before the T six comes out? Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? >>No, we've had a very long partnership both on the hardware and the software side. You know, Dell has been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, you know, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten, you know, very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, you know, Dell can take it and, you know, deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see again in both the hardware and the software. >>So, so I, I'm, I'm just, I'm fascinated by, I I, I always like to know kind like what Yeah, exactly. Exactly right. Look, you, you start talking about the largest super supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be 2 million CPUs, 2 million CPU cores, yeah. Ex alop of, of, of, of performance. What are the, what are the outward limits of T five in switches, building out a fabric, what does that look like? 
What are the, what are the increments in terms of how many, and I know it, I know it's a depends answer, but, but, but how many nodes can you support in a, in a, in a scale out cluster before you need another switch? What does that increment of scale look like today? >>Yeah, so I think, so this is 51.2 terras per second. What we see the most common implementation based on this would be with 400 gig ethernet ports. Okay. So that would be 128, you know, 400 giggi ports connected to, to one chip. Okay. Now, if you went to 200 gig, which is kind of the state of the art for the Nicks, you can have double that. Okay. So, you know, in a single hop you can have 256 end nodes connected through one switch. >>So, okay, so this T five, that thing right there inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what is, what does that, what's the form factor look like for that, for where that T five sits? Is there just one in a chassis or you have, what does that look >>Like? It tends to be pizza boxes these days. Okay. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza, pizza boxes. And you can have composable systems where, you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to these days, what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the, the line card. >>Okay. >>So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a two R U with 64 OSF P ports. And often each of those OSF p, which is an 800 gig e or 800 gig port, we've broken out into two 400 gig quarts. Okay. So yeah, in two r u you've got, and this is all air cooled, you know, in two re you've got 51.2 T. 
We do see some cases where customers would like to have different optics, and they'll actually deploy a four U just so that way they have the face place density, so they can plug in 128, say qsf P one 12. But yeah, it really depends on which optics, if you wanna have DAK connectivity combined with, with optics. But those are the two most common form factors. >>And, and Armando ethernet isn't, ethernet isn't necessarily ethernet in the sense that many protocols can be run over it. Right. I think I have a projector at home that's actually using ethernet physical connections. But what, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center ethernet, or, or is this, you know, RDMA over converged ethernet? What, what are >>We talking about? Yeah, so our rdma, right? So when you look at, you know, running, you know, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on ethernet, if you look at NPI is officially, you know, built to, Hey, it was designed to run on InfiniBand, but now what you see with Broadcom and the great work they're doing now, we can make that work on ethernet and get, you know, it's same performance. So that's huge for customers. >>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a, a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of ethernet in hpc in terms of AI and ml. Where, where do you think we're gonna be next year or 10 years from now? 
>>You wanna go first or you want me to go first? I can start. >>Yeah. Pete feels ready. >>So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual, humble brag there. That was great. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of, of Ethan is like, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology >>More who, >>So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, you know, doubling in bandwidth. I think that, you know, they're evolving protocols. You know, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on rdma, you know, for the supercomputing, the a AIML workloads. But we do see that, you know, as you have, you know, a mix of the applications running on these end nodes, maybe they're interfacing to the, the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be doubling a bandwidth over time evolution of the protocols. I mean, I expect that Rocky is probably gonna evolve over time depending on the a AIML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-pack optics. So, you know, right now this chip is all, all the balls in the back here, there's electrical connections. How >>Many are there, by the way? 9,000 plus on the back of that >>352. >>I love how specific it is. It's brilliant. >>Yeah. 
So right now, you know, all the SerDes, all the signals are coming out electrically based, but we've actually shown, we have a version of Tomahawk 4 at 25.6 terabits per second that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5, nice, where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. So I see, you know, there's the bandwidth, there's radix increasing, protocols, different physical connectivity. So I think there's, you know, a lot of things throughout, and the protocol stack's also evolving. So, you know, a lot of excitement, a lot of new technology coming to bear. >>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay. >>All right. >>So I think of individual discrete physical connections to the back of those balls. Yeah. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in that many optical connections? What's the mapping there? What does that look like? >>So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out. So you'd actually have, you know, 512 fiber pairs coming out. So you'd have, you know, basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels, and it would wind up being on 128 actual fiber pairs, because >>It's miraculous, essentially. I know. Yeah, yeah. So, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus Ethernet.
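Pete's channel and fiber-pair counts hang together arithmetically. A hedged sketch (the 100G-per-lane rate and the four-channels-per-fiber-pair assumption for the FR4 optics are inferred from his 512-channels-on-128-pairs remark, not separately confirmed):

```python
aggregate_tbps = 51.2   # Tomahawk 5 switching capacity from the conversation
lane_gbps = 100         # per-channel signaling rate (assumed 100G lanes)

channels = round(aggregate_tbps * 1000 / lane_gbps)
print(channels)         # 512, matching Pete's channel count

channels_per_fiber_pair = 4  # FR4 assumption: four wavelengths share a fiber pair
fiber_pairs = channels // channels_per_fiber_pair
print(fiber_pairs)      # 128, matching the fiber-pair count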
I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, you know, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive, AI, ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >>Yeah, I mean, that's a big thing. I think one of the biggest things that Ethernet has, again, is that, you know, the data centers, the networks within enterprises, within clouds, right now run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is, you know, drop in clusters that are connected with the same networking technology. You know, so I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, train your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet. It's gonna give you the same performance. And actually, in some cases, we've seen better performance than we've seen with Omni-Path, you know, better than InfiniBand. >>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >>Well, Pete hit on a big thing: bandwidth, right? So when you look at training a model, okay, so when you go and train a model in AI, you need to have a lot of data in order to train that model, right?
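That data-volume point reduces to transfer time = dataset size / link bandwidth. A toy sketch (the dataset size and link speeds are illustrative round numbers, not figures from the interview):

```python
def transfer_seconds(dataset_gb: float, link_gbps: float) -> float:
    """Time to move a dataset across a link, ignoring protocol overhead."""
    return dataset_gb * 8 / link_gbps  # gigabytes to gigabits, then divide by line rate

dataset_gb = 10_000  # a hypothetical 10 TB training set
for link_gbps in (100, 200, 400):
    minutes = transfer_seconds(dataset_gb, link_gbps) / 60
    print(f"{link_gbps} Gb/s pipe: {minutes:.1f} minutes just to move the data once")
```

Doubling the pipe halves the data-movement time, which is the "bigger pipe, faster training" argument in its simplest form; real training pipelines overlap transfers with compute, so the benefit in practice is workload-dependent.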
So what you do is, essentially, you build a model, you choose whatever neural network you wanna utilize. But if you don't have a good data set to train that model on, you can't essentially train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And essentially, you could maybe do it on CPUs only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage. Maybe it's some new way you design a product. But that's the benefit of speed: you want faster, faster, faster. >>It's all about making it faster and easier for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. Making >>Me hungry. >>I know, exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think, just came from a list. So we have a Trident product line, ah, a missile product line. And Tomahawk is being kind of like, you know, the bigger and badder missile. So, oh, okay. >>Love this. Yeah, well, >>I mean, so you let your engineers, you get to name it. >>Had to ask. It's >>Collaborative. Oh, good. I wanna make sure everyone's in sync with it. >>So, just so we're clear, it's not the Aquaman trident. >>Right. >>The steak Tomahawk. >>I think we're good now. Now that we've cleared that up. >>Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet in HPC. David Nicholson, always a pleasure to share the stage with you.
And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson. Thanks for joining us.
SUMMARY :
Savannah Peterson and David Nicholson of theCUBE talk with Armando Acosta of Dell Technologies and Peter Del Vecchio of Broadcom about Ethernet in HPC. They cover running MPI and RDMA (RoCE) efficiently on Ethernet as an alternative to InfiniBand, switch silicon that has doubled in bandwidth every 18 to 24 months, the 51.2 Tb/s Tomahawk 5 and co-packaged optics delivering 512 channels over 128 fiber pairs, and the operational advantage of running HPC clusters on the same Ethernet fabric as the rest of the data center. The conversation closes with the bandwidth demands of AI model training and the origin of the Tomahawk name.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
August | DATE | 0.99+ |
2019 | DATE | 0.99+ |
Pete | PERSON | 0.99+ |
128 | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
2 million | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
400 gig | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
400 gig | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
30 speeds | QUANTITY | 0.99+ |
50 gig | QUANTITY | 0.99+ |
one chip | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
512 channels | QUANTITY | 0.99+ |
9,000 | QUANTITY | 0.99+ |
seven times | QUANTITY | 0.99+ |
800 gig | QUANTITY | 0.99+ |
Armando | PERSON | 0.99+ |
24 months | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
9,000 plus | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Peter Del Vecchio | PERSON | 0.99+ |
single source | QUANTITY | 0.99+ |
North America | LOCATION | 0.98+ |
double | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Both | QUANTITY | 0.98+ |
Hawk four | COMMERCIAL_ITEM | 0.98+ |
three | QUANTITY | 0.98+ |
Day two | QUANTITY | 0.97+ |
next year | DATE | 0.97+ |
hpc | ORGANIZATION | 0.97+ |
Tomahawk five | COMMERCIAL_ITEM | 0.97+ |
Dell Technologies | ORGANIZATION | 0.97+ |
T six | COMMERCIAL_ITEM | 0.96+ |
two | QUANTITY | 0.96+ |
one switch | QUANTITY | 0.96+ |
Texas | LOCATION | 0.96+ |
six efficiency | QUANTITY | 0.96+ |
25 point | QUANTITY | 0.95+ |
Armando | ORGANIZATION | 0.95+ |
50 | QUANTITY | 0.93+ |
25.6 tets per second | QUANTITY | 0.92+ |
51.2 terabytes per second | QUANTITY | 0.92+ |
18 | QUANTITY | 0.91+ |
512 fiber pairs | QUANTITY | 0.91+ |
two fascinating guests | QUANTITY | 0.91+ |
hundred gig | QUANTITY | 0.91+ |
four lanes | QUANTITY | 0.9+ |
HPC | ORGANIZATION | 0.9+ |
51.2 T. | QUANTITY | 0.9+ |
InfiniBand | ORGANIZATION | 0.9+ |
256 end | QUANTITY | 0.89+ |
first | QUANTITY | 0.89+ |
Armando Acosta | PERSON | 0.89+ |
two different network technologies | QUANTITY | 0.88+ |
Dr. Dan Duffy and Dr. Bill Putman | SuperComputing 22
>>Hello >>Everyone, and welcome back to Dallas, where we're live from Supercomputing. My name is Savannah Peterson, joined with my co-host David, and we have a rocket of a show for you this afternoon. The doctors are in the house, and we are joined by NASA, ladies and gentlemen. So excited. Please welcome Dr. Dan Duffy and Dr. Bill Putman. Thank you so much for being here, guys. I know this is kind of last minute. How is it to be on the show floor? What's it like being NASA here? >>It's exciting. We haven't been here for three years, so this is actually really exciting, to come back and see everybody, to see the showroom floor, see the innovations that have happened over the last three years. It's pretty exciting. >>Yeah, it's great. And so, because your jobs are so cool, and I don't wanna even remotely give too little of the picture, or not do it justice, could you give the audience a little bit of background on what you do? As I think you have one of the coolest jobs ever. >>I appreciate that. I run the High Performance Computing Center at NASA Goddard for science. It's high performance information technology. So we do everything from networking to security, to high performance computing, to data sciences. Artificial intelligence and machine learning is huge for us now. Yeah, large amounts of data, big data sets. But we also do scientific visualizations, and then cloud and commercial cloud computing, as well as on-premises cloud computing. And quite frankly, we support a lot of what Bill and his team do. >>Bill, why don't you tell us what your team >>Does? Yeah, so I'm an Earth scientist. I work as the associate chief at the Global Modeling and Assimilation Office. And our job is to really, you know, maximize the use of all the observations that NASA takes from space, and build that into a coherent, consistent physical system of the Earth, right?
And we're focused on utilizing the HPC that Dan and the folks at the NCCS provide to us, to the best of our abilities, to integrate those observations, you know, on time scales from hours to days to seasonal to monthly time scales. That's the essence of our focus at the GMAO. >>Casual. Modeling all of NASA's Earth data. That in itself, as a sentence, is pretty wild. I imagine you're dealing with a ton of data. >>Oh, massive amounts of data. Yes. >>Probably, I mean, as much as one could, now that I'm thinking about it. I mean, and especially with how far things have to travel. Bill, sticking with you, just to open us up: what technology here excites you the most about the future, and will make your job easier? Let's put it that way. >>To me, it's the accelerator technologies, right? So the limiting factor for us as scientists is how fast we can get an answer. And if we can get our answer faster through accelerated technologies, you know, with the support of the NCCS and the computing centers, but also the software engineers enabling that for us, then we can do more, right? And push the questions even further, you know. So once we've gotten fast enough to do what we want to do, there's always something next that we wanna look for. So, >>I mean, at NASA you have to exercise such patience, whether that be data coming back, images from a rover, doesn't matter what it is. Sometimes there's a lot of time, days, hours, years, depending on the situation, right? I really admire that. What about you, Dan? What's got you really excited about the future here? >>So Bill talked about the accelerated technology, which is absolutely true, and it's needed to get us not only to the point where we have the compute resources to do the simulations that Bill wants to do, but also to do it in an energy-efficient way.
But it's really the software frameworks that go around that. The software frameworks, the technology dealing with how to use those in an energy-efficient and most effective way, is extremely important. And that's some of what I'm really here to try to understand better: how can I support these scientists with not just the hardware, but the software frameworks by which they can be successful? >>Yeah. We've had a lot of kind of philosophical discussion about this, the difference between the quantitative increases in power in computing that we're seeing, versus the question of whether or not we need truly qualitative changes moving forward. Where do you see the limits of, you know, if you're looking at the ability to gather more data and process more data more quickly? What you can do with that data changes when you're getting updates every second versus every month; that seems pretty obvious. Is there a near-term target that you have specifically, where once you reach that target, if you weren't thinking ahead of that target, you'd kind of be going, okay, well, we solved that problem, we're getting the data in so fast that you can ask me, what is the temperature in this area? And you can go, oh, well, huh, an hour ago the data said this. Beyond that, do you need a qualitative change in our ability to process information and tease insight out of chaos? Or do you just need more quantity, to be able to get to the point where you can do things like predict weather six months in advance? What are your thoughts on that? >>Yeah, it's an interesting question, right? And you ended it with predicting weather six months in advance, and actually I was thinking the other way, right? I was thinking going to finer and finer scales, and shorter time scales, when you talk about having data more frequently, right?
So one of the things that I'm excited about as a modeler is going to higher resolution and representing smaller-scale processes. At NASA, we're interested in observations that are global. So our models are global, and we'd like to push those to as fine a resolution as possible, to do things like severe storm predictions and so forth. So the faster we can get the data, and the more data we can have in that area, the better our ability to do that as well. So, >>And your background is in meteorology, right? >>Yes, I'm a meteorologist. >>Excellent. Okay. Yeah, >>So I have to ask a question, and I'm sure all the audience cares about this. And I went through this when I was talking about the GOES satellites as well. What is it about weather that makes it so hard to predict? >>Oh, it's the classic chaos problem, the butterfly effect problem, and it's just true. You know, you always hear the story of a butterfly in Africa flaps its wings, and the weather changes in New York City, and it's just, computers are an excellent example of that, right? So we have a model of the Earth; we can run it two times in a row and get the exact same answer. But if we flip a bit somewhere, then the answer changes 10 days later, significantly. So it's a really interesting problem. So, >>Yeah, it's fascinating. It is.
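Bill's flipped-bit example is easy to reproduce in miniature. A hedged sketch using the logistic map as a stand-in for a weather model (the map has nothing to do with NASA's actual codes; it simply shares their sensitive dependence on initial conditions):

```python
def logistic_trajectory(x0: float, steps: int, r: float = 3.9) -> float:
    """Iterate the chaotic logistic map x -> r * x * (1 - x) from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x0 = 0.4
perturbed = x0 + 1e-15  # nudge the initial state by roughly one bit

for steps in (10, 40, 80):
    a = logistic_trajectory(x0, steps)
    b = logistic_trajectory(perturbed, steps)
    print(f"after {steps:2d} steps, trajectories differ by {abs(a - b):.3e}")
```

The tiny perturbation grows exponentially until the two runs are completely decorrelated, which is exactly why identical models with one flipped bit disagree significantly 10 days out.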
Sticking with you for a second, Dan. So you're at the Center for Climate Simulation. Is that the center that's gonna help us navigate what happens over the next decade? >>Okay, so I, no one center is gonna help us navigate what's gonna happen over the next decade or the next 50 or a hundred years, right. It's gonna be everybody together. And I think NASA's role in that is really to pioneer the, the, the models that that bill and others are doing to understand what's gonna happen in not just the seasonal sub, but we also work with G, which is the God Institute for Space Studies. Yeah. Which does the decatal and, and the century long studies. Our, our job is to really help that research, understand what's happening with the client, but then feed that back into what observations we need to make next in order to better understand and better quantify the risks that we have to better quantify the mitigations that we can make to understand how and, and, and affect how the climate is gonna go for the future. So that's really what we trying to do. We're trying to do that research to understand the climate, understand what mitigations we can have, but also feedback into what observations we can make for the future. >>Yeah. And and what's the partnership ecosystem around that? You mentioned that it's gonna take all of us, I assume you work with a lot of >>Partners, Probably both of you. I mean, obviously the, the, the federal agencies work huge amounts together. Nasa, Noah is our huge partnerships. Sgs, a huge partnerships doe we've talked to doe several times this, so this, this this week already. So there's huge partnerships that go across the federal agency. We, we work also with Europeans as much as we can given the, the, the, you know, sort of the barriers of the countries and the financials. But we do collaborate as much as we can with, And the nice thing about NASA, I would say is the, all the observations that we take are public, they're paid for by the public. 
They're public; everybody can download them, anybody around the world. So that's also, and they're global measurements, as Bill said; they're not just regional. >>Do you have specific, when you think about improving your ability to gain insights from the data that's being gathered, do you set out specific milestones that you're looking for? Like, you know, I hope by June of next year we will have achieved a place where we are able to accomplish X. Bill, do you, >>What milestones do we have here? >>Yeah. Are you sort of keeping track of things that way? Do you think of very specific targets like that? Or is it just so fluid that as long as you're making progress towards the future, you feel okay? >>No, I would say we absolutely have milestones that we like to keep track of, especially from the modeling side of things, right? So whether it's observations that exist now that we want to use in our system, milestones to getting those observations integrated in, but also thinking even further ahead, to the observations that we don't have yet. So we can use the models that we have today to simulate the kinds of observations that we might want in the future, which can help us do things that we can't do right now. So those missions are aided by the work that we do at the GMAO and the NCCS. But, >>Okay, so if we extrapolate really to the what-if future, it's really trying to understand the entire Earth system as best as we can. So all the observations coming in, like you said, in near real time, feeding that into an Earth system model, and being able to make short-term, midterm, or even long-term predictions with some degree of certainty. And that may be things like climate change, or, maybe even more important, shorter-term effects of severe weather. Yeah. Which is very important.
And so we are trying to work towards that high-resolution, immediate-impact model that we can, you know, really share with the world, and share those results as best we can. >>Yeah. I have a quick follow-up on that. >>I bet we both did. >>So if you think about AI and ML, artificial intelligence and machine learning, something that, you know, people talk about a lot. There's the concept of teaching a machine to go look for things, call it machine learning. A lot of it's machine teaching: we're saying, you know, hit the rack on this side with a stick, or the other side with the stick, to get it to kind of go back and forth. Do you think that humans will be able to guide these systems moving forward enough to tease out the insights that we want? Or do you think we're gonna have to rely on what people think of as artificial intelligence, to be able to go in with this massive amount of information, with an almost infinite number of variables, and have the AI figure out that, you know what, it was the butterfly. It really was the butterfly. We all did models with it, but you understand the nuance that I'm saying. It's like, we think we know what all the variables are, and that it's chaotic because there are so many variables and so much data, but maybe there's something we're not taking into >>Account. Yeah, I'm sure that's absolutely the case. And I'll start, and let Bill jump in here. Yeah, there are a lot of nuances with AI/ML. And so the real approach to get to where we want to be with this Earth system model approach is a combination of both: AI/ML-trained models, as best as we can and in as unbiased a way as we can.
And there's a big conversation we have around that, but also with a physics-based, physical model as well. Those two combined, with the humans, the experts, in the loop. We're not just gonna ask the artificial intelligence to predict anything and everything. The experts need to be in the loop to guide the training, as best as we can, in an unbiased, equitable way, but also to interpret the results, and not just hand everything over to the AI. That's the combination in that Earth system model that we really wanna see: the future is a combination of AI/ML with physics-based models. >>But there's an obvious place for AI and ML in the modeling world, and that is in the parameterizations, the estimations that we have to do in our systems, right? So when we think about the Earth system and modeling the Earth system, there are many things, like the equations of motion and thermodynamics, that have fixed equations that we know how to solve on a computer. But there are a lot of things that happen physically in the atmosphere that we don't have equations for, and we have to estimate them. And machine learning, through the use of high-resolution models or observations, in training the models to understand and represent that, yeah, that's the place where it's really useful >>For us. There's so many factors, but >>We have to, but we have to make sure that we have the physics in that machine learning, in that training. So physics-informed training is very important. So we're not just gonna go and let a model go off and do whatever it wants. It has to be constrained within physical constraints that the experts know. >>Yeah. And with the wild number of variables that affect our Earth, quite frankly. Yeah. Which is, geez, which is insane. My god.
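The physics-constrained training idea Dan and Bill describe can be caricatured in a few lines. A deliberately toy sketch (the data, the linear model, and the penalty weight are all invented for illustration; NASA's actual physics-informed training is far more sophisticated):

```python
# Toy physics-informed fit: estimate a diffusion-like coefficient a in y = a * x
# from noisy data, with a penalty keeping a physically non-negative.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

def loss(a, weight=100.0):
    mse = sum((a * x - y) ** 2 for x, y in data) / len(data)
    physics_penalty = weight * min(a, 0.0) ** 2  # punish unphysical negative values
    return mse + physics_penalty

# Plain brute-force search over candidate coefficients, in place of real training.
candidates = [i / 100 for i in range(-300, 301)]
best = min(candidates, key=loss)
print(f"fitted coefficient: {best:.2f}")
```

The penalty term is the whole point: the optimizer is free to fit the data, but only inside the region the physics allows, which is the constraint Dan insists the experts must impose.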
So what technology, or what advancement, needs to happen for your jobs to get easier, faster, for our ability to predict to be even more successful than it is currently? >>You know, I think for me, the vision that I have for the future is that at some point, you know, all data is centrally located, essentially shared. Our applications are then services that sit around all that data. I don't have to sit as a user and worry about, oh, is all this data in place before I run my application? It's already there, it's already ready for me. My service is prepared, and I just launch it out on that service. But that, coupled with the performance that I need to get the result that I want in time. And I don't know when that's gonna happen, but at some point it might, you know. >>I don't know, rooting for you. >>So there are a lot of technologies we can talk about. What I'd like to mention is open science. So NASA is really trying to make a push and transformation towards open science. 2023 is gonna be the year of open science for NASA. And what does that mean? It means a lot of what Bill just said: that we have equity and fairness and accessibility, and you can find the data. It's FAIR data, you know, findability, accessibility, reproducibility, and I forget what the I stands for. But these are tools and things that we, as computing centers, including all the HPC centers here, as well as the scientists, need to support, to be as transparent as possible with the data sets and the research that we're doing. And that's where I think the best thing is gonna be: if we can get this data out there that anybody can use, in an equitable way and as transparent as possible, that's gonna eliminate, in my opinion, the bias over time, because mistakes will be found, and mistakes will be corrected over time. >>I love that. Yeah. The open source science end of this.
No, it's great. And the more people that have access, the better. People in the academic world, I find, especially, don't know what's going on in the private sector, and vice versa. And so I love that you just brought that up. Closing question for you, because I suspect there might be some members of our audience who maybe have fantasized about working at NASA. You've both been working there for over a decade. Is it as cool as we all think it is on the outside? >>I mean, it's definitely pretty cool. >>You don't have to be modest about it, you know. >>I mean, just being at Goddard, being at the center where they built the James Webb Space Telescope, and you can go to that clean room and see it, it's just fascinating. So it's really an amazing opportunity. >>Yeah. So NASA Goddard as a center has, you know, information technologists; it has engineers, it has scientists, it has support staff, support team members. We have built more things, more instruments that have flown into space, than any other place in the world. The James Webb, we were part of that, part of a huge group of people that worked on James Webb. And James Webb came through and was assembled in our clean room; it's one of the biggest clean rooms in the world. And we all took opportunities to go over and take selfies with it as they put those golden mirrors on it. Yeah, it was awesome. It was amazing. And to see what the James Webb has done in such a short amount of time, the successes it's had, is just incredible. Now, I'm not part of the James Webb team, but to be at the same center, to listen to scientists like Bill talk about their work, to listen to scientists who talk about James Webb, that's what's inspiring. And we get that all the time. >>And to have the opportunity to work with the astronauts that serviced the Hubble Telescope, you know, these things are, >>That's literally giving me goosebumps right now.
I'm sitting over here. >>Just an amazing opportunity. And, woo. >>Well, Dan, Bill, thank you both so much for being on the show. I know it was a bit last minute, but I can guarantee we all got a lot out of it, David and I both. I know I speak for us and the whole Cube audience, so thank you. We'll have you anytime you wanna come talk science on theCUBE. Thank you all for tuning in to our Supercomputing coverage here, live in Dallas. My name is Savannah Peterson. I feel cooler having sat next to these two gentlemen for the last 15 minutes, and I hope you did too. We'll see you again soon.
SUMMARY :
Savannah Peterson and David Nicholson are joined by Dr. Dan Duffy and Dr. Bill Putman of NASA Goddard. They discuss the NASA Center for Climate Simulation and the Global Modeling and Assimilation Office, integrating NASA's global observations into Earth system models, the chaos problem that limits weather prediction, pushing models to higher resolution with accelerator technologies, combining physics-based models with AI/ML under expert oversight, partnerships across NOAA, USGS, DOE, and European agencies, NASA's open science push for 2023, and what it's like to work at Goddard alongside the James Webb Space Telescope team.
Rajesh Pohani, Dell Technologies | SuperComputing 22
>> Good afternoon friends, and welcome back to Supercomputing. We're live here at theCUBE in Dallas. I'm joined by my co-host, David. My name is Savannah Peterson, and here is our fabulous guest. I feel like this is almost his show, to a degree, given his role at Dell. He is the Vice President of HPC over at Dell. Rajesh Pohani, thank you so much for being on the show with us. How you doing? >> Thank you guys. I'm doing okay. Good to be back in person. This is a great show. It's really filled in nicely today, and, you know, a lot of great stuff happening. >> It's great to be around all of our fellow hardware nerds. The Dell portfolio grew by three products, it did, I believe. Can you give us a bit of an intro on that? >> Sure. Well, yesterday afternoon and yesterday evening, we had a series of events that announced our new AI portfolio, artificial intelligence portfolio, you know, which will really help scale where I think the world is going in the future, with the creation of all this data and what we can do with it. So yeah, it was an exciting day for us. Yesterday we had a session over in a ballroom where we did a product announcement, and then in the evening had an unveil in our booth here at the Supercomputing conference, which was pretty eventful: cupcakes, you know, champagne, drinks, and, most importantly... Yeah, I know. Good time. >> Did you get the invite? >> No. But most importantly, some really cool new servers for our customers. >> Well, tell us about them. Yeah, so what's new? What's in the news? >> Well, you know, as you think about artificial intelligence and what customers are needing to do, and the way artificial intelligence is gonna change how, you know, frankly, the world works: we have now developed and designed new purpose-built hardware, new purpose-built servers for a variety of AI and artificial intelligence needs. We launched our first eight-way, you know, NVIDIA H100 and A100 SXM product.
Yesterday we launched a 4U four-way H100 product, and a 2U fully liquid-cooled Intel Data Center GPU Max server yesterday as well. So, you know, a full range of portfolio for a variety of customer needs. Depending on their use cases, what they're trying to do, their infrastructure, we're able to now provide, you know, servers and hardware that help, you know, meet those needs and those use cases. >> So I wanna double-click, you just said something interesting: water cooled. >> Yeah. >> Where does, at what point do you need to move in the direction of water cooling? And, you know, I know you mentioned, you know, GPU-centric, but talk about that balance between, you know, density and what you can achieve with the power that's going into the system. >> It all depends on what the customers are trying to accommodate, right? I think that there's a dichotomy that exists now between customers who have already, or are planning, liquid-cooled infrastructures and power distribution to the rack. So you take those two together, and if you have the power distribution to the rack, you wanna take advantage of the density; to take advantage of the density, you need to be able to cool the servers, and therefore liquid cooling comes into play. Now, you have other customers that either don't have the power to the rack or aren't ready for liquid cooling, and at that point, you know, they're not gonna want to, they can't, take advantage of the density. So there's this dichotomy in products, and that's why we've got our XE9640, which is a 2U dense liquid-cooled server, but we also have our XE8640, which is a 4U air-cooled, right, or liquid-assisted air-cooled, right? So depending on where you are on your journey, whether it's power infrastructure or liquid-cooling infrastructure, we've got the right solution for you that, you know, meets your needs.
You don't have to take advantage of the density, and the expense of liquid cooling, unless you're ready to do that. Otherwise, we've got this other option for you. And so that's really the dichotomy that's beginning to exist in our customers' infrastructures today. >> I was curious about that. So do you see, is there a category or a vertical that is more in the liquid-cooling zone, because that's a priority in terms of the density, or- >> Yeah, yeah. I mean, you've got your large HPC installations, right? Your large clusters that not only have the power but have, you know, the liquid-cooling density that they've built in. You've got, you know, federal government installations, you've got financial tech installations, you've got colos that are built for sustainability and density and space that can also take advantage of it. Then you've got others that are, you know, more enterprises, more in the mainstream of what they do, where, you know, they're not ready for that. So it just depends on the scale of the customer that we're talking about, what they're trying to do, and where they're doing it. >> So we hear, you know, we hear at the Supercomputing conference, and HPC is sort of the kind of trailing mini version of supercomputing, in a way, where maybe you have someone who doesn't need 2 million CPU cores, but maybe they need a hundred thousand CPU cores. So it's all a matter of scale. Can you identify kind of an HPC sweet spot right now, as Dell customers are adopting the kinds of things that you just announced? >> You know, I think- >> How big are these clusters at this point? >> Well, let, let me, let me hit something else first.
Yeah, I think people talk about HPC as something really specific, and what we're seeing now, with the, you know, vast amount of data creation, the need for computational analytics, the need for artificial intelligence, is that HPC is kind of morphing, right, into, you know, more and more general customer use cases. And so, where before you used to think about HPC as research and academics and computational dynamics, now, you know, there's a significant Venn diagram overlap with just regular artificial intelligence, right? And so that is beginning to change the nature of how we think about HPC. You think about the vast data that's being created. You've got data-driven HPC, where you're running computational analytics on this data that's giving you insights or outcomes or information. It's not just, hey, I'm running, you know, physics calculations or astronomical, you know, calculations. It is now expanding in a variety of ways, where it's democratizing into, you know, customers who wouldn't actually talk about themselves as HPC customers. And when you meet with them, it's like, well, yeah, but your compute needs are actually looking like HPC customers'. So let's talk to you about these products, let's talk to you about these solutions, whether it's software solutions, hardware solutions, or even purpose-built hardware, like we talked about. That now becomes the new norm. >> Customer feedback and community engagement is big for you. I know this portfolio of products was developed based on customer feedback, correct? >> Yep. So everything we do at Dell is customer driven, right? We want to drive, you know, customer-driven innovation, customer-driven value to meet our customers' needs. So yeah, we spent a while, right, researching these products, researching these needs, understanding: is this one product? Is it two products? Is it three products? Talking to our partners, right?
Driving our own innovation and IP, and then where they're going with their roadmaps, to be able to deliver kind of a harmonized solution to customers. So yeah, it was a good amount of customer engagement. I know I was on the road quite a bit talking to customers. You know, one of our products, you know, we almost named after one of our customers, right? I'm like, hey, we've talked about this. This is what you said you wanted. Now, he was representative of a group of customers, and we validated that with other customers. It's also a way of me making sure he buys it. >> Great, great. Yeah, sharing sales there. >> That was good. But you know, it's heavily customer driven, and that's where understanding those use cases and where they fit drove the various products. And, you know, in terms of capability, in terms of size, in terms of liquid versus air cooling, in terms of things like number of PCIe lanes, right, what the networking infrastructure was gonna look like: all customer driven, all designed to meet where customers are going in their artificial intelligence journey, in their AI journey. >> It feels really collaborative. I mean, you've got both the Intel and the Nvidia GPUs in your new products. There's a lot of collaboration between academics and the private sector. What has you most excited today about supercomputing? >> What it's going to enable. If you think about what artificial intelligence is gonna enable, it's gonna enable faster medical research, right? Genomics. The next pandemic, hopefully not anytime soon, we'll be able to diagnose, we'll be able to track it so much faster through artificial intelligence, right? The data that was created in this last one is gonna be an amazing source of research to go address stuff like that in the future and get to the heart of the problem faster. If you think about manufacturing and process improvement, you can now simulate your entire manufacturing process.
You don't have to run physical pilots, right? You can simulate it all, get 90% of the way there, which means either your factory process will get reinvented faster, or a new factory can get up and running faster. Think about retail, how retail products are laid out. You can use media analytics to track how customers go through the store, what they're buying. You can lay things out differently. You're not gonna have, in the future, people going around to test cell phone reception: "Can you hear me now? Can you hear me now?" You can simulate where customers are, their patterns, to ensure that the 5G infrastructure is set up, you know, to the maximum advantage. All of that through digital simulation, through digital twins, through media analytics, through natural language processing. Customer experience is gonna be better, communication's gonna be better. All of this stuff with, you know, using this data, training it, and then applying it is probably what excites me the most about supercomputing and really compute in the future. >> So on the hardware front, kind of digging down below the covers, you know, the surface a little more: Dell has been well known for democratizing things in IT, making them available at a variety of levels. Never a one-size-fits-all company, right? These latest announcements, it would be fair to say, represent sort of the tip of the spear in terms of high performance. What about RPC, regular performance computing? Where's the overlap? 'Cause, you know, we're in this season where we've got AMD and Intel leapfrogging one another, new bus architectures; the, you know, the connectivity that's plugged into these things is getting faster and faster and faster. So from a Dell perspective, where does, my term, RPC, regular performance computing, end and HPC begin? Are you seeing people build stuff on kind of general-purpose clusters also?
>> Well, sure. I mean, you can run a good amount of artificial intelligence acceleration on, you know, high-core-count CPUs without acceleration; you can do it with PCIe accelerators; and then you can do it with some of the very specific high-performance accelerators, like the Intel, you know, Data Center Max GPUs, or NVIDIA's A100 or H100. So there are these scale-up opportunities. I mean, if you think about, you know, our mission to democratize compute, not just HPC but general compute, it's about making it easier for customers to implement, to get the value out of what they're trying to do. So we focus on that with, you know, reference designs or validated designs that take out a good amount of the time that customers would have to spend on their own, right? We can cut by six to 12 months the time for customers, and I'm gonna use an HPC example and then I'll come back to your regular performance compute, by us doing the work: us, you know, determining the configuration, determining the software packages, testing it, tuning it, so that by the time it gets to the customer, they get to take advantage of the expertise of Dell engineers and Dell scale, and they are ready to go that much faster. The challenge with AI, when you talk to customers, is they all know what it can lead to and the benefits of it. Sometimes they just don't know how to start. We are trying to make it easier for customers to start, whether it is using regular, you know, non-optimized, non-specialized compute, or as you move up the value stack into compute capability. Our goal is to make it easier for customers to start, to get on their journey, and to get to what they're trying to do faster. So where do I see, you know, regular performance compute? You know, they go hand in hand, right? As you think about what customers are trying to do.
And I think a lot of customers, like we talked about, don't actually think about what they're trying to do as high performance computing. They don't think of themselves as one of those specialized institutions, as HPC, but they're on this glide path to greater and greater compute needs and greater and greater compute attributes that merge kind of regular performance computing and high performance computing, to where it's hard to really draw the line, especially when you get to data-driven HPC. Data's everywhere. >> And so much data. And it sounds like a lot of people are very early in this journey. From our conversation with Travis, I mean, five AI programs or fewer per very large company at this point, for 75% of customers. That's pretty wild. I mean, you're an educational coach, you're teachers, you're innovating on the hardware front, you're doing everything at Dell. Last question for you. You've been at Dell 24 years? >> 25 in this coming March. >> What has a company like that done to retain talent like you for more than two and a half decades? >> You know, for me, and I'd like to say I had an atypical journey, but I don't think I have, right? There has always been opportunity for me, right? You know, I started off as a quality engineer. A couple years later, I'm living in Singapore, running, you know, running services for Enterprise in APJ. I come back, a couple years in Austin, then I'm in our Bangalore development center helping set that up. Then I come back, then I'm in our Taiwan development center helping with some of the work out there. And then I come back. There has always been the next opportunity, before I could even think about, am I ready for the next opportunity? And so for me, why would I leave, right? Why would I do anything different, given that there's always been the next opportunity? The other thing is, jobs are what you make of them, and Dell embraces that.
So if there's something that needs to be done, or there was an opportunity, or even in the case of our AI/ML portfolio: we saw an opportunity, we reviewed it, we talked about it, and then we went all in. So that innovation, that opportunity, and then, most of all, the people at Dell, right? I can't ask to work with a better set of folks, from the top on down. >> That's fantastic. Yeah. So it's culture. >> It is culture, really, at the end of the day. It is culture. >> That's fantastic. Rajesh, thank you so much for being here with us. >> Thank you, guys. >> Really appreciate it. >> Yeah, this was such a pleasure. And thank you for tuning into theCUBE, live from Dallas here at Supercomputing. My name is Savannah Peterson, and we'll see y'all in just a little bit.
Bhavesh Patel, Dell Technologies & Shreya Shah, Dell Technologies | SuperComputing 22
(upbeat jingle) >> Cameraman: Just look, Mike. >> Good afternoon everyone, and welcome back to Supercomputing. We're live here with theCUBE in Dallas. I'm joined by my cohost, David. Wonderful to be sharing the afternoon with you. And we are going to be kicking things off with a very thrilling discussion from two important thought leaders at Dell. Bhavesh and Shreya, thank you so much for being on the show. Welcome. How you doing? How does it feel to be at Supercomputing? >> Pretty good. We're really enjoying the show and enjoying a lot of customer conversations ongoing. >> Yeah. Are most of your customers here? >> Yes. Most of the customers are mostly in the Hyatt over there, and a lot of discussions ongoing. >> Yeah. Must be nice to see everybody show off. Are you enjoying the show so far, Shreya? >> Yeah, I missed this for two years, and so it's nice to be back and meeting people in person. >> Yeah, definitely. We all missed it. So, it's been a very exciting week for Dell. Do you want to talk about what you're most excited about in the announcement portfolio that we saw yesterday? >> Absolutely. >> Go for it, Shreya. >> Yeah, so, you know, before we get into the portfolio side of the house, you know, we really wanted to, kind of, share our thoughts in terms of, you know, what it is that's, kind of, moving HPC and supercomputing. You know, for a long time- >> Stock trends. >> For a long time, HPC and supercomputing has been driven by packing the racks, you know, maximizing the performance. And in the work that Bhavesh and I have been doing over the last, you know, couple of generations, we're seeing an emerging trend, and that is the thermal dissipated power is actually exploding. And so the idea of packing the racks is now turning into: how do you maximize your performance, but still be able to deliver the infrastructure in that limited kilowatts per rack that you have in your data center?
>> So I, it's been interesting walking around the show, seeing how many businesses associated with cooling- >> Savannah: So many. >> are here. And it's funny to see, they open up the cabinet, and it's almost 19th-century-looking technology. It's pipes and pumps and- >> Savannah: And very industrial-like. >> Yeah, very, very industrial-looking. Yeah, and I think, so that's where the trends are, more in the power and cooling. That is what everybody is trying to solve from an industry perspective. And what we did when we looked at our portfolio is what we want to bring up in this timeframe, targeting more the HPC and AI space. There are a couple of vectors we had to look at. We had to look at cooling, we had to look at power, where the trends are happening. We had to look at what the data center needs are showing up to be, be it in the cooler space, be it in the HPC space, be it in the large installs happening out there. So, looking at those trends and then factoring in, how do you build a node out? We said, okay, we need to diversify and build out an infrastructure. And that's what me and Shreya looked into: not only looking at the silicon diversity showing up, but more looking at, okay, there is this power, there is this cooling, there is silicon diversity. Now, how do you start packing it up and bringing it to the marketplace? So, kind of, those are some of the trends that we captured. And that's what you see, kind of, on the exhibit floor today, even. >> And Dell Technologies supports both liquid cooling and air cooling. Do you have a preference? Is it more just customer-based? >> It is going to be, and Shreya can allude to it, it's more workload- and application-focused. That is what we want to be thinking about. And it's not going to be siloed into, okay, are we going to be just targeting air-cooling; we wanted to target a breadth from air to liquid. And that's how we built out our portfolio when we looked at our GPUs.
>> To add to that, if we look at our customer landscape, we see that there's a peak between 35 to 45 kilowatts per rack. We see another peak at 60, we see another peak at 80, and we've got select, you know, very specialized customers above a hundred kilowatts per rack. And so, if we take that 35 to 45 kilowatts per rack, you know, you can pack maybe three or four of these chassis, right? And so, to what Bhavesh is saying, we're really trying to provide the flexibility for what our customers can deliver in their data centers. Whether it be at the 35 end, where air cooling may make complete sense; as you get above 45 and beyond, maybe that's the time to pivot to a liquid-cooled solution. >> So, you said that there, so there are situations where you could have 90 kilowatts being consumed by a rack of equipment. So, I live in California, where we are very, very closely attuned to things like the price for a kilowatt-hour of electricity. >> Seriously. >> And I'm kind of an electric car nerd, so, for the folks who really aren't as attuned, 90 kilowatts, that's like over a hundred horsepower. So, think about a hundred horsepower worth of energy being used for compute in one of these racks. It's insane. So, you can kind of imagine, a layperson can kind of imagine, the variables that go into this equation of, you know, how do we bring the power and get the maximum bang per kilowatt-hour? But, are there any kind of interesting odd twists in your equations that you find when you're trying to figure out... Do you have a- >> Yeah, and a lot of these trends, when we look at it, okay, we think about it more from a power-density perspective that we want to try to go and solve. We are mindful, from an energy perspective, about where the energy prices are moving. So, what we do is we try to be optimizing right at the node level, and in how we're going to do our liquid-cooling and air-cooled infrastructure.
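Shreya's rack-budgeting arithmetic above, kilowatts available per rack divided by kilowatts drawn per chassis, can be sketched in a few lines. The 10.5 kW per-chassis figure below is an illustrative assumption, not a Dell specification.

```python
# A minimal sketch of the rack power budgeting discussed above.
# The 10.5 kW per-chassis draw is an illustrative assumption, not a Dell spec.

HP_PER_KW = 1 / 0.7457  # 1 mechanical horsepower is about 0.7457 kW

def chassis_per_rack(rack_budget_kw: float, chassis_kw: float) -> int:
    """Number of whole chassis a rack's power budget can support."""
    return int(rack_budget_kw // chassis_kw)

# The rack-power peaks mentioned in the conversation:
for budget_kw in (35, 45, 60, 80):
    print(f"{budget_kw} kW rack -> {chassis_per_rack(budget_kw, 10.5)} chassis")

# The interviewer's aside checks out: 90 kW really is over a hundred horsepower.
print(f"90 kW ~= {90 * HP_PER_KW:.0f} hp")
```

At the 35 and 45 kW peaks this yields three and four chassis respectively, consistent with the "maybe three or four of these chassis" figure above.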
So, it's trying to, how do you keep a balance with it? That's what we are thinking about. And thinking about it is not just delivering or consuming power that is maybe not needed for that particular node itself. So, that's what we are thinking about. The other way we optimized when we built this infrastructure out is we are thinking about, okay, how are we going to deliver it at the rack level, and keeping in mind how this liquid-cooling plumbing will happen. Where is it coming into the data center? Is it coming in through the bottom of the floor? Are we going to do it on the left-hand side of your rack or the right-hand side? It's a big thing. It becomes, okay, yeah, it doesn't matter which side you put it on, but there is a piece of it going into our decision as to how we are going to build that, no doubt. So, there are multiple factors coming in, and besides the power and cooling, which we all touched upon, Shreya and me also look at where this whole GPU and accelerator space is moving. So, we're not just looking at the current set of GPUs and where they're moving from a power perspective. We are looking at this whole silicon diversity that is happening out there. So, we've been looking at multiple accelerators. There are multiple companies out there, and we can tell you there'll be over 30 to 50 silicon companies out there that we are actively engaged with and looking into. So, our decision in building this particular portfolio out was being mindful about what the maturity curve is, from a software point of view, from a hardware point of view, and what we can deliver, what the customer really needs in it, yeah. >> It's a balancing act, yeah. >> Bhavesh: It is a balancing act. >> Let's, let's stay in that zone a little bit. What other trends, Shreya, let's go to you on this one. What other trends are you seeing in the acceleration landscape?
>> Yeah, I think, you know, to your point, the balancing act is actually a very interesting paradigm. One of the things that Bhavesh and I constantly think about, and we call it the Goldilocks syndrome, which is, you know, at that 90 and a hundred, right? Density matters. >> Savannah: A lot. >> But, what we've done is we have really figured out what that optimal point is, 'cause we don't want to be the thinnest possible. You lose a lot of power redundancy, you lose a lot of I/O capability, you lose a lot of storage capability. And so, from our portfolio perspective, we've really tried to think about the Goldilocks syndrome and where that sweet spot is. >> I love that. I love the thought of you all just standing around server racks, having a little bit of porridge and determining- >> the porridge. >> Exactly the thickness that you want, in terms of the density trade-off there. Yeah, I love that, though. I mean, it's very digestible. Are you seeing anything else? >> No, I think that's pretty much it. Shreya summed it up, and we think about where the technology features are moving and what we are thinking in terms of our portfolio, so it is, yeah. >> So, just a lesson, you know, Shreya, a lesson for us, a rudimentary lesson. You put power into a CPU or a GPU and you're getting something out, and a lot of what we get out is heat. Is there a measure, an objective measure, of efficiency in these devices that we look at? Because you could think of a 100-watt incandescent light bulb: it is going to give out a certain amount of light and a certain amount of heat. A 100-watt-equivalent LED, in terms of the lumens that it's putting out, in terms of light: a lot more light for the power going in, a lot less heat. We have LED lights around us, thankfully, instead of incandescent lights. >> Savannah: Otherwise we would be melting. >> But, what is, when you put power into a CPU or a GPU, how do you measure that efficiency?
'Cause it's sort of funny, 'cause it's not moving, so it's not like measuring, putting power into a vehicle and measuring forward motion and heat. You're measuring this sort of esoteric thing, this processing thing that you can't see or touch. But, I mean, how much per watt of power, how do you measure it, I guess? Help us out, from the base up, understanding, 'cause most people have never been in a data center before. Maybe they've put their hand behind the fan in a personal computer or they've had a laptop feel warm on their lap. But, we're talking about massive amounts of heat being generated. Can you, kind of, explain the fundamentals of that? >> So, the way we think about it is, you know, there's a performance per dollar metric. There's a performance per dollar per watt metric, and that's where the power kind of comes in. But, on the flip side, we have something called PUE, power usage effectiveness, from a data center aspect. And so, we try to marry up those concepts together and really try to find that sweet spot. >> Is there anything in the way of harvesting that heat to do other worthwhile work, I mean? >> Yes. >> You know, it's like, hey, everybody that works in the data center, you all have your own personal shower now, water heated. >> Recirculating, too. >> Courtesy of Intel and AMD. >> Or a heated swimming pool. >> Right, a heated swimming pool. >> I like the pool. >> So, that's the circulation of, or recycling of, that thermal heat that you're talking about, absolutely. And we see that our customers in the Europe region are actually a lot more advanced in terms of taking that power and doing something that's valuable with it, right? >> Cooking croissants and making lattes, probably, right? >> (laughing) Or heating your home. >> Makes me want to go on >> vacation, a pool, croissants. >> That would be a good use. But, it's more on the PUE aspect of it. 
It's more thinking about how we are more energy efficient in our design, so we are thinking about what's the best efficiency we can get, and what's the amount of heat capture we can get. Are we just wasting any heat out there? So, that's always the goal when designing these particular platforms, and that's something we kept in mind with a lot of our power and cooling experts within Dell. Thinking about, okay, how much can we capture? If we are not capturing anything, then how are we recirculating it back in order to get much better efficiency, when we think about it at a rack level, and for the other equipment which is going to be purely air-cooled out there, and what can we do about it, so. >> Do you think both of these technologies are going to continue to work in tandem, air cooling and liquid cooling? Yeah, so we're not going to see- >> Yeah, when we think about our portfolio and where we see the trends moving in the future, I think so, air-cooling is definitely going to be there. There'll be a huge amount of usage for customers looking into air-cooling. Air-cooling is not going to go away. Liquid-cooling is definitely something that a lot of customers are looking into adopting. PUE becomes the bigger factor for it. How much can I heat-capture with it? That's a bigger equation that is coming into the picture. And that's where we said, okay, we have a transition happening. And that's what you see in our portfolio now. >> Yeah, Intel is, Intel, excuse me, Dell is agnostic when it comes to things like Intel, AMD, Broadcom, Nvidia. So, you can look at this landscape and I think make a, you know, make a fair judgment. When we talk about GPU versus CPU, in terms of efficiency, do you see that as something that will live on into the future for some applications? Meaning, look, GPU is the answer, or is it simply a question of leveraging what we think of as CPU cores differently? 
Is this going to be, is this going to ebb and flow back and forth? Shreya, are things going to change? 'Cause right now, a lot of what's announced recently, in the high performance computer area, leverages GPUs. But, we're right in the season of AMD and Intel coming out with NextGen processor architectures. >> Savannah: Great point. >> Shreya: Yeah >> Any thoughts? >> Yeah, so what I'll tell you is that it is all application dependent. If you rewind, you know, a couple of generations you'll see that the journey for GPU just started, right? And so there is an ROI, a minimum threshold ROI that customers have to realize in order to move their workloads from CPU-based to GPU-based. As the technology evolves and matures, you'll have more and more applications that will fit within that bucket. Does that mean that everything will fit in that bucket? I don't believe so, but as, you know, the technology will continue to mature on the CPU side, but also on the GPU side. And so, depending on where the customer is in their journey, it's the same for air versus liquid. Liquid is not an if, but it's a when. And when the environment, the data center environment is ready to support that, and when you have that ROI that goes with it is when it makes sense to transition to one way or the other. >> That's awesome. All right, last question for you both in a succinct phrase, if possible, I won't character count. What do you hope that we get to talk about next year when we have you back on theCUBE? Shreya, we'll start with you. >> Ooh, that's a good one. I'm going to let Bhavesh go first. >> Savannah: Go for it. >> (laughs) >> What do you think, Bhavesh? Next year, I think so, what you'll see more, because I'm in the CTI group, more talking about where cache coherency is moving. So, that's what, I'll just leave it at that and we'll talk about it more. >> Savannah: All right. >> Dave: Tantalizing. >> I was going to say, a little window in there, yeah. 
And I think, to kind of add to that, I'm excited to see what the future holds with CPUs, GPUs, smart NICs and the integration of these technologies and where that all is headed and how that helps ultimately, you know, our customers being able to solve these really, really large and complex problems. >> The problems our globe faces. Wow, well it was absolutely fantastic to have you both on the show. Time just flew. David, wonderful questions, as always. Thank you all for tuning in to theCUBE. Here live from Dallas where we are broadcasting all about supercomputing, high-performance computing, and everything that a hardware nerd, like I, loves. My name is Savannah Peterson. We'll see you again soon. (upbeat jingle)
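An aside for readers: the efficiency metrics Shreya and Bhavesh reference in this interview, PUE and performance per dollar per watt, are easy to make concrete. A minimal sketch follows; the numbers are purely illustrative, not Dell or data-center figures from this conversation.

```python
# PUE (power usage effectiveness): total facility power divided by the power
# that actually reaches the IT equipment. An ideal facility approaches 1.0.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Performance per watt: useful work delivered (here, GFLOPS) per watt drawn.
def perf_per_watt(gflops: float, it_watts: float) -> float:
    return gflops / it_watts

# Illustrative numbers only: 1200 kW facility draw, 800 kW reaching IT gear.
print(round(pue(1200, 800), 2))       # 1.5
print(perf_per_watt(50_000, 10_000))  # 5.0 GFLOPS per watt
```

Marrying the two, as Shreya describes, means a system choice looks good only if it wins on chip-level efficiency *and* doesn't wreck the facility-level PUE.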
Travis Vigil, Dell Technologies | SuperComputing 22
>>Howdy, y'all, and welcome to Dallas, where we're proud to be live from Supercomputing 2022. My name is Savannah Peterson, joined here by my cohost David on theCUBE, and our first guest today is a very exciting visionary. He's a leader at Dell. Please welcome Travis Vigil. Travis, thank you so much for being here. >>Thank you so much for having me. >>How you feeling? >>Okay. I'm feeling like an exciting visionary. You >>Are. That's the idea, that's why we teed you up for that. Great. So, tell us, Dell had some huge announcements Yes. Last night. And you get to break it to the Cube audience. Give us the rundown. >>Yeah. It's a really big show for Dell. We announced a brand new suite of GPU-enabled servers, eight-way, four-way, direct liquid cooling. Really the first time in the history of the portfolio that we've had this much coverage across Intel, AMD, Nvidia, getting great reviews from the show floor. I had the chance earlier to be in the whisper suite to actually look at the gear. Customers are buzzing over it. That's one thing I love about this show is the gear is here. >>Yes, it is. It is a haven for hardware nerds. Yes. Like, well, I'll include you in this group, it sounds like, on >>That. Great. Yes. Oh >>Yeah, absolutely. And I know David is as well, so, up >>The street. Oh, big, big time. Big time hardware nerd. And just to be clear, for the kids that will be watching these videos Yes. We're not talking about Alienware gaming systems. >>No. Right. >>So they're >>Yay big, yay tall, 200 pounds. >>Give us a price point on one of these things. Retail, suggested retail price. >>Oh, I'm >>More than 10 grand. >>Oh, yeah. Yeah. Try another order of magnitude. Yeah. >>Yeah. So this is the most exciting stuff from an infrastructure perspective. Absolutely. You can imagine. Absolutely. But what is it driving? So talk to us about where you see the world of high performance computing with your customers. 
What are they doing with this? What do they expect to do with this stuff in the future? >>Yeah. You know, it's a real interesting time, and I know that the provenance of this show is HPC focused, but what we're seeing and what we're hearing from our customers is that AI workloads and traditional HPC workloads are becoming almost indistinguishable. You need the right mix of compute, you need GPU acceleration, and you need the ability to take the vast quantities of data that are being generated and actually gather insight from them. And so if you look at what customers are trying to do with, you know, enterprise-level AI, it's really, you know, how do I classify and categorize my data, but more importantly, how do I make sense of it? How do I derive insights from it? Yeah. And so at the end of the day, you look at what customers are trying to do. It's take all the various streams of data, whether it be structured data, whether it be unstructured data, bring it together and make decisions, make business decisions. >>And it's a really exciting time because customers are saying, you know, the same things that research scientists and universities have been trying to do forever with HPC, I want to do on an industrial scale, but I want to do it in a way that's more open, more flexible. You know, I call it AI for the rest of us. And customers are here and they want those systems, but they want the ecosystem to support ease of deployment, ease of use, ease of scale. And that's what we're providing in addition to the systems. You know, Dell's one of the only providers in the industry that can provide not only the compute, but the networking and the storage, and more importantly, the solutions that bring it all together. Give you one example. We have what we call a validated design for AI. 
And that validated design, we put together all of the pieces, provided the recipe for customers, so that they can take what used to be two months to build and run a model. We provide that capability 18 times faster. So we're talking about hours versus months. So >>That's a lot. 18 times faster. I just wanna emphasize that: 18 times faster. We're talking about orders of magnitude and whatnot up here; that makes a huge difference in what people are able to do. Absolutely. >>Absolutely. And so, I mean, you've been doing this for a while. We've been talking about the deluge of data forever, but it's gotten to the point, and it's, you know, the disparity of the data, the fact that much of it remains siloed. Customers are demanding that we provide solutions that allow them to bring that data together, process it, make decisions with it. So >>Where are we in the adoption cycle? Early? Because we've been talking about AI and ML for a while. Yeah. You mentioned, you know, kind of the leading edge of academia and supercomputing and HPC and what that conjures up in people's minds. Do you have any numbers or any thoughts about where we are in this cycle? How many people are actually doing this in production versus experimenting at this point? Yeah, >>I think that's the reason there's so much interest in what we're doing, and so much demand for not only the systems, but the solutions that bring the systems together, the ecosystem that brings the systems together. We did a study recently and asked customers where they felt they were at in terms of deploying best practices for AI, you know, mass deployment of AI. Only 31% of customers self-reported that they were deploying best practices for their AI deployments. So almost 70%, self-reporting, saying we're not doing it right yet. Yeah. 
And another good stat is, three quarters of customers have fewer than five AI applications deployed at scale in their IT environments today. So, you know, if you think about it as a traditional S curve, I think we're at the first inflection point, and customers are asking, Can I do it end to end? >>Can I do it with the best of breed in terms of systems? But Dell, can you also use an ecosystem that I know and understand? And I think, you know, another great example of something that Dell is doing is we have focused on Ethernet as connectivity for many of the solutions that we put together. Again, you know, the provenance of HPC is InfiniBand. InfiniBand is a great connectivity option, but, you know, there's a lot of care and feeding that goes along with InfiniBand, and the fact that you can do it both ways, with InfiniBand for those, you know, government-scale or university-scale clusters, while more of our enterprise customers can do it with Ethernet on premises, it's a great option. >>Yeah. You've got so many things going on. I got to actually check out the million dollar hardware that you have just casually Yeah. Sitting in your booth. I feel like an event like this is probably one of the only times you can let something like that out. Yeah, yeah. And people would actually know what it is you're working >>With. We actually unveiled it. There was a sheet on it and we actually unveiled it last night. >>Did you get a lot of oohs and aahs? >>You know, you said this was a show for hardware nerds. It's been a long time since I've been at a show where people cheer and ooh and aah when you take the sheet off the hardware and, and, and Yes, yes, >>Yes, it has, and reveal, you had your >>Moment. Exactly, exactly. Our three new systems. >>Speaking of oohs and aahs, I love that. And I love that everyone was excited as we all are about it. 
It's nice to be home with our nerds. Speaking of applications and excitement, you get to see a lot of different customers across verticals. Is there a sector or space that has you personally most excited? >>Oh, personally most excited, you know, for credibility at home, when the sector is media and entertainment and the movie is one that your children have actually seen, that one gives me credibility. It's exciting. You can talk to your friends about it at dinner parties and things like that. I'm like, >>Stuff >>Curing cancer. Marvel movie. At-home cred goes to the Marvel movie. Yeah. But, you know, what really excites me is the variety of applications that AI is being used in. Healthcare, you know, on a serious note. Healthcare, genomics, a huge and growing application area that excites me. You know, doing good in the world is something that's very important to Dell. You know, sustainability is something that's very important to Dell. Yeah. So any application related to that is exciting to me. And then, you know, just pragmatically speaking, anything that helps our customers make better business decisions excites me. >>So we are just at the beginning of what I refer to as this rolling thunder of CPU Yes. next-generation releases. We saw it recently from AMD; in the near future it'll be Intel joining the party Yeah. going back and forth, along with that gen five PCIe at the motherboard level. Yep. It's very easy to look at it and say, Wow, previous gen, Wow, double, double, double. It >>Is, double >>It is. However, most of your customers, I would guess a fair number of them, might be not just N minus one but N minus two looking at an upgrade. So for a lot of people, the upgrade season that's ahead of us is going to be not a doubling, but a four x or eight x in a lot of cases. Yeah. 
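Dave's back-of-the-envelope math here is worth making explicit: if each processor generation roughly doubles performance, skipping generations compounds. A hypothetical sketch, where the per-generation gain of 2x is a rough assumption for illustration, not a vendor benchmark:

```python
# Compound speedup from upgrading across several generations, assuming each
# generation multiplies performance by roughly the same factor (hypothetical).
def upgrade_factor(generations_behind: int, per_gen_gain: float = 2.0) -> float:
    return per_gen_gain ** generations_behind

print(upgrade_factor(1))  # 2.0 - N minus one to current: one doubling
print(upgrade_factor(2))  # 4.0 - N minus two to current: the "four x" case
print(upgrade_factor(3))  # 8.0 - N minus three to current: the "eight x" case
```

This is why an upgrade cycle that skips a generation or two feels less like an increment and more like a step change.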
So the quantity of compute from these new systems is going to be a, it's gonna be a massive increase from where we've been in, in, in the recent past, like as in last, last Tuesday. So is there, you know, this is sort of a philosophical question. We talked a little earlier about this idea of the quantitative versus qualitative difference in computing horsepower. Do we feel like we're at a point where there's gonna be an inflection in terms of what AI can actually deliver? Yeah. Based on current technology just doing it more, better, faster, cheaper? Yeah. Or do we, or do we need this leap to quantum computing to, to get there? >>Yeah. I look, >>I think we're, and I was having some really interesting conversations with, with, with customers that whose job it is to run very, very large, very, very complex clusters. And we're talking a little bit about quantum computing. Interesting thing about quantum computing is, you know, I think we're or we're a ways off still. And in order to make quantum computing work, you still need to have classical computing surrounding Right. Number one. Number two, with, with the advances that we're, we're seeing generation on generation with this, you know, what, what has moved from a kind of a three year, you know, call it a two to three year upgrade cycle to, to something that because of all of the technology that's being deployed into the industry is almost more continuous upgrade cycle. I, I'm personally optimistic that we are on the, the cusp of a new level of infrastructure modernization. >>And it's not just the, the computing power, it's not just the increases in GPUs. It's not, you know, those things are important, but it's things like power consumption, right? One of the, the, the ways that customers can do better in terms of power consumption and sustainability is by modernizing infrastructure. Looking to your point, a lot of people are, are running n minus one, N minus two. 
The stuff that's coming out now is much more energy efficient. And so I think there are a lot of vectors that we're seeing in the market, whether it be technology innovation, whether it be a drive for energy efficiency, whether it be the rise of AI and ML, whether it be all of the new silicon that's coming into the portfolio, where customers are gonna have a continuous reason to upgrade. I mean, that's my thought. What do you think? >>Yeah, no, I think that the objective numbers that are gonna be rolling out Yeah. That are starting to roll out now and in the near future. That's why it's really an exciting time. Yeah. I think those numbers are gonna support your point. Yeah. Because people will look and they'll say, Wait a minute, it used to be a dollar, but now it's $2. That's more expensive. Yeah. But you're getting 10 times as much Yeah. For half of the amount of power. Boom. And it's, and it's >>Done. Exactly. It's, it's a >>TCO no-brainer. It's Oh yeah. You, it gets to the point where you look at this rack of amazing stuff that you have a personal relationship with and you say, I can't afford to keep you plugged in anymore. Yeah. >>And Right. >>The power is such a huge component of this. Yeah. It's huge, huge. >>Our customers, I mean, it's always a huge issue, but our customers, especially in EMEA with what's going on over there, are saying, you know, I need to upgrade because I need to be more energy efficient. >>Yeah. >>Yeah. We were talking about 20 years from now, so you've been at Dell over 18 years. >>Yeah. It'll be 19 in May. >>Congratulations. Yeah. What commitment. So 19 years from now, in your second Dell career, Yeah. what are we gonna be able to say then that perhaps we can't say now? >>Oh my gosh. Wow. 19 years from now. >>Yeah. I love this as an arbitrary number too. This is great. Yeah. >>38-year Dell career. Yeah. 
>>And if you'd like to share the winners of Super Bowls and World Series in advance, like the world and the, the sports element act from back to the future. So we can play ball bets power and the >>Power ball, but, but any >>Point building Yeah. I mean this is what, what, what, what do you think ai, what's AI gonna deliver in the next decade? >>Yeah. I, I look, I mean, there are are, you know, global issues that advances in computing power will help us solve. And, you know, the, the models that are being built, the ability to generate a, a digital copy of the analog world and be able to run models and simulations on it is, is amazing. Truly. Yeah. You know, I, I was looking at some, you know, it's very, it's a very simple and pragmatic thing, but I think it's, it, it's an example of, of what could be, we were with one of our technology providers and they, they were, were showing us a digital simulation, you know, a digital twin of a factory for a car manufacturer. And they were saying that, you know, it used to be you had to build the factory, you had to put the people in the factory. You had to, you know, run cars through the factory to figure out sort of how you optimize and you know, where everything's placed. >>Yeah. They don't have to do that anymore. No. Right. They can do it all via simulation, all via digital, you know, copy of, of analog reality. And so, I mean, I think the, you know, the, the, the, the possibilities are endless. And, you know, 19 years ago, I had no idea I'd be sitting here so excited about hardware, you know, here we are baby. I think 19 years from now, hardware still matters. Yeah. You know, hardware still matters. I know software eats the world, the hardware still matters. Gotta run something. Yeah. And, and we'll be talking about, you know, that same type of, of example, but at a broader and more global scale. Well, I'm the knucklehead who >>Keeps waving his phone around going, There's one terabyte in here. 
Can you believe that one terabyte? Cause when you've been around long enough, it's like >>Insane. You know, like, like I've been to nasa, I live in Texas, I've been to NASA a couple times. They, you know, they talk about, they sent, you know, they sent people to the moon on, on way less, less on >>Too far less in our pocket computers. Yeah. It's, it's amazing. >>I am an optimist on, on where we're going clearly. >>And we're clearly an exciting visionary, like we said, said the gate. It's no surprise that people are using Dell's tech to realize their AI ecosystem dreams. Travis, thank you so much for being here with us David. Always a pleasure. And thank you for tuning in to the Cube Live from Dallas, Texas. My name is Savannah Peterson. We'll be back with more supercomputing soon.
Day 1 Keynote Analysis | SuperComputing 22
>>Hello everyone. Welcome to theCUBE, live here in Dallas, Texas. I'm John Furrier, host of theCUBE. Three days of wall-to-wall coverage. Of course, we've got the fabulous guests here, myself, Savannah Peterson. Savannah, you look wonderful. >>Thank you, John. I feel lucky to play the part here with my 10 gallon hat. >>Dave Nicholson, who's the analyst uncovering all the Dell, Supercomputing, HPE, all the technology that is changing the game. Dave, you look great. Thanks for coming on. >>Thanks, John. I appreciate >>It. All right, so you look good. So we're in Dallas, Texas, at a trade show, a conference, I don't know what you'd call this these days, but thousands of booths are here. What's the take here? Why Supercomputing 22? What's the big deal? >>Well, the big deal is dramatic incremental progress in terms of supercomputing capability. So what this conference represents is the leading edge in what it can deliver to the world. We're talking about scale that is impossible to comprehend with the human brain, but you can toss out facts and figures like performance measured in exaflops, millions of CPU cores working together, thousands of kilowatts of power required to power these systems. And I think what makes this show unique is that it's not just a bunch of vendors, but it's academia. It's PhD candidates coming and looking for companies that they might work with. So it's a very, very different vibe here. >>Savannah, we were talking last night before we were setting up our agenda, for it to drill down on this week. And you know, you were, by the way, that looks great. I mean, I wish I had one. >>We'll get you one by the end of the show, >>John. Don't worry. You know, Texas is always big, big in Texas, and that's the thing here, but Supercomputing seems like it had a lull for a while. Yeah, it seems like it's gonna explode, and you get a chance to review the papers, take a look at it. 
You, you're a, I won't say closet hardware nerd, but that's your roots. >>Yeah, yeah. Very openly a hardware nerd. And, and I'm excited because we saw a lot of hype around quantum and around AI five, 10 years ago, but we weren't seeing the application at scale, and we also weren't seeing, quite frankly, the hardware wasn't ready to power these types of endeavors at scale. Whereas now, you know, we've got, we've got air cooling, we've got liquid cooling, we've got multiple GPUs. Dell was just showing me all eight of theirs that they put in their beautiful million dollar piece of equipment, which is extremely impressive for folks to run complex calculations. But what I'm excited about with all this, I love when we fuse business and academia together. I think that that doesn't happen very often. I've been impressed. I mean, when I walked in today, right away, I'm sure y'all can't see this at home just yet, but we'll try and give you a feel over the course of the next few days. This conference is huge. This >>Is, yeah, it is >>Way bigger than I was expecting, you know, a lot larger than where we just were in Detroit. And, and I love it because we've got the people that are literally inventing the calculations that will determine a lot of our future, from sequencing our genome to powering our weather forecasting, as well as all of the companies that create the hardware and the software that's gonna actually support that. Those algorithms and >>Those, and, and the science and the engineering involved has just been going on since 1988. This conference, this trade show, going on since 1988, which is, it, it passes the test of time. And now the future, with all the new use cases emerging from the compute and supercomputing architectures out there, it's from cradle to grave. If you're, if you're in this business, you, you're in school all the way through the industry. It doesn't seem to stop, that, that university student side of it. I mean, that whole student section here.
So you don't see that very often in some of these tech shows, like from students to boardroom. >>Yeah. I actually brought the supercomputer from 1988 with me in my pocket. And I'm not sure that I'm even joking. This may have as much processing power, certainly as much storage, with one terabyte on board. I sprung for the one terabyte, folks. But it is mind boggling the amount of compute power we're, we're talking about. When you dig below the surface, which we'll be doing in the coming days, you see things like leaping from PCIe, you know, gen four to gen five, and the increase that that gives us in, in terms of capabilities for plugging into the motherboard and accessing the CPU complex, and on and on and on. But, but you know, something Savannah alluded to, we're talking about the leading edge of what is possible from a humanity perspective. And, and so I'd like to get into, you know, as we're talking to some of the experts that we'll get a chance to talk to, I'd like to get their view on what the future holds, and whether we can simply grow through quantitative increases in compute power, or if the real promise is out there in the land of quantum computing. Are we all sort of hanging our hats, our large 10 gallon hats? >>If that's, yes, our hats. If we're hanging our hats on that, that, that's when truly we'll be able to tease insight out of chaos. I'd like to hear from some of the real experts on that subject. >>I'm glad you brought that up, cuz I'm personally pretty pumped about quantum computing, but I've seen it sit in this hype stage for quite a while, and I'm ready for the application. So I'm curious to hear >>What our experts... That's an awesome, that would be, I think that would be an awesome bumper sticker, frankly. Savannah, I'm pumped, I'm pumped about quantum computing. Who is this person? Who is this person? >>I wanna see it first. Did someone show me it? >>Yeah, yeah.
400 qubits I think was the latest IBM announcement, which, which means something. I'll pretend like I completely understand what it means. >>Tell us what that means, David. >>Well, well, so, so Savannah, let me mansplain it to you. Yeah, >>Let's >>Hear it. So, so it's basically, it's, you know, in conventional computing you can either be on or off, zero or one. In quantum computing, you can be both, neither, or all of the above. That's, that's, that's, that's the depth to which I can go. I >>Like that. That was actually as succinct as humanly possible. >>Really sounds like a Ponzi scheme to me. I, I'm not sure if I, >>Well, let's get into some of the thoughts that you guys have on some of the papers we saw. Savannah and Dave, your perspective on this whole next level kind of expansion with supercomputing, and what super cloud and super apps will do for this next gen. What use cases are kind of shining out of this? Because, you know, it used to be you were limited by how much gear you had stacked up, how big the server could be, the supercomputer. Now you've got large scale cloud computing, you've got the ability to have different subsystems, like advances in networking. So you're seeing a new architectural, almost bigger... supercomputing isn't just a machine, it's a collection of machines. It's a collection of, yeah, of other stuff. What's your thoughts on this architecture, and then the use cases that are gonna emerge that were not gettable before? >>So in the past, you, you talk about, you know, 1988, and, and you know, let's say a decade ago, the race was to assemble enough compute power to be able to do things quickly enough to be practical. So we knew that if we applied software to hardware, we could get an answer to a problem, because we were asking very, very specific questions. And how quickly we got the answer would determine whether it was practical to pursue it or not. So if something took a day instead of a month, okay, fantastic.
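Dave's "both, neither, or all of the above" gloss a moment earlier can be made slightly more concrete. A toy sketch from scratch (not tied to IBM's 400-qubit hardware or any quantum SDK): a single qubit is a pair of amplitudes, and measurement collapses it to a classical 0 or 1 with probabilities given by the squared amplitudes.

```python
import random

# A qubit as a pair of amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# An equal superposition is "both" until measured, then a classical 0 or 1.
alpha, beta = 2 ** -0.5, 2 ** -0.5

def measure(alpha: float, beta: float) -> int:
    """Collapse the qubit: 0 with probability |alpha|^2, otherwise 1."""
    return 0 if random.random() < alpha * alpha else 1

random.seed(42)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly an even split between 0 and 1
```

Repeated measurement of fresh copies of the same superposed state is the only way to see those probabilities, which is part of why quantum results are statistical rather than single definite answers.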
But now we've reached this critical mass. You could argue when that happened, but definitely I think we're there, where things like artificial intelligence and machine learning are the core of what we're doing. We're not just simply asking systems to deliver defined answers. We're asking them to learn from their experiences, which starts getting a little spooky, and we're asking them to tease insights out in a way that we haven't figured out. >>So we're saying, give us the insight. We're not telling the system specifically how to give us that insight. So I think that's, that's the fundamental difference. That's the frontier, is, you know, you're gonna hear a lot about AI and ML, and then if you retreat back a bit from supercomputing, you're in the realm of high performance computing, which is sort of the junior version of supercomputing. Instead of the billion dollar system, it's the system that, you know, schlubs like, like, like Facebook or AWS might be able to afford. You know, maybe a hundred million dollars for a system. Casual, just, just sort of a casual kind of thing, next to the coffee table in the living room. But I think that's really gonna be the talk. So that's a huge tent when you talk about AI and ML. Yeah, >>I, I totally agree. We're having some of the conversations that we've had for a long time about AI and bias. I saw a lot of the papers were looking at that. I think that's what's gonna be really interesting to me. What's most exciting about this is how we're pulling together all of this on a global scale. So I'm excited to see how supercomputing impacts climate change, our ability to monitor environmental conditions around the globe, and how different governments and bodies can all combine. And all of this information can be going into a central brain, and learning from it, and figuring out how we can make the world a better place. We're learning about the body. There's a lot of people doing molecular biology and sequencing of the genome here.
We've got, there's, there's, it's just, it's very, I, I don't think a lot of people realize that supercomputing pretty much touches every aspect of our >>Lives. I mean, we've had it, we've had it for a while. I think cloud computing took a lot of the attention, given that that brought in massive capabilities, a lot of agility. And I think what's interesting here at this show, if you look at, you know, what's going on from the guests, like I said, from the dorm room to the boardroom, everyone's here. But you look at what's actually going on above the hardware. CNCF is here. They have a booth, the whole cloud native software business. It's gonna be interesting to see how the software business takes advantage of, totally, these architectures. Because let's face it, I've never heard a developer say, I wanna run on slower hardware. So no one wants that. So now if you abstract away the hardware, as we know with, with cloud computing and DevOps, cloud on premises and edge, David, this is like, this is again, nirvana for the industry. Because you want, it's an exciting thing, the fastest possible compute system for the software. >>Yeah, yeah. >>I, I, at the end of the day, that's what we're talking >>About. So I asked, I asked the, the gift question to my Wharton students this morning on a call, and I, you know, I asked specifically, if I could give you something that was the result of supercomputing's amazing nature, what would it be? Would it be personalized therapeutics in healthcare? Would it be something related to climate? Being able to figure out exactly what we can do. There's a whole range of possibilities. And what's interesting is >>What were some of the answers? >>So, so, so a lot of the answers, a lot of the answers came down to, to two categories, and it was really, it was healthcare and climate. Yeah. A lot of, a lot of understanding. And of course, and of course a lot of jokes about how eventually supercomputers will determine that.
The problem is people, >>It's people. Yeah, no. So I knew you were headed there, >>But >>Don't people just want custom jeans? Yeah. >>Or, well, so one of the, one of the good ones though was, >>Was also that >>While we're >>Here, a person from a company who shall not be named said, oh, advertising. It was the, it was the, what if you could predict with a high degree of certainty that when you sent someone an email saying, Hey, do you wanna buy this? They would say, Well, yeah, I do. Dramatically lowering the cost of acquisition for an individual customer, as an example. Those are the kinds of breakthroughs that will transform how we live. Because all of a sudden, industries are completely disrupted. Not necessarily directly related to supercomputing, but you think about automating the entire fleet of, of, of trucks in, in North America. What does that do to people who currently drive those trucks? Yeah, so there are, there are societal questions at hand that I don't necessarily know the academics are, are, are considering when they're thinking about what's possible. >>Well, I think, I think the point about the ad thing brings up the whole cultural shift that's going on, from the old generation of, hey, let's use our best minds in the industry to figure out how to place an ad at the right place, in the right pixel, at the right time, versus solving real problems like climate change, our, you know, culture and society, and us getting along as a country and world, water sustainability, fires in California. Yeah, I mean, come on. >>There's a lot. So I, I gotta say, I was curious when you were playing with your pocket computer there and talking about the terabyte that you have inside. So back in 1988, when Supercomputing started, the first show was in Orlando. It was actually the same four days that we're here right now. I was born in 1988, if we're just talking about how great 1988 is. And so I guess I, >>I was born... So were we, Savannah?
So were we. >>The era of... I think I was in third grade at that time. >>We won't tell, we won't say what you told me earlier about 1988 for you. But that said, 1988 was when Steve Jobs released the NeXT computer. He was out of Apple at that time. Yeah, that's right. >>Eight >>Megabytes of RAM. >>It's called the Cube, I think. >>It's respectable. That's all it was called. It was, it was, it was, it was the Cube, which is pretty, pretty exciting. But when we were looking at, yeah, on the supercomputing side, your phone would've been about as capable. >>So where will we be in 20 years? It's amazing. >>What are we gonna... >>Will our holograms be here instead of us physically sitting, sitting at the table? I don't know. >>Well, it's gonna be very interesting to see how the global ecosystem evolves. It used to be a very nationalistic culture with computing. I think, I think we're gonna see global, you know, flattening of culture relative to computing. I think space will be a, a massive, hopefully massive, discussion. I think software and automation will be at levels we don't even see. So I think software, to me, I'm looking at, that's the enablement of this supercomputing show. In terms of the next five years, what are they gonna do to enable more, faster, intelligent horsepower? And, and what does that look like? It used to be simple: processor, more processors, more threads, multicores, and then stuff around it. I think this is where I think it's gonna shift, to more network computing, network processing, edge latency. Physics is involved. I mean, every, everything you can squeeze out of the physics will be, yeah, interesting to watch. Well, when >>We, when we, when we peel back the cover on the actual pieces of hardware that are driving this revolution, parallelizing, you know, of workloads is critical to this. It's what supercomputing consists of. There's no such thing as a supercomputer sitting by itself on a table.
Even the million dollar system from Dell, which is crazy when you hear Dell and million dollar system. >>And it's still there too, >>Right? Just, just hanging out. Yeah. But, but it's all about the interconnect. When you want to take advantage of parallel processing, you have to have software that can leverage all of the resources, and connectivity becomes increasingly important. I think that's gonna be a thread that we're gonna see throughout the next few days, with the, with the, you know, the motherboards, for lack of a better term, allowing faster access to memory, faster access to CPU, GPU, DPU, networking, storage devices plugging in. Those all work together, but increasingly it's that connectivity layer that's critically important. Questions of InfiniBand versus ethernet, RDMA over Converged Ethernet as an example. A lot of these architectural decisions are gonna be based on power, cooling, density. So a lot of details behind the scenes to make the magic happen. I >>Think the power is gonna be, you know, thinking 20 years out, hopefully everything here is powered sustainably 20 years from now, because the power pull... I mean, the more exciting things going on in your supercomputer, the power suck is massive. When we were talking to Dell, they were saying that's one of the biggest problems, >>Concerns, that's gonna face their customers, and that's gonna play into sustainability. So a lot of great guests. We got folks from Dell and the industry, a lot of the manufacturers, a lot of the hardware and software experts gonna come on and share what's going on. You know, we did a post on why hardware matters a few months ago, Dave. Everyone's like, well, it does, now more than ever. So we're gonna get into it here at Supercomputing 22, where the hardware matters. Faster power, as we say, for the applications. TheCUBE will be back with more live coverage. Stay with us.
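Dave's earlier point about the PCIe gen four to gen five leap is easy to put numbers on. A back-of-the-envelope sketch (the per-lane rates are the published Gen4/Gen5 signaling figures; the math ignores protocol overhead beyond 128b/130b line encoding, so treat the GB/s values as rough upper bounds per direction):

```python
# Approximate PCIe per-lane signaling rates, in giga-transfers per second.
RATES_GT = {"gen4": 16.0, "gen5": 32.0}
ENCODING = 128 / 130  # 128b/130b line coding used by PCIe Gen3 and later

def usable_gb_per_s(gen: str, lanes: int = 16) -> float:
    """Rough usable bandwidth in GB/s for one direction of a link."""
    return RATES_GT[gen] * lanes * ENCODING / 8

for gen in ("gen4", "gen5"):
    print(f"{gen} x16: about {usable_gb_per_s(gen):.1f} GB/s")
```

Doubling the transfer rate doubles the link: an x16 slot goes from roughly 31.5 GB/s to roughly 63 GB/s, which is what makes feeding multiple GPUs and fast NICs from one host practical.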
theCUBE Previews Supercomputing 22
(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Control Data Corporation, CDC, designed by an engineering team led by Seymour Cray, the father of supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company, Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray, and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits.
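That "billion, billion calculations per second" is worth sanity-checking, since exa-scale numbers are hard to hold in your head. A quick arithmetic sketch (the 100-gigaflop laptop baseline is an assumed figure for scale, not a quoted spec):

```python
# One exaFLOP/s = 10^18 floating-point operations per second.
EXAFLOPS = 10 ** 18

# "A billion, billion" really is the same number: 1e9 * 1e9 = 1e18.
assert 10 ** 9 * 10 ** 9 == EXAFLOPS

# For scale: a hypothetical 100-gigaflop laptop replaying just one
# second of exascale work.
laptop_flops = 100 * 10 ** 9
seconds = EXAFLOPS / laptop_flops  # 10^7 seconds of laptop time
print(f"{seconds:,.0f} s, about {seconds / 86400:.0f} days")
```

In other words, one second of an exascale machine is on the order of four months of nonstop work for an ordinary laptop under that assumption.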
Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, and it's probably going to continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player, and with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities, with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing HPC-as-a-service capability.
Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? 
>>Well, so if you're designing an island that is, you know, the tip of the spear, that doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore, because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box and looking at the NICs, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >>Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. They've got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >>Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have?
And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. 
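Dave's NAS anecdote makes a fun price-per-terabyte comparison. A rough sketch (the $2 million and $9.99/month figures come from his story; amortizing the phone storage over an assumed two-year upgrade cycle is my simplification):

```python
# 1997: roughly $2,000,000 for one terabyte of managed NAS storage.
nas_cost_usd, nas_tb = 2_000_000, 1

# 2022: $9.99/month extra on a phone bill for one terabyte, amortized
# over an assumed two-year phone upgrade cycle.
phone_cost_usd = 9.99 * 24
phone_tb = 1

per_tb_1997 = nas_cost_usd / nas_tb
per_tb_2022 = phone_cost_usd / phone_tb
print(f"1997: ${per_tb_1997:,.0f}/TB, 2022: ${per_tb_2022:,.2f}/TB")
print(f"roughly {per_tb_1997 / per_tb_2022:,.0f}x cheaper per terabyte")
```

And that comparison is generous to 1997, since it leaves out the seven-person team managing the NAS.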
You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. 
Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. 
Thanks for watching. And we'll see you in Dallas. (inquisitive music)
SuperComputing Intro | SuperComputing22
>>Hello everyone. My name is Savannah Peterson, coming to you from the Cube Studios in Palo Alto, California. We're gonna be talking about Supercomputing, an event coming up in Dallas this November. I'm joined by the infamous John Furrier. John, thank you for joining me today. >>Great to see you. You look great. >>Thank you. You know, I don't know if anyone's checked out the conference colors for Supercomputing, but I happen to match the accent pink and you are rocking their blue. >>There it is. >>We don't always tie our fashion to the tech, ladies and gentlemen, but we're a new crew here at the Cube and I think it should be a thing that we do moving forward. So John, you are a veteran and I'm a newbie to Supercomputing. It'll be my first time in Dallas. What can I expect? >>Basically it's a hardware nerd fest of the top... >>Minds. So it's like CES? >>It's like CES for hardware. It's really the coolest show if you're into high performance computing, I mean game-changing physics, laws of physics and hardware. This is the show. And it's really old. It started when I graduated college, 1988. Back then it was servers, you know, supercomputing was a concept. It was usually a box, and it was hardware, a big machine. And it would crank out calculations and simulations, and you were limited to the processor and all the system components, just the architecture and system software. It was technical, it was hardware, it was fun. Very cool back then. But servers got bigger, you got grid computing, you got clusters, and then it really became the high performance computing concept. But that's now multiple disciplines, hence it's been around for a while. It's evergreen in the sense it's always changing, attracting talent, students, mentors, scholarships.
It's kind of big funding, and big companies are behind it. HPE, Hewlett Packard Enterprise, Dell, computing startups, and hardware matters more than ever. You look at the cloud, what Amazon and the cloud hyperscalers are doing, they're building the fastest chips down at the root level. Hardware's back. And I think this show's gonna show a lot of that. >>There isn't the cloud without hardware to support it. So I think it's important that we're all headed here. You touched on the evolution there from supercomputing in the beginning, and complex calculations and processing, to what we're now calling high performance computing. Can you go a little bit deeper? What does that mean, what does that cover? >>Well, high performance computing now is a range of different things. So supercomputing needs to be like a thing now. You got clusters and grids, that's distributed, you got a backbone, it's well architected, and there's a lot involved: networking and security, system software. So now it's multiple disciplines in high performance computing and you can do a lot more. And now with cloud computing you can do simulations, say drug research or drug testing. You can do all kinds of calculations, genome sequencing. I mean the ability to actually use compute right now is so awesome. The field is rebooting itself in real time, you know, pun intended. So it's a really good thing. More compute makes things go faster, especially with more data. So HPC encapsulates all the engineering behind it. A lot of robotics coming in the future. All this is gonna be about the edge. You're seeing a lot more hardware making noise around things that are new use cases. You know, your Apple Watch, that's very high functionality, to a cell tower, cars. Again, high performance computing hits all these new use cases. >>It, yeah, it absolutely does.
I mean high performance computing touches pretty much every aspect of our lives in some capacity at this point, including how we drive our cars to get to the studio here in Palo Alto. Do you think that we're entering an era when all of this is about to scale exponentially, versus some of the linear growth that we've seen in the space, due to the frustration of some of us in the hardware world the last five to 10 years? >>Well, it's a good question. I think everyone has seen Moore's law, right? That's been well documented. I think the world's changing. You're starting to see the trend of more hardware that's specialized. DPUs are now out there, you got GPUs, you're seeing bolt-on hardware accelerators, you got software abstraction layers. So essentially it's a software industry that's impacted the hardware. So hardware really is software too, and there's a lot more software in there. Again, system software's a lot different. So I think it's boomeranging back up. I think there's an inflection point, because if you look at cyber security and physical devices, they all kind of play in this world where they need compute at the edge. Edge is gonna be a big use case. You can see Dell Technologies there. I think they have a really big opportunity to sell more hardware. Hewlett Packard Enterprise, others, these are old school >>Box companies. >>So I think the distributed nature of cloud and hybrid and multi-cloud, coming on earth and in space, means a lot more high performance computing will be sold and implemented. So that's my take on it. I just think I'm very bullish on this space. >>Ah, yes. And you know me, I get really personally excited about the edge. So I can't wait to see what's in store. Thinking about the variety of vendors and companies, I know we see some of the biggest players in the space. Who are you most excited to see in Dallas coming up in November?
>>You know, Hewlett Packard Enterprise has always been huge on HPC, Dell and HPE both. This is their bread and butter. They've been making servers from minicomputers to Intel-based servers, and now to Arm-based servers, and building their own stuff. So you're gonna start to see a lot more of those players kind of transforming. We're seeing both Dell and HPE transforming, and you're gonna see a lot of chip companies there. I'm sure you're gonna see a lot more younger talent. A lot of young talent are coming, like I said, robotics, and the new physical world we're living in is software- and IP-connected. So it's not like the old school operational technology systems. You have IP-enabled devices, and that opens up all kinds of new challenges around security vulnerabilities and also capabilities. So I think it's gonna be a lot younger crowd than we usually see this year. And you're seeing a lot of students, and again universities participating. >>Yeah, I noticed that they have a student competition that's a big part of the event. I'm curious, when you say younger, are you expecting to see new startups and some interesting players in the space that maybe we haven't heard of before? >>I think we might see more use cases that are different. When I say younger, I don't mean so much on the demographic, but younger, new ideas, right? So I think you're gonna see a lot of smart people coming in that might not have the lens from when it started in 1988, and remember, 1988 to now, so much has changed. In fact we just did a segment on the Cube called "Does hardware matter," because for many years, over the past decades, it was like hardware doesn't matter, it's all about the cloud and we're not a box company. Boxes are coming back. So you know, that's gonna be music to the ears of Dell Technologies, HPE, the world. But hardware does matter, and you're starting to see that here.
So I think you'll see a lot of younger thinking, a little bit different thinking. You're gonna start to see more confluence of things like machine learning. You're gonna see security, and again, I mentioned space. These are areas where you're starting to see where hardware and high performance is gonna be part of all the new systems. And industrial IoT is gonna be a big part too. >>Yeah, absolutely. I was thinking about some of these use cases. I don't know if you heard about the new drones they're sending up into hurricanes, but what an edge use case, how durable it has to be, and the rapid processing that has to happen as a result of the software. So many exciting things we could dive down the rabbit hole with. What can folks expect to see here on the Cube during Supercomputing? >>Well, we're gonna talk to a lot of the leaders on the Cube from this community, mostly from the practitioner's side, the expert side. We're gonna hear from Dell Technologies, Hewlett Packard Enterprise, and a lot of other executives who are investing. We wanna find out what they're investing in, and how it ties into the cloud, cuz the cloud has become a great environment for multi-cloud with more grid-like capability. And what's the future? Where's the hardware going? What's the evolution of the components? How is it being designed? And then how does it fit into the overall open source software market that's booming right now, that cloud technology has been driving. So we wanna try to connect the dots on the Cube. >>Great. So we have a very easy task ahead of us. Hopefully everyone will enjoy the content and the guests that we bring to our table here from the show floor. When we think about it, do you think there's gonna be any trends that we've seen in the past that might not be there? Has anything phased out of the supercomputing world? You're someone who's been around this game for a while.
>>Yeah, that's a good question. I think the game is still the same, but the players might shift a little bit. So for example, with the supply chain challenges, you might see that impact. We're gonna watch that very closely to find out what components are gonna be in what. But I'm thinking more about system architecture, because the use cases are interesting. You know, I was talking to Dell folks about this. They have standard machines, but then they have use cases for, how do you put the equivalent of a data center next to, say, a mobile cell tower? Because now you have the capability for wireless and 5G. You gotta put data center-like capability and capacity for compute at these edges, in a smaller form factor. How do you do that? How do you handle all the IO? All these things are, again, nerdy conversations, but they're gonna be very relevant. So I like the new use cases of putting more compute in places it's never been before. So that to me is where the exciting part is. Like okay, who's really got the real deal going on here? That's gonna be the fun part. >>I think it allows for a new era in innovation, and I don't say that lightly, but when we can put processing power literally anywhere, it certainly thrills the minds of hardware nerds. Like me, I'm OG hardware. I know you are too. I won't reveal your roots, but I got my start in hardware product design back in the day. So I can't wait. >>Well then, you know hardware. When you talk about processing power and memory, you can never have enough compute and memory. It's like internet bandwidth, you can never have enough bandwidth, right? Network power, compute power, bring it on, you know. >>Even battery life, simple things like that when it comes to hardware, especially when we're talking about being on the edge. It's just like our cell phones.
Our cell phones are an edge device. >>And when you combine cloud, on-premises, hybrid and then multi-cloud and edge, you now have the ability to get compute at capabilities that were never fathomable in the past. And most of the creativity is limited by the hardware capability, and now that's gonna be unleashed. I think a lot of creativity. That's again back to the use cases, and yes, again, you're gonna start to see more industrial stuff come out at the edge. And I love the edge. I think this is a great use case for the edge. >>Me too. Absolutely. So, bold claim. I don't know if you're ready to draw a line in the sand. Are we on the precipice of a hardware renaissance? >>Definitely, no doubt about it. When we did the "Does hardware matter" segment, it was really kind of to test, you know, everyone's talking about the cloud, but cloud also runs on hardware. You look at what AWS is doing, for instance, all the innovation. It's at robotics, it's at the physical level, you know, you got physics. I mean they're working on such low-level engineering, and the speed difference. I think from a workload standpoint, whoever can get the best out of the physics and the materials will have a winning formula, cause you can have a lot more processing with specialized processors. That's a new system architecture. And so to me, HPC, high performance computing, fits perfectly into that construct, because now you got more power so that software can be more capable. And at the end of the day, nobody wants to write an app or a workload to run on bad hardware and not have enough compute. >>Amen to that. On that note, John, how can people get in touch with you and us here on the show in anticipation of Supercomputing? >>Of course, hit the Cube handle, @theCUBE, and me at Furrier, my last name, F-U-R-R-I-E-R. And of course my DMs are always open for scoops and story ideas. And go to siliconangle.com and thecube.net. >>Fantastic.
John, I look forward to joining you in Dallas, and thank you for being here with me today. And thank you all for joining us for this Supercomputing preview. My name is Savannah Peterson, and we're here on the Cube, live... well, not live, prerecorded, from Palo Alto. And I look forward to seeing you for some high performance computing excitement soon.
Seamus Jones & Milind Damle
>>Welcome to the Cube's continuing coverage of AMD's fourth generation EPYC launch. I'm Dave Nicholson, and I'm joining you here in our Palo Alto Studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD. And we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. >>Welcome to the Cube. So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions and ensures that we can look at the performance metrics, benchmarks, and performance characteristics, so that way we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more, the relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. So, you know, ever since AMD re-entered the server space, we've had a very close relationship.
You know, it's one of those things where we are offering solutions to our customers no matter what generation of portfolio they're demanding, whether from a competitor or AMD; we offer a portfolio of solutions that are out there. What we're finding is that with their generational improvements, they're just getting better and better and better. Really exciting things happening from AMD at the moment, and we're seeing that as we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So excited about the history. You know, my team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates, around the benchmark testing and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. You know, it's hard to find stronger partners than Seamus and Dell. With AMD's second generation EPYC server CPUs, we already had indisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and at the CPU level. Everybody focuses on the high core counts, but there's also DDR5, the memory, the IO, and the storage subsystem. So we believe we have a fantastic performance, performance per dollar, and performance per watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>Well, Seamus, go ahead. >>What I'd add, Dave, is that through the partnership that we've had, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform.
That means that customers can get the most out of their compute infrastructure. >>So this is gonna be a big question moving forward as next generation platforms are rolled out. There's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, you wanna hit that first, or are you guys integrated? >>Absolutely, yeah. So I'll tell you what, at the moment, customers really can't afford not to upgrade, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five or seven year old servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can do a dramatic consolidation, sometimes 5, 7, or 8 to one, depending on which platform they have historically and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance.
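The consolidation argument Seamus makes lends itself to a quick back-of-the-envelope sketch. Every figure below, the server counts, wattages, electricity price, and the 7:1 ratio, is a hypothetical assumption for illustration, not a Dell or AMD number:

```python
# Back-of-the-envelope server-consolidation estimate: replace a fleet of
# legacy servers at a given consolidation ratio and compare the annual
# electricity cost. All input values are hypothetical assumptions.

def consolidation_savings(legacy_servers, ratio, legacy_watts, new_watts,
                          cost_per_kwh=0.15, hours_per_year=8760):
    """Return (new_server_count, annual_power_savings_usd)."""
    new_servers = -(-legacy_servers // ratio)  # ceiling division
    legacy_cost = legacy_servers * legacy_watts / 1000 * hours_per_year * cost_per_kwh
    new_cost = new_servers * new_watts / 1000 * hours_per_year * cost_per_kwh
    return new_servers, legacy_cost - new_cost

# Example: 70 aging 450 W servers consolidated 7:1 onto 600 W replacements.
count, savings = consolidation_savings(70, 7, 450, 600)
print(count, round(savings))  # 10 33507
```

Even before counting licensing, rack space, and admin time, the power line item alone can move the TCO needle at the consolidation ratios mentioned above.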
That was real, that was a real challenge for us because of the thermal challenges. I mean, you think GPUs are going up 300, 400 watt, these CPUs at 96 core are, are quite demanding thermally, but what we're able to do is through some, some unique smart cooling engineering within the, the PowerEdge portfolio, we can take a look at those platforms and make the most efficient use case by having things like telemetry within the platform so that way we can dynamically change fan speeds to get customers the best performance without throttling based on their need. >>Melin the cube was at the Supercomputing conference in Dallas this year, supercomputing conference 2022, and a lot of the discussion was around not only advances in microprocessor technology, but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party? It's kind of a potluck, you know, we, we, we, we mentioned P C I E Gen five or 5.0, whatever you want to call it, new DDR storage cards, Nicks, accelerators, all of those, all of those things. How do you keep that straight when those aren't things that you actually build? >>Well, excellent question, Dave. And you know, as we are developing the next platform, obviously the, the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket, we are looking at it from the memory subsystem from the IO subsystem, P c i lanes for storage is a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are, you know, more important at the higher end for some customers h HPC space, some of the AI applications. 
But on the lower end you have database applications or some other is s v applications that care a lot about those. So it's, I guess different things matter to different folks across verticals. >>So we partnered with Dell very early in the cycle, and it's really a joint co-engineering. Shamus talked about the focus on AI with TP C X xci, I, so we set five world records in that space just on that one benchmark with AD and Dell. So fantastic kick kick off to that across a multitude of scale factors. But PPP c Xci is not just the only thing we are focusing on. We are also collaborating with Dell and des e i on some of the transformer based natural language processing models that we worked on, for example. So it's not just a steep CPU story, it's CPU platform, es subsystem software and the whole thing delivering goodness across the board to solve end user problems in AI and and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective. So I know it's easy to get excited about world records and, and they're, they're fantastic. I know Shamus, you know, that, you know, end user customers might, might immediately have the reaction, well, I don't need a Ferrari in my data center, or, you know, what I need is to be able to do more with less. Well, aren't we delivering that also? And you know, you imagine you milland you mentioned natural, natural language processing. Shamus, are you thinking in 2023 that a lot more enterprises are gonna be able to afford to do things like that? I mean, what are you hearing from customers on this front? >>I mean, while the adoption of the top bin CPU stack is, is definitely the exception, not the rule today we are seeing marked performance, even when we look at the mid bin CPU offerings from from a m d, those are, you know, the most common sold SKUs. 
And when we look at customers implementations, really what we're seeing is the fact that they're trying to make the most, not just of dollar spend, but also the whole subsystem that Melin was talking about. You know, the fact that balanced memory configs can give you marked performance improvements, not just at the CPU level, but as actually all the way through to the, to the application performance. So it's, it's trying to find the correct balance between the application needs, your budget, power draw and infrastructure within the, the data center, right? Because not only could you, you could be purchasing and, and look to deploy the most powerful systems, but if you don't have an infrastructure that's, that's got the right power, right, that's a large challenge that's happening right now and the right cooling to deal with the thermal differences of the systems, might you wanna ensure that, that you can accommodate those for not just today but in the future, right? >>So it's, it's planning that balance. >>If I may just add onto that, right? So when we launched, not just the fourth generation, but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Shamus correctly said, it's not just that one core count opn, it's, it's the whole stack. And we believe with our four gen CPU processor stack, we've simplified things so much. We don't have, you know, dozens and dozens of offerings. We have a fairly simple skew stack, but we also have a very efficient skew stack. So even, even though at the top end we've got 96 scores, the thermal budget that we require is fairly reasonable. And look, with all the energy crisis going around, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per want. 
And so we believe with this generation, we really delivered not just on raw performance, but also on performance per dollar and performance per one. >>Yeah. And it's not just Europe, I'm, we're, we are here in Palo Alto right now, which is in California where we all know the cost of an individual kilowatt hour of electricity because it's quite, because it's quite high. So, so thermals, power cooling, all of that, all of that goes together and that, and that drives cost. So it's a question of how much can you get done per dollar shame as you made the point that you, you're not, you don't just have a one size fits all solution that it's, that it's fit for function. I, I'm, I'm curious to hear from you from the two of you what your thoughts are from a, from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public, some will write stories for you based on prom, some will create images for you. One of the more popular ones will create sort of a, your superhero alter ego for, I, I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things. They can think on their own in a certain way. W what do, what do you see the future of that looking like over the next year in terms of enterprises, what they're going to do for it with it >>Melan? Yeah, I can go first. Yeah, yeah, yeah, yeah, >>Sure. Yeah. Good. >>So the couple of examples, Dave, that you mentioned are, I, I guess it's a blend of novelty and curiosity. You know, people using AI to write stories or poems or, you know, even carve out little jokes, check grammar and spelling very useful, but still, you know, kind of in the realm of novelty in the mainstream, in the enterprise. Look, in my opinion, AI is not just gonna be a vertical, it's gonna be a horizontal capability. 
We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like image classification or object detection that you talked about in the core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch, but we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. AI as an outcome is really something that companies are adopting, and the frameworks you're now seeing as the novelty pieces that Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and within enterprises for the past, let's say, five, six, seven years, right? The fact that you have object detection within manufacturing, to be able to do defect detection on manufacturing lines; now that can be done on edge platforms, all the way at the device. So you no longer have to do things only in the data center; you can bring it right out to the edge and have that high-performance inferencing, not necessarily training at the edge, but the inferencing models especially, so that you can have more and better use cases for some of these instances. Things like smart cities with video detection. >>So that way they can see; especially during covid, we saw a lot of hospitals and a lot of customers that were using image and spatial detection within their video feeds to be able to determine who and what employees were at risk during covid.
So there's a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my kids, my daughters, love that portion of it, but really what's been happening has been exciting for quite a period of time in the enterprise space. We're just now starting to actually see those come to light in more of a consumer-relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips. We do have the ability to talk more about the framework and infrastructure that's right out at the edge. You know, Dave, in the past you've said things like the data center of 20 years ago is now in my hand as my cell phone. That's right. And that's a fact, and it's exciting to think where it's going to be in the next 10 or 20 years. >>One terabyte, baby. Yeah. One terabyte. Yeah. It's mind-boggling. Exactly. Yeah. And it makes me feel old. >>Yeah, >>Me too. And Shamus, that all sounded great. All I want is a picture of me as a superhero, though, so you guys are already way ahead of the curve with that. On that note, Shamus, wrap us up with kind of a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest-gen architecture from AMD. >>Absolutely. So within the TPCx-AI frameworks that Milind and my team have worked on together, we're seeing unprecedented price performance.
So the fact that you can get a 220% uplift gen-on-gen for some of these benchmarks, and you can have a five-to-one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in reduction in the number of units you need to deploy and in the amount of performance you can get per unit. You know, Milind had mentioned earlier CPU performance and performance per watt; specifically on the two-socket 2U platform using the fourth-generation AMD EPYC, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every generation of servers, that is a huge leap forward. >>That, combined with 121% higher SPEC scores as a benchmark, those are huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Shamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you. It's been a very interesting conversation. Thanks for being with us, both of you. Thanks for joining us here on theCUBE for our coverage of AMD's fourth-generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.
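To make the consolidation and efficiency figures quoted above concrete, here is a rough back-of-the-envelope sketch. The 5:1 consolidation ratio and 55% performance-per-watt gain come from the interview; the fleet size and per-server power draws are hypothetical illustrative assumptions, not figures from the conversation or from any published benchmark.

```python
# Rough arithmetic behind a 5:1 server consolidation and a 55%
# performance-per-watt improvement. Fleet size (100 legacy servers)
# and per-server power draws (500 W legacy, 800 W new) are
# hypothetical assumptions for illustration only.

legacy_servers = 100           # hypothetical legacy fleet
consolidation_ratio = 5        # 5:1, as quoted in the interview
new_servers = legacy_servers // consolidation_ratio

legacy_power_w = 500           # assumed per-server draw, legacy gen
new_power_w = 800              # assumed per-server draw, new gen

fleet_power_before = legacy_servers * legacy_power_w
fleet_power_after = new_servers * new_power_w
power_reduction = 1 - fleet_power_after / fleet_power_before

# A 55% perf/watt gain also means the same throughput needs only
# 1 / 1.55 of the energy, roughly a 35% reduction for equal work.
same_work_energy_factor = 1 / 1.55

print(f"Servers: {legacy_servers} -> {new_servers}")
print(f"Fleet power: {fleet_power_before} W -> {fleet_power_after} W "
      f"({power_reduction:.0%} reduction)")
print(f"Energy for equal throughput: {same_work_energy_factor:.0%} of baseline")
```

Even with generous assumptions about the new platform drawing more power per box, the fleet-level reduction dominates, which is the point Shamus makes about saving energy, space, and time at deployment scale.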