

Karan Batta and Kris Rice, Oracle | Supercloud22


 

>> Welcome back to Supercloud22, #Supercloud22. This is Dave Vellante. In 2019, Oracle and Microsoft announced a collaboration to bring interoperability between OCI, Oracle Cloud Infrastructure, and Azure clouds. It was Oracle's initial foray into so-called multi-cloud, and we're joined by Karan Batta, who's the Vice President for Product Management at OCI, and Kris Rice, who is the Vice President of Software Development at Oracle Database. And we're going to talk about how this technology's evolving and whether it fits our view of what we call supercloud. Welcome gentlemen, thank you.

>> Thanks for having us.

>> So you recently, just last month, announced the new service. It extends the initial partnership, the Oracle Interconnect for Microsoft Azure, and you refer to this as a secure private link between the two clouds. It crosses 11 regions around the world, with under two milliseconds data transmission; sounds pretty cool. It enables customers to run Microsoft applications against data stored in Oracle databases without any loss in efficiency or, presumably, performance. So we use this term supercloud to describe a service or sets of services built on hyperscale infrastructure that leverages the core primitives and APIs of an individual cloud platform, but abstracts that underlying complexity to create a continuous experience across more than one cloud. Is that what you've done?

>> Absolutely. I think it starts at the top layer, in terms of just making things very simple for the customer, right? I think at the end of the day we want to enable true workloads running across two different clouds, where you're potentially running maybe the app layer in one and the database layer or the backend in another. And the integration, I think, starts with, you know, making it easy to use, right? So you can start with things like, okay, can you log into your second or your third cloud with the first cloud provider's credentials? Can you make calls against another cloud using another cloud's APIs? Can you peer the networks together? Can you make it seamless? I think those are all the components that are kind of the ingredients to making a multi-cloud or supercloud experience successful.

>> Oh, thank you for that, Karan. So I guess the question for Kris is, I'm trying to understand what you're really solving for. What specific customer problems are you focused on? What's the service optimized for? Presumably it's database, but maybe you could double-click on that.

>> Sure. So, I mean, of course it's database. So it's a super fast network, so that we can split the workload across two different clouds, leveraging the best from both. But above the networking, what we had to do is we had to think about what a true multi-cloud, or what you're calling supercloud, experience would be. It's more than just making the network bytes flow. So what we did is we took a look, as Karan hinted at, right, at where is my identity? Where is my observability? How do I connect these things across so it feels native to that other cloud?

>> So what kind of engineering do you have to do to make that work? It's not just plugging stuff together. Maybe you could explain in a little bit more detail the resources that you had to bring to bear and the technology behind the architecture.

>> Sure. I think it starts with what our goal actually was, right? Our goal was to provide customers with a fully managed experience. What that means is we had to basically create a brand new service.
So, we have obviously an Azure-like portal and an experience that allows customers to do this, but under the covers we actually have a fully managed service that manages the networking layer and the physical infrastructure, and it actually calls APIs on both sides of the fence. It actually manages your Azure resources, creates them, but it also interacts with OCI at the same time. And under the covers this service actually takes Azure primitives as inputs, and then it essentially translates them into OCI actions. So we actually truly integrated this as a service that's essentially built as a PaaS layer on top of these two clouds.
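To make the "Azure primitives in, OCI actions out" idea concrete, here is a minimal sketch of what one such translation step could look like. Everything below is a hypothetical illustration, not Oracle's actual implementation: the mapping tables and the action format are invented for this example, though the Azure resource type, VM size, and region names are real identifiers used for flavor.

```python
from dataclasses import dataclass

# Hypothetical inbound request described in Azure terms.
@dataclass
class AzurePrimitive:
    resource_type: str   # e.g. "Microsoft.Compute/virtualMachines"
    size: str            # e.g. "Standard_D4s_v3"
    region: str          # e.g. "eastus"

# Hypothetical mapping tables; a real control plane would maintain far
# richer catalogs and call both clouds' SDKs rather than print actions.
SIZE_MAP = {"Standard_D4s_v3": "VM.Standard.E4.Flex"}
REGION_MAP = {"eastus": "us-ashburn-1"}  # interconnect-paired regions

def translate_to_oci(p: AzurePrimitive) -> dict:
    """Translate one Azure-style primitive into an OCI-style action."""
    if p.resource_type != "Microsoft.Compute/virtualMachines":
        raise ValueError(f"unsupported primitive: {p.resource_type}")
    return {
        "action": "LaunchInstance",
        "shape": SIZE_MAP[p.size],
        "region": REGION_MAP[p.region],
    }

print(translate_to_oci(AzurePrimitive(
    "Microsoft.Compute/virtualMachines", "Standard_D4s_v3", "eastus")))
```

The design point is that the customer speaks one cloud's vocabulary and the control plane, not the customer, owns the cross-cloud mapping.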
>> So the customer doesn't really care, or know... maybe they know, 'cause they might be coming through an Azure experience, but you can run work on either Azure and/or OCI, and it's a common experience across those clouds. Is that correct?

>> That's correct. So like you said, the customer does know there is a relationship with both clouds, but thanks to all the things we built, there's this thing we invented, we created, called a multi-cloud control plane. This control plane operates against both clouds at the same time to make it as seamless as possible, so that maybe they don't notice. You know, the power of the interconnect is extremely fast networking, as fast as what we could see inside a single cloud. If you think about how big a data center might be from edge to edge in that cloud, going across the interconnect makes it so that it's no longer important that the workload is spanning two clouds.

>> So you say extremely fast networking. I remember I wrote a piece a long time ago: Larry Ellison loves InfiniBand. I presume we've moved on from that, but maybe not. What is that interconnect?

>> Yeah, so it's funny you mention interconnect. You know, my previous history comes from HPC, where inside OCI today we've moved from InfiniBand, which is part of Exadata's core, to what we call RoCE v2. So that's just another RDMA network. We actually use it very successfully, not just for Exadata, but for the standard compute that we provide to high performance computing customers.

>> And the multi-cloud control plane runs... where does that live? Does it live on OCI? Does it live on Azure? Yes?

>> So it lives on our side, our side of the house, as part of our Oracle OCI control plane. And it is the veneer that makes these two clouds possible, so that we can wire them together. So it knows how to take those Azure primitives and the OCI primitives and wire them together at the appropriate levels.

>> Now I want to talk about this PaaS layer. Part of supercloud, we said, is that to actually make it work you're going to have to have a super PaaS. I know we're taking this term a little far, but it's still instructive, in that what we surmised was you're probably not going to just use off-the-shelf, plain old vanilla PaaS; you're actually going to have a purpose-built PaaS to solve for the specific problem. So as an example, if you're solving for ultra low latency, which I think you're doing, you're probably, no offense to my friends at Red Hat, but you're probably not going to develop this on OpenShift. But tell us about that PaaS layer, or what we call the super PaaS layer.

>> Go ahead, Kris.

>> Well, so you're right, we weren't going to build it out on OpenShift. So we have Oracle OCI, and, you know, the standard is Terraform. So the back end of everything we do is based around Terraform. Today, what we've done is we built that control plane, and it will be API drivable, it'll be drivable from the UI, and it will let people operate and create primitives across both sides. So you can... you mentioned developers. Developers love automation, right, because it makes our lives easy. We will be able to automate a multi-cloud workload from the ground up; config is code these days. So we can configure an entire multi-cloud experience from one place.

>> So double-click, Kris, on that developer experience. What is that like? Are they using the same tool set irrespective of which cloud they're running on? And is it specific to this service, or is it more generic, across other Oracle services?

>> There's two parts to that. So one is, we've only onboarded a portion. So the database portfolio and other services will be coming into this multi-cloud. For the majority of Oracle cloud, the automation, the config layer, is based on Terraform. So using Terraform, anyone can configure everything from a mid-tier to an Exadata, all the way, soup to nuts, from the smallest thing possible to the largest. What we've not done yet is integrate truly with the Azure API, command-line drivable. That is coming in the future; it is on the roadmap, it is coming. Then they could get into one tool, but right now they would have half their automation for the multi-cloud config on the Azure tool set and half on the OCI tool set.
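Rice's "config is code... from one place" pattern is easy to picture with a small sketch. The mechanism he actually describes is Terraform; the Python below is only a stand-in stub that mimics the shape of such a config and its apply step, with invented resource types, names, and CIDRs:

```python
# One declarative config spanning both clouds; in the real system this
# role is played by Terraform and its provider plugins.
MULTICLOUD_CONFIG = {
    "azure": [
        {"type": "vnet", "name": "app-vnet", "cidr": "10.0.0.0/16"},
        {"type": "vm",   "name": "app-01",   "subnet": "app-vnet"},
    ],
    "oci": [
        {"type": "vcn",      "name": "db-vcn", "cidr": "10.1.0.0/16"},
        {"type": "database", "name": "orders", "vcn": "db-vcn"},
    ],
    "peering": {"from": "app-vnet", "to": "db-vcn"},  # interconnect link
}

def apply(config: dict) -> None:
    """Walk one config and 'apply' it to both clouds in order."""
    for cloud in ("azure", "oci"):
        for resource in config[cloud]:
            # A real tool would invoke each cloud's provider here.
            print(f"[{cloud}] ensure {resource['type']} {resource['name']}")
    p = config["peering"]
    print(f"[interconnect] peer {p['from']} <-> {p['to']}")

apply(MULTICLOUD_CONFIG)
```

The point of the pattern is the single source of truth: one description, one apply step, two clouds.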
>> But we're not crazy in saying, from a roadmap standpoint, that will provide some benefit to developers, and is a reasonable direction for the industry generally, but Oracle and Microsoft specifically.

>> Absolutely. I'm a developer at heart, and so one of the things we want to make sure is that developers' lives are as easy as possible.

>> And is there a metadata management layer or intelligence that you've built in to optimize for performance or low latency or cost across the respective clouds?

>> Yeah, definitely. I think latency's going to be an important factor. The service that we've initially built isn't going to serve the sort of tens-of-microseconds use cases, but most applications, the enterprise applications that are running on top of the database, are in the several-millisecond range. And we've actually done a lot of work on the network pairing side to make sure that when we launch these resources across the two clouds, we actually pick the right site. We pick the right region, we pick the right availability zone or domain. So we actually do the due diligence under the covers, so the customer doesn't have to do the trial and error and try to find the right latency range. And this is actually one of the big reasons why we only launched the service in the interconnect regions. Even though we have, I think, close to 40 regions at this point in OCI, this service is only built for the regions where we have an interconnect relationship with Microsoft.

>> Okay, so you started with Microsoft in 2019, and you're going deeper now in that relationship. Is there any reason that you couldn't... I mean, technically, what would you have to do to go to other clouds? You talked about understanding the primitives and leveraging the primitives of Azure. Presumably if you wanted to do this with AWS or Google or Alibaba, you would have to do similar engineering work, is that correct? Or does what you've developed just kind of port over to any cloud?

>> Yeah, that's absolutely correct, Dave. I think Kris talked a lot about the multi-cloud control plane, right? That's essentially the control plane that goes and does stuff on other clouds. We would have to essentially go and build that level of integration into the other clouds. And I think, as we get more popularity, and as more products come online through these services, we'll listen to what customers want. Maybe it's the other way around too, Dave; maybe it's the fact that they want to use Oracle cloud, but they want to use other complementary services within Oracle cloud. So I think it can go both ways. I think the market and the customer base will dictate that.

>> Yeah. So if I understand that correctly, somebody from another cloud, Google Cloud, could say, hey, we actually want to run this service on OCI 'cause we want to expand our market. And if TK gets together with his old friends and figures that out... but we're just hypothesizing here. But, like you said, it can go both ways. And I have another question related to that. So, multi-cloud, okay, great. Supercloud. How about the Edge? Do you ever see a day where that becomes part of the equation? Certainly the near Edge would, you know, a Home Depot or Lowe's store or a bank. But what about the far Edge, the tiny Edge? Can you talk about the Edge and where that fits in your vision?

>> Yeah, absolutely. I think Edge is, interestingly, getting fuzzier and fuzzier day by day, the term. Obviously every cloud has their own sort of philosophy on what Edge is, right? We have our own. It starts from... if you do want to do far Edge, we have devices like our Roving Edge Devices, which are our ruggedized servers that talk back to our control plane in OCI. You could deploy those things into war zones and things like that, underground. But then we also have things like Cloud@Customer, where customers can deploy components of our infrastructure, like compute or Exadata, into a facility where they only need that certain capability. And then a few years ago we launched what's now called Dedicated Region. And that actually is a different take on Edge in some sense, where you get the entire capability of our public commercial region, but within your facility. So imagine if a customer was to essentially point a finger at a commercial map and say, hey, look, that region is just mine. Essentially that's the capability that we're providing to our customers, where if you have white space, if you have a facility, if you're exiting out of your data center space, you could essentially place an OCI region within your confines, behind your firewall. And then you could interconnect that to a cloud provider if you wanted to, and get the same multi-cloud capability that you get in a commercial region. So we have all the spectrums of possibilities here.

>> Guys, super interesting discussion. It's very clear to us that the next 10 years of cloud ain't going to be like the last 10. There's a whole new layer developing. Data is a big key to that. We see industries getting involved. We obviously didn't get into the Oracle Cerner acquisition; it's a little too early for that. But we've actually predicted that companies like Cerner, and you're seeing it with Goldman Sachs and Capital One, are actually building services on the cloud. So this is a really exciting new area, and we really appreciate you guys coming on the Supercloud22 event and sharing your insights. Thanks for your time.

>> Thanks for having us.

>> Okay. Keep it right there. #Supercloud22. We'll be right back with more great content right after this short break.
(lighthearted marimba music)

Published Date : Aug 10 2022



Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory


 

(upbeat music)

>> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say, Dave Nicholson, my co-host... my name is Paul Gillin. I've been attending trade shows for 40 years, Dave, and I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about... trade shows are typically so speeds-and-feeds. They're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense?

>> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't necessarily have your depth. No, but seriously, Paul, totally, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things, when you have Lawrence Livermore Labs, that we're going to be talking to here in a second, it's a completely different story. You have all of the academics, you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week.

>> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions at Dell, and Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps, I don't know, you can correct me on this, but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory, and it is always on the leading edge of what's going on in supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today.

>> Sure, glad to be here.

>> Thanks for having us.

>> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory.

>> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at, essentially, what was coming down the pipeline, you know, what were the customer requirements, what did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about it for two years, and now it's finally coming to fruition, and we're actually delivering systems, delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years, and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful.

>> At Lawrence Livermore, what drives your computing requirements, I guess? You're working on some very, very big problems, but also a lot of very complex problems. How do you decide what you need to procure to address them?

>> Well, that's a difficult challenge. I mean, our mission is a national security mission, dealing with making sure that we do our part to provide high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation and Computing program.
Its goal is to provide the computing power to make sure that the US nuclear stockpile is safe, secure, and effective. So how do we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the advanced technology systems. Those are the ones you've heard about a lot; they're pushing towards exascale, with GPU technologies incorporated into them. We also have a second platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell, on the latest generation of those. Those systems are a little more conservative; they're right now CPU-only driven, but they're also intended to be the everyday workhorses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use, usually on a day-to-day basis. They run a lot of small to medium size jobs that you need to do to figure out how to most effectively use what workloads you need to move to the even larger systems to accomplish our mission goals.

>> The workhorses.

>> Yeah.

>> What have you seen here these last few days of the show? What excites you? What are the most interesting things you've seen?

>> There's all kinds of things that are interesting. Probably the most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry on, and co-designing with them on, for several years before the public actually sees them. That's always an exciting part of the conference as well. Specifically with CTS-2, it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed roughly about 1,600 nodes now, but that'll ramp up to over 6,000 nodes over the next three or four months.

>> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there.

>> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures, as you were asking how we define these things: it's the labs coming together. Those three laboratories define what we need for that architecture. We have a joint procurement that is run out of Livermore, but then the systems are deployed at all three laboratories, and they serve the programs that I mentioned for each laboratory as well.

>> I've worked in this space for a very long time, you know, I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating. It's like you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDAs in terms of what's been shared with you; you may be limited in terms of what you can share with vendors. Does that cause inefficiency?

>> To some degree.
I mean, we do a good job within the NNSA of understanding what our applications need, and then mapping that to technical requirements that we can talk about with vendors. We also have ways to work in between; we've done this for many years. A recent example, of course, is with the Exascale Computing Program and some of the things it's doing: creating proxy apps, or mini apps, that are smaller versions of some of the applications that are important to us. Some application areas are important to us: hydrodynamics, materials science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit of that with CTS-2, not as much in CTS as maybe in the ATS platforms, but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms.
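For readers unfamiliar with the term, a proxy app distills the performance-critical kernel of a large, often restricted production code into something small and openly shareable. Below is a toy sketch of the idea, a 1-D diffusion-style stencil sweep of the kind a hydrodynamics proxy might isolate; it is an invented illustration, not one of the labs' actual proxy apps:

```python
import numpy as np

# Toy 'mini app': an explicit 1-D stencil sweep, the sort of
# memory-bandwidth-bound kernel a real proxy app isolates for co-design.
def stencil_sweep(u: np.ndarray, alpha: float, steps: int) -> np.ndarray:
    for _ in range(steps):
        # Three-point stencil; boundary cells held fixed for simplicity.
        u[1:-1] += alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return u

# Problem size and step count stand in for a production workload.
u = np.zeros(1_000_000)
u[500_000] = 1.0          # initial spike
u = stencil_sweep(u, alpha=0.25, steps=100)
print(f"peak after diffusion: {u.max():.6f}")
```

Because the kernel is tiny and unclassified, a vendor can profile and tune it on prototype hardware without ever seeing the application it stands in for.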
>> Now is Dell one of your co-design partners?

>> In CTS-2, absolutely, yep.

>> And what aspects of CTS-2 are you working on with Dell?

>> Well, the architecture itself was the first thing we worked with them on. We had a procurement come out, and they bid an architecture on that. We had worked with them previously on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornelis Networks Omni-Path network, which we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory; they get integrated into our production computing environment. Dell is really responsible for designing these systems and delivering them to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that, called TOSS, for Tri-Lab Operating System Stack. It's based on Red Hat Enterprise Linux. But the goal there is that it allows us a common user environment, a common simulation environment, across not only CTS-2 but older systems we have, and even the larger systems that we'll be deploying as well. So from a user perspective they see a common user interface, a common environment, across all the different platforms that they use at Livermore and the other laboratories.

>> And Armando, what does Dell get out of the co-design arrangement with the lab?

>> Well, we get to make sure that they're successful. But the other big thing that we want to do is... typically, when you think about Dell and HPC, a lot of people don't make that connection. And so what we're trying to do is make sure that they know that, hey, whether you're a workgroup customer at the smallest end or a supercomputer customer at the highest end, Dell wants to make sure that we have the right portfolio to match any needs across this. But what we were really excited about is that this is kind of our big CTS-2, the first thing we've done together. And so hopefully this has been successful, we've made Matt happy, and we look forward to what we can do with bigger and bigger things in the future.

>> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered"?

>> Yeah, that would fly.

>> I mean, that would be right, right? And I have to ask you the question directly, and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs?

>> Yeah, you'd have to ask the PR office.

>> Okay, okay. (all laughing) Good answer.

>> No, but it is fascinating, because to a degree it's like you could say, yeah, we're working together, but if you really want to dig into it, it's like, "Well, I kind of can't tell you exactly how some of this stuff works." Do you consider anything that you do from a technology perspective, not what you're doing with it, but the actual stack... do you try to design proprietary things into the stack, or do you say, "No, no, no, we're going to go with standards, and then what we do with it is proprietary and secret"?

>> Yeah, it's more the latter.

>> The latter? Yeah, yeah, yeah. So you're not going to try to reverse engineer the industry?

>> No, no. We want the solutions that we develop to enhance the industry, to be able to apply to a broader market, so that we can gain from the volume of that market and the lower cost that it would enable, right? If we go off and develop more and more customized solutions, that can be extraordinarily expensive. And so we're really looking to leverage the wider market, but do what we can to influence it, to develop key technologies that we and others need, that can enable us in the high performance computing space.

>> We were talking with Satish Iyer from Dell earlier about validated designs, Dell's reference designs for pharma and for manufacturing in HPC. Armando, are you seeing HPC, traditionally more of an academic research discipline, beginning to come together with commercial applications? And are these two markets beginning to blend?

>> Yeah, I mean, so here's what's happening: you have this convergence of HPC, AI, and data analytics. And when you have that combination of those three workloads, they're applicable across many vertical markets, right? Whether it's financial services, whether it's life sciences, government, and research. But what's interesting, and Matt won't brag about it, is that a lot of stuff that happens in the DOE labs trickles down to the enterprise space, trickles down to the commercial space, because these guys know how to do it at scale, they know how to do it efficiently, and they know how to hit the mark. And so a lot of customers say, "Hey, we want what CTS-2 does," right? And so it's very interesting. The way I love it is their process, the way they do the RFP process. Matt talked about the benchmarks and helping us understand, hey, here's kind of the mark you have to hit. And then at the same time, if we make them successful, then obviously it's better for all of us, right? You know, I want a secure nuclear stockpile, so I hope everybody else does as well.

>> The software stack you mentioned, I think, Tia?

>> TOSS.

>> TOSS.

>> Yeah.

>> How did that come about? Why did you feel the need to develop your own software stack?

>> It originated back, you know, even 20 years ago, when we first started building Linux clusters, when that was a crazy idea. Livermore and other laboratories were really the first to start doing that, and then pushed them to larger and larger scales. And it was key to have Linux running on that at the time. And so we had the...

>> So 20 years ago you knew you wanted to run on Linux?

>> It was 20 years ago, yeah, yeah.
And we started doing that, but we needed a way to have a version of Linux that we could partner with someone on, that would do the support, just like you get from an OS vendor, right? Security support and other things. But then we layer on top of that all the HPC stuff you need, either to run the system, to set up the system, or to support our user base. And that evolved into TOSS, which is the Tri-Lab Operating System Stack. Now it's based on the latest version of Red Hat Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak, and all that HPC magic is open source. It may be things that we develop, but it's nothing closed source. So all of that's there. We run it across all these different environments, as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we could use to run on multiple systems and start creating that common environment at Livermore, and then eventually at the other laboratories.

>> How is a company like Dell able to benefit from the open source work that's coming out of the labs?

>> Well, when you look at open source... I mean, open source is good for everybody, right? Because if you make an open source tool available, people start essentially using that tool. And if we can make that open source tool more robust and get more people using it, it gets more enterprise-ready. And so with that, you know, we're all about open source, we're all about standards, and really about raising all boats, 'cause that's what open source is all about.

>> And with that, we are out of time. This is our 28th interview of SC22, and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratory. Great discussion. Hopefully it was a good show for you. Fascinating show for us, and thanks for being with us today.

>> Thank you very much.

>> Thank you for having us.

>> Dave, it's been a pleasure.

>> Absolutely.

>> Hope we'll be back next year.

>> Can't believe it went by so fast. Absolutely, at SC23.

>> We hope you'll be back next year. This is Paul Gillin. That's a wrap, with Dave Nicholson, for theCUBE. See you here next time. (soft upbeat music)

Published Date : Nov 17 2022



Brian Payne, Dell Technologies and Raghu Nambiar, AMD | SuperComputing 22


 

(upbeat music)

>> We're back at the SC22 Supercomputing Conference in Dallas. My name's Paul Gillin, with my co-host John Furrier, SiliconANGLE founder. And a huge exhibit floor here: so much activity, so much going on in HPC, and much of it around the chips from AMD, which has been on a roll lately. And in partnership with Dell, our guests are Brian Payne, Dell Technologies, VP of Product Management for ISG mid-range technical solutions, and Raghu Nambiar, corporate vice president of data center ecosystem and application engineering, that's quite a mouthful, at AMD. Gentlemen, welcome.

>> Thank you.

>> Thanks for having us.

>> This has been an evolving relationship between your two companies, obviously a growing one, and Dell was part of the big Genoa rollout, AMD's new chipset, last week. Talk about how that relationship has evolved over the last five years.

>> Yeah, sure. Well, so it goes back to the advent of the EPYC architecture. So we were there from the beginning, partnering well before the launch five years ago, thinking about, hey, how can we come up with a way to solve customer problems, address workloads in unique ways? And that was kind of the origin of the relationship. We came out with some really disruptive and capable platforms, and it's continued since then, all the way to the launch of last week, where we've introduced four of the most capable platforms we've ever had in the PowerEdge portfolio.

>> Yeah, I'm really excited about the partnership with Dell. As Brian said, we have been partnering very closely for the last five years, since we introduced the first generation of EPYC. So we collaborate on, you know, system design, validation, performance benchmarks, and, more importantly, on software optimizations and solutions, to offer an out-of-the-box experience to our customers. Whether it is HPC or databases, big data analytics, or AI.

>> You know, you guys have been on theCUBE, you guys are veterans, 2012, 2014, back in the day. So much has changed over the years. Raghu, you were the founding chair of the TPC for AI. We've talked about the different iterations of PowerEdge servers. So much has changed. Why the focus on these workloads now? What's the inflection point that we're seeing here at Supercomputing? It feels like we've been in this, you know, run the ball, gain a yard, move the chains mode, but I feel like there's a moment where there's going to be an unleashing of innovation around new use cases. Where are the workloads? Why the performance? What are some of those use cases right now that are front and center?

>> Yeah, I mean, if you look at today, the enterprise ecosystem has become extremely complex, okay? People are running traditional workloads, like relational database management systems, alongside a new generation of workloads with AI and HPC, and actually HPC augmented with some of the AI technologies. So what customers are looking for is, as I said, an out-of-the-box experience, where time to value is extremely critical. Unlike in the past, customers don't have the time and resources to run months-long POCs, okay? So that's one area we are focusing on, working closely with Dell, to give an out-of-the-box experience.
Again, you know, the enterprise application ecosystem is really becoming complex, and, as you mentioned, some of the industry standard benchmarks are designed to give a fair comparison of performance, and price-performance, for our end customers. And Brian's team and my team have been working closely to demonstrate our joint capabilities in the AI space with a set of TPCx-AI benchmark results last week; it was a major highlight of our launch.

>> Brian, you've got the demo in the booth at Dell here. Not a demo, the product; it's available. What are you seeing for your use cases that customers are rallying around now, and what are they doubling down on?

>> Yeah, you know, so Raghu I think teed it up well. Data really is the currency of business and of all organizations today. And that's what's pushing people to figure out, hey, both traditional workloads as well as new workloads. So in the traditional workload space you still have ERP systems, like SAP, et cetera, and we've announced world records there: hundred-plus percent improvements in our single socket system, 70% in dual. We actually posted a 40% advantage over the best Genoa result just this week. So, I mean, we're excited about that in the traditional space. But what's exciting, like why are we here, why are people thinking about HPC and AI? It's about how do we make use of that data, that data being the currency, and how do we push in that space? So Raghu mentioned the TPC AI benchmark. We announced, you talk about how we work together, nine world records in that space. In one case it's a 3x improvement over prior generations. So the workloads that people care about are: how can I process this data more effectively? How can I store it and secure it more effectively? And ultimately, how do I make decisions about where we're going, whether it's a scientific breakthrough or a commercial application? That's what's really driving the use cases and the demand from our customers today.

>> I think one of the interesting trends we've seen over the last couple of years is a resurgence of interest in task-specific hardware around AI. In fact, venture capital companies invested $1.8 billion last year in AI hardware startups. And these companies are not necessarily doing CPUs or GPUs; they're doing accelerators, FPGAs, ASICs. But you have to be looking at that activity and what these companies are doing. What are you taking away from that? How does that affect your own product development plans, both on the chip side and on the system side?

>> I think the future of computing is going to be heterogeneous. Okay? I mean, a CPU solving certain types of problems, like general purpose computing, databases, big data analytics; GPUs solving problems in AI and visualization; and DPUs and FPGA accelerators offloading some of the tasks from the CPU and providing real-time performance. And of course the software optimizations are going to be critical, to stitch everything together, whether it is HPC or AI or other workloads. You know, again, as I said, heterogeneous computing is going to be the future.

>> And for us as a platform provider, heterogeneous solutions mean we have to design systems that are capable of supporting that.
So as you think about the compute power, whether it's a GPU or a CPU, continuing to push the envelope in terms of the computations, power consumption, things like that: how do we design a system that can be incredibly efficient, and also able to support the scaling to solve those complex problems? That gets into challenges around both liquid cooling and making the most out of air cooling. And so not only are we driving up the capability of these systems, we're actually improving the energy efficiency. The most recent systems that we launched around the CPU, which is still kind of at the heart of everything today, are seeing 50% improvement, gen to gen, in terms of performance-per-watt capabilities. So it's about how we package these systems in effective ways and make sure that our customers can get the advertised benefits, so to speak, of the new chip technologies.

>> Yeah, to add to that: performance, scalability, total cost of ownership, these are the key considerations, but now energy efficiency has become more important than ever, with our commitment to sustainability. One of the things that we demonstrated last week is that with our new generation of EPYC Genoa-based systems, we can do a five-to-one consolidation, significantly reducing the energy requirement.
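A rough back-of-the-envelope shows why a five-to-one consolidation matters for the power bill. All wattages and prices below are invented for illustration; they are not AMD or Dell published figures:

```python
# Hypothetical numbers for illustration only.
OLD_SERVER_WATTS = 500       # assumed draw of one legacy 2-socket server
NEW_SERVER_WATTS = 900       # assumed draw of one dense Genoa-class server
CONSOLIDATION = 5            # five old servers replaced by one new one
HOURS_PER_YEAR = 24 * 365
USD_PER_KWH = 0.12           # assumed energy price

old_kwh = CONSOLIDATION * OLD_SERVER_WATTS * HOURS_PER_YEAR / 1000
new_kwh = NEW_SERVER_WATTS * HOURS_PER_YEAR / 1000

print(f"before: {old_kwh:,.0f} kWh/yr  after: {new_kwh:,.0f} kWh/yr")
print(f"energy saved: {100 * (1 - new_kwh / old_kwh):.0f}%  "
      f"(~${(old_kwh - new_kwh) * USD_PER_KWH:,.0f}/yr at "
      f"${USD_PER_KWH}/kWh)")
```

Under these assumed numbers the consolidation cuts server energy by roughly two thirds, before even counting the reduced cooling load.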
>> Power's huge, costs are going up. It's a global issue.

>> Yeah, it is.

>> How do you squeeze more performance out of it at the same time? I mean, smaller, faster, cheaper. Paul, you wrote a story this weekend about hardware, and AI making hardware so much more important. You've got more power requirements, you've got sustainability, but you need more horsepower, more compute. What's different in the architecture, if you guys could share, today versus years ago? What's different as these generations step-function value increases?

>> So one of the major drivers, from the processor perspective, if you look at the latest generation of processors, is the five nanometer technology, bringing efficiency and density. So we are able to pack 96 processor cores per socket; in a two-socket system, we are talking about 192 processor cores. And of course other enhancements, like IPC uplift, bringing DDR5 and PCIe Gen5 to the market, offering an overall performance uplift of more than 2.5x for certain workloads. And of course significantly reducing the power footprint.

>> Also, I was just going to add, I mean, architecturally speaking, then how do we take the 96 cores and surround them, deliver a balanced ecosystem, to make sure that we can get the IO out of the system, and make sure we've got the right data storage? So I mean, you'll see 60% improvements in total storage in the system. I think in 2012 we were talking about 10 gig Ethernet. Well, now we're on to 100, with 400 on the forefront. So it's like, how do we keep up with this increased power, or computing capabilities, both offload and core computing, and make sure we've got a system that can deliver the desired (indistinct)?

>> So the little things like the bus, the PCI cards, the NICs, the connectors have to be rethought through. Is that what you're getting at?

>> Yeah, absolutely.

>> Paul: And the GPUs, which are huge power consumers.

>> Yeah, absolutely. So I mean, cooling: we introduced what we call Smart Cooling as part of our latest generation of servers. I mean, the thermal design inside of a server is a complex system, right? And doing that efficiently matters, because of course fans consume power. So I mean, yeah, those are the kinds of considerations that we have to work through, to make sure that you're not throttling performance because you're not keeping the chips at the right temperature. And, you know, ultimately, when that happens, you're hurting the productivity of the investment. So I mean, it's our responsibility to put our thought into it and deliver systems that are (indistinct).

>> You mention data too. If you bring in the data, one of the big discussions going into the big Amazon show coming up, re:Invent, is egress costs. Right? So now you've got compute, and how you design data latency and processing. It's not just contained in a machine; you've got to think about outside that machine, talking to other machines. Is there an intelligent (chuckles) network developing? I mean, what does the future look like?

>> Well, I mean, this is an area that's, you know, fun, and Dell's in a unique position to work on this problem, right? We house 70% of the mission-critical data that exists in the world. How do we bring that closer to compute? How do we deliver system-level solutions? So, server compute: recently we announced innovations around NVMe over Fabrics. So now you've got the NVMe technology in the SAN. How do we connect that more efficiently across the servers, and then guide our customers to make use of it? Those are the kinds of challenges; we're trying to unlock the value of the data by making sure we're (indistinct).

>> There are a lot of lessons learned from, you know, classic HPC, and some of the big data analytics, like the Hadoops of the world, you know, distributed processing for crunching large amounts of data.

>> With the growth of the cloud, you see some pundits saying that data centers will become obsolete in five years, and everything's going to move to the cloud. Obviously the data center market is still growing, and is projected to continue to grow. But what's the argument for captive hardware, for owning a data center these days, when the cloud offers such convenience and, allegedly, cost benefit?

>> I would say the reality, and I think the industry at large has acknowledged this, is that we're living in a multicloud world, and multicloud methods are going to be necessary, you know, to solve problems and compete. And so, in some cases, whether it's security or latency, there's a push to have things in your own data center. And then of course growth at the edge, right? I mean, that's really turning things on their head, if you will, getting data closer to where it's being generated. And so I would say we're going to live in this edge, cloud, and core data center environment, with different cloud providers providing solutions and services where it makes sense, and it's incumbent on us to figure out how we stitch together that data platform, that data layer, and help customers synthesize this data to generate the results they need.

>> You know, one of the things I want to get into on the cloud, you mentioned that, Paul, is that we see the rise of graph databases. And so is that on the radar for the AI side? Because a lot more graph data is being brought in, and the database market's incredibly robust; it's one of the key areas that people want performance out of. And as cloud native becomes the modern application development, a lot more infrastructure-as-code is happening, which means that the internet and the networks and the processes should be programmable. So graph databases have been one of those things. Have you guys done any work there? What's some data there you can share on that?

>> Yeah, actually, you know, we have worked closely with a company called TigerGraph, there in the graph database space. And we have done a couple of case studies, one on the healthcare side, and the other one on the financial side, for fraud detection. Yeah, I think this is an emerging area, and we are able to demonstrate industry-leading performance for graph databases. Very excited about it.
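A small illustration of why graph structure suits fraud detection: connect entities such as accounts and devices, then look for suspicious shapes, for example many accounts sharing one device. The sketch below uses the open source networkx library; the data and the "shared device" rule are invented for illustration and are unrelated to the TigerGraph case studies mentioned above:

```python
import networkx as nx

# Toy transaction graph: accounts linked to the devices they log in from.
G = nx.Graph()
edges = [
    ("acct:alice", "device:d1"),
    ("acct:bob",   "device:d2"),
    ("acct:carol", "device:d9"),
    ("acct:dan",   "device:d9"),
    ("acct:eve",   "device:d9"),  # d9 is shared by three accounts
]
G.add_edges_from(edges)

# Flag any device connected to 3+ accounts: a classic fraud-ring signal.
for node in G.nodes:
    if node.startswith("device:") and G.degree(node) >= 3:
        ring = sorted(G.neighbors(node))
        print(f"possible fraud ring via {node}: {ring}")
```

In a relational database the same question needs multi-way self-joins; in a graph it is a one-hop neighborhood query, which is the performance argument for graph engines on this workload.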
>> You know, one of the things I want to get into on the cloud you mentioned that Paul, is that we see the rise of graph databases. And so is that on the radar for the AI? Because a lot of more graph data is being brought in, the database market's incredibly robust. It's one of the key areas that people want performance out of. And as cloud native becomes the modern application development, a lot more infrastructure as code's happening, which means that the internet and the networks and the process should be programmable. So graph database has been one of those things. Have you guys done any work there? What's some data there you can share on that? >> Yeah, actually, you know, we have worked closely with a company called TigerGraph, there in the graph database space. And we have done a couple of case studies, one on the healthcare side, and the other one on the financial side for fraud detection. Yeah, I think they have a, this is an emerging area, and we are able to demonstrate industry leading performance for graph databases. Very excited about it. >> Yeah, it's interesting. It brings up the vertical versus horizontal applications. Where is the AI HPC kind of shining? Is it like horizontal and vertical solutions or what's, what's your vision there. >> Yeah, well, I mean, so this is a case where I'm also a user. So I own our analytics platform internally. We actually, we have a chat box for our product development organization to figure out, hey, what trends are going on with the systems that we sell, whether it's how they're being consumed or what we've sold. And we actually use graph database technology in order to power that chat box. So I'm actually in a position where I'm like, I want to get these new systems into our environment so we can deliver. >> Paul: Graphs under underlie most machine learning models. >> Yeah, Yeah. >> So we could talk about, so much to talk about in this space, so little time. And unfortunately we're out of that. So fascinating discussion. Brian Payne, Dell Technologies, Raghu Nambiar, AMD. Congratulations on the successful launch of your new chip set and the growth of, in your relationship over these past years. Thanks so much for being with us here on theCUBE. >> Super. >> Thank you much. >> It's great to be back. >> We'll be right back from SuperComputing 22 in Dallas. (upbeat music)

Published Date : Nov 16 2022



theCUBE Previews Supercomputing 22


 

(inspirational music)

>> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Control Data Corporation, CDC, designed by an engineering team led by Seymour Cray, the father of supercomputing. He left CDC in the '70s to start his own company, of course, carrying his own name. Now that company, Cray, became the market leader in the '70s and the '80s, and then the decade of the '80s saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself: Steve Chen, who worked for Cray and then went out to start his own companies; Danny Hillis, of Thinking Machines; Steve Frank of Kendall Square Research; and Steve Wallach, who tried to build a mini supercomputer at Convex. These new entrants all failed, for the most part, because the market at the time just wasn't really large enough, and the economics of these systems really weren't that attractive. Now, the late '80s and the '90s saw big Japanese companies like NEC and Fujitsu entering the fray, and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high-value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. We've seen that steadily rise to over $200 million, and today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five megatrends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is that the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, and it's probably going to continue to do so. Power has some presence, but Arm is growing very rapidly, and Nvidia with GPUs is becoming a major player with AI coming in; we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities, with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology.
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing HPC-as-a-service capabilities. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid quantum computing initiative, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show, which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front, Dave. You've got a technical background, CTO chops. What did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything. I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling, and these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving beyond compute-centricity toward connectivity-centricity. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, if you're designing an island that is, you know, the tip of the spear, it doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, so connectivity is important simply from a speeds and feeds perspective, you know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore, because with things like RDMA over Converged Ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box and looking at the NICs, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >> Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas.
They've got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security, I mean, wireless. HPC is no longer this niche. It really touches virtually every industry, or most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just a little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where we are from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like everybody is here: NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, WekaIO, Pure Storage, companies I've never heard of. It's just hundreds and hundreds of exhibitors: Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, and theCUBE has a major presence. We've got a 20 x 20 booth. So it's really, as I say, to your point, HPC going mainstream. You know, I think a lot of times we think of HPC and supercomputing as just sort of off in the eclectic, far-off corner, but really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well.
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, are all contributing to this mix, in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is a big-time nerd fest. Lots of academics will still be there. Supercomputing.org, the loose affiliation that's been running the SC events for years, has a major focus on, major hooks into, academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be a lot of techies there, a very technical computing audience, of course. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this. It's an indication that this is moving out of just the realm of academia and in the direction of enterprise, because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you've got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David at david.nicholson@siliconangle.com, John Furrier at john@siliconangle.com, or me at david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. I'll give you the last word here, Dave. >> No, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff, and I'm really going to be exploring this question of where it fits in the world of AI and ML. I think that's really going to be the center of what I'm seeking to understand when I'm there. >> All right, Dave Nicholson, thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)

Published Date: Oct 25, 2022

