Ian Buck, NVIDIA | AWS re:Invent 2021


 

>> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of accelerated computing at NVIDIA. I'm John Furrier, your host of theCUBE, and thanks for coming on. So, NVIDIA: obviously a great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows GPUs are hot, and you guys have a great brand and great success in the company. But AI and machine learning, we're seeing that trend significantly powered by GPUs and other systems, so it's a key part of everything. What are the trends you're seeing in ML and AI that are accelerating computing to the cloud?

>> Yeah, I mean, AI is driving breakthrough innovations across so many segments, so many different use cases. We see it showing up in things like credit card fraud prevention and product and content recommendations. Really, AI is the new engine behind search engines. People are applying AI to things like meeting transcriptions and virtual calls like this, using AI to actually capture what was said, and that gets applied to person-to-person interactions. We also see it in intelligent assistants for contact center automation, in chatbots, in medical imaging, and in intelligent stores and warehouses, everywhere. It's really amazing what AI has demonstrated it can do, and new use cases are showing up all the time.

>> Yeah. I'd love to get your thoughts on how the world's evolved just in the past few years along with cloud, and certainly the pandemic's proven it. You had this whole kind of full-stack mindset initially, and now you're seeing more of a horizontal scale, yet enabling vertical specialization in applications. I mean, you mentioned some of those apps, the new enablers, this kind of horizontal play with enablement for specialization, with data. This is a huge shift that's going on, and it's been happening. What's your reaction to that?

>> Yeah, it's innovation on two fronts. There's a horizontal front, which is basically the different kinds of neural networks and machine learning techniques that are being invented by researchers and the community at large, including Amazon. You know, it started with convolutional neural networks, which are great for image processing, but it has expanded more recently into recurrent neural networks and transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph is now trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to productize that and provide a platform for people to do that innovation, and then take the next step and innovate vertically. Take it and apply it to a particular field, like healthcare and medical imaging: applying AI so that radiologists can have an AI assistant with them that highlights the parts of a scan that may be troublesome or worrying, or that require more investigation. Or using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI constantly trained, reinforced, and learning how to do certain activities and techniques.
So that the first time it's ever downloaded into a real robot, it works right out of the box. To activate that, we are creating different vertical solutions, vertical stacks of products, that speak the languages of those businesses and of those users. In medical imaging, that's processing medical data, which is obviously very complicated, large-format data, often three-dimensional voxels. In robotics, it's combining both our graphics and simulation technologies with the AI training capabilities in order to run in real time.

>> Yeah, I mean, it's just so cutting-edge and so relevant. I think one of the things you mentioned about the neural networks, specifically the graph neural networks: just to go back to the late two-thousands, when unstructured data and object stores were created, a lot of people realized the value that came out of that. Now you've got graph value, you've got the graph network effect, you've got all kinds of new patterns, and you guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and AI perspective?

>> Yeah, a graph is exactly what it sounds like: you have points that are connected to each other, establishing relationships. In the example of amazon.com, you might have buyers, distributors and sellers, and all of them are buying or recommending or selling different products, and they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise more deeply across a supply chain or warehouse, or other buyers and sellers across the network. What's new right now is that those connections can be treated and trained like a neural network: understanding the relationships, how strong the connection is between that buyer and seller, or that distributor and supplier, and then building up a network to figure out and understand patterns across them. For example, what products I may like, because I have this connection in my graph, and what other products may meet those requirements. Or also identifying things like fraud, when buying patterns don't match what a graph neural network would say is the typical kind of graph connectivity. The different weights and connections between the two are captured by the frequency with which I buy things, or how I rate them or give them stars. As use cases go, this application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is very exciting: a new application of AI to optimizing business, to reducing fraud, and to letting us get access to the products that we want, having our recommendations be things that excite us.
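To make the graph-neural-network idea concrete, here is a minimal sketch of a single graph-convolution step in plain Python and NumPy. The graph, features, and weights are all toy values invented for illustration; this is not code from NVIDIA's stack.

```python
import numpy as np

# Toy graph: 4 nodes (say, two buyers and two products); edges mark
# "bought / was bought by" relationships.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=np.float32)

A_hat = A + np.eye(4, dtype=np.float32)      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # normalize by node degree
X = np.random.rand(4, 8).astype(np.float32)  # per-node feature vectors
W = np.random.rand(8, 4).astype(np.float32)  # learnable weight matrix

# One graph-convolution layer: each node averages its neighbors'
# features (the connection structure), then applies a shared, learned
# transformation. Stacking such layers is what gets trained end to end.
H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)   # ReLU(D^-1 (A+I) X W)
print(H.shape)                               # (4, 4) new node embeddings
```

Training W against labels such as fraud / not-fraud or will-buy / won't-buy is the essence of the recommendation and fraud-detection use cases described above.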
>> Great setup for the real conversation that's going on here at re:Invent, which is that new kinds of workloads are changing the game. People are refactoring their business, not just replatforming, but actually using this to identify value, and cloud scale gives you the compute power to look at a node and an arc and actually compute against it. It's all computer science, at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS?

>> Yeah, AWS has been a great partner, and one of the first cloud providers to ever offer GPUs in the cloud. More recently we've announced two new instances. There's the G5 instance, which is based on the A10G GPU and supports the NVIDIA RTX technology, our rendering technology for real-time ray tracing in graphics and game streaming. It's their highest-performance graphics instance and allows those high-performance graphics applications to be directly hosted in the cloud. And of course it runs everything else as well: it has access to our AI technology and runs all of our AI stacks. We also announced with AWS the G5g instance. This is exciting because it's the first Graviton, or Arm-based, processor connected to a GPU and available in the cloud. The focus here is Android gaming and machine learning inference, and we're excited to see the advancements that Amazon and AWS are making with Arm in the cloud. We're glad to be part of that journey.

>> Well, congratulations. I remember, I was just watching my interviews with James Hamilton from AWS in 2013 and 2014; he was teasing this out, that they were going to get in there, build their own silicon and their own connections, take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces and new servers, the new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, it's also enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question?

>> Well, first off, I think Arm is here to stay. You can see the growth and explosion of Arm, led of course by Graviton at AWS, and by many others. And by bringing all of NVIDIA's rendering, graphics, machine learning and AI technologies to Arm, we can help bring forward the innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem, and help bring it to the state of the art in AI, machine learning and graphics. All the software that we release is supported equally on both x86 and Arm, including all of our AI stacks. Most notably for inference, the deployment of AI models, we have the NVIDIA Triton Inference Server. This is our inference serving software: after you've trained a model, you want to deploy it at scale, on any CPU or GPU instance for that matter, so we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations all the time, features like dynamic batching. It supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're activating the Arm ecosystem as well as bringing all those new AI use cases, at all those different performance levels, with our partnership with AWS and all the different clouds.
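As a concrete illustration of the deployment flow described above, here is a minimal client request against a running Triton Inference Server, using NVIDIA's open-source tritonclient Python package. The server URL, model name, and tensor names are placeholders that depend on how the model repository is configured; they are assumptions for this sketch.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a Triton server is already serving a model named "resnet50"
# on localhost:8000; model and tensor names below are placeholders.
client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)

# Triton can queue and batch requests server-side (dynamic batching),
# so many small clients like this one can share a GPU efficiently.
result = client.infer(model_name="resnet50", inputs=[inp])
print(result.as_numpy("output__0").shape)
```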
>> And you're making it really easy for people to use the technology, which brings up the next question I want to ask you. A lot of people are really jumping in big time into this. They're adopting AI, they're moving from prototype to production. There are always some gaps, whether it's knowledge gaps, skills gaps or whatever, but people are accelerating into AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible, for people to move faster through the process?

>> Yeah, it's one of the biggest challenges. The promise of AI, with all the publications coming out of the research world: how can you make it more accessible and easier to use by more people, rather than just AI researchers? That's obviously a very challenging and interesting field, but not one that's directly in the business. NVIDIA is taking a full-stack approach to AI. As we discover or see these AI technologies become available, we produce SDKs to help activate them and connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming to design, to life sciences, to earth sciences; we even have ones to help simulate quantum computing, and of course all the work we're doing with AI, 5G and robotics. We actually just introduced about 65 new updates on all those SDKs this past month. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of human language, models trained on literally the content of the internet, to provide general-purpose or open-domain chatbots, so that customers are going to have a new kind of experience with a computer, or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology.

>> You know, each and every time I do an interview with NVIDIA or talk about NVIDIA, my kids and their friends, the first thing they say is, can you get me a good graphics card? They want the best thing in their rig. Obviously the gaming market's hot and you're known for that, but there's a huge software team behind NVIDIA. This is well known; your CEO is always talking about it in his keynotes. You're in the software business, and you do have hardware, you're integrating with Graviton and other things, but it's software practices; this is all about software. Could you share more about the NVIDIA software culture, the cloud culture, and specifically around scale? I mean, you hit every use case. What's the software culture there at NVIDIA?
>> It is actually bigger: we have more software people than hardware people, and people don't often realize this. It just starts with the chip. Obviously building great silicon is necessary to provide that level of innovation, but it has expanded dramatically from there: not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves to help build out this infrastructure; we consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then all the software that we build on top we make available, as I mentioned before, as containers on NGC, our container registry, to connect with those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, which they can with the low-level programmatic stacks that we provide with CUDA, we believe those vertical stacks are the way we can help accelerate and advance AI, and that's why we make them as well.

>> And a little software makes it so much easier; I wanted to get that plug in, because I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there: what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing?

>> Yeah. For training, it's all about time to solution. It's not the hardware that's the cost; it's the opportunity that AI can provide your business, and the productivity of the data scientists who develop it, who are not easy to come by. So what we hear from customers is that they need a fast time to solution, to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course move on to the next one, or continue to refine it, often. So in training, it's time to solution. For inference, it's about your ability to deploy at scale. Often people have real-time requirements: they want to run within a certain amount of latency, a certain amount of time. And typically most companies don't have a single AI model; they have a collection of them that they want to run for a single service, or across multiple services. That's where you can aggregate some of your infrastructure: leveraging the Triton Inference Server I mentioned before, you can actually run multiple models on a single GPU, saving costs and optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that your customers have a good interaction with the AI.
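Since inference success is framed above in terms of latency targets, here is one generic way to sanity-check them: a small Python sketch (not an NVIDIA tool) that measures tail latency for any inference callable, such as a wrapped Triton client call.

```python
import time
import numpy as np

def latency_profile(infer_fn, warmup=20, iters=200):
    """Return p50/p95/p99 latency in ms for a zero-argument callable."""
    for _ in range(warmup):
        infer_fn()                  # exclude one-time setup costs
    samples_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer_fn()
        samples_ms.append((time.perf_counter() - t0) * 1e3)
    p50, p95, p99 = np.percentile(samples_ms, [50, 95, 99])
    return {"p50_ms": p50, "p95_ms": p95, "p99_ms": p99}

# Example with a stand-in workload in place of a real model call:
print(latency_profile(lambda: sum(range(10000))))
```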
>> Awesome. Let's get into the customer examples; you guys obviously have great customers. Can you share some of the use cases, examples with notable customers?

>> Yeah. One great part about working at NVIDIA, as a technology company, is that you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a cloud content creation platform. In the energy field, Siemens Energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, optimizing onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We've worked with Oxford University, which actually has over 20 million artifacts, specimens and collections across its gardens, museums and libraries. They're actually using NVIDIA GPUs on Amazon to do enhanced image recognition to classify all of these things, which would take literally years going through each of these artifacts manually. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology, together with AWS.

>> Ian, thank you so much for coming on theCUBE, I really appreciate it; a lot of great content there. We could probably go another hour with all the great stuff going on at NVIDIA. Any closing remarks you want to share as we wrap this last minute up?

>> Really, what NVIDIA is about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation. AWS was one of the first with us in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities: integrations with SageMaker, EKS and ECS, and the new instances with G5 and G5g. I'm very excited to see all the work that we're doing together.

>> Ian Buck, general manager and vice president of accelerated computing. I mean, how can you not love that title? We want more power, faster, come on, more computing. No one's going to complain about more computing. Thanks for coming on.

>> Thank you, appreciate it.

>> I'm John Furrier, host of theCUBE. You're watching our coverage of AWS re:Invent 2021. Thanks for watching.

Published Date : Nov 30 2021


Paul Perez, Dell Technologies and Kit Colbert, VMware | Dell Technologies World 2020


 

>> Narrator: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World Digital Experience. Brought to you by Dell Technologies.

>> Hey, welcome back, everybody. Jeff here with theCUBE, coming to you from our Palo Alto studios with continuing coverage of Dell Technologies World 2020, the Digital Experience. We've been covering this event for over 10 years. It's virtual this year, but there's still a lot of great content, a lot of great announcements, and a lot of technology being released and talked about, so we're excited. We're going to dig a little deep with our next two guests. First of all, we have Paul Perez; he is the SVP and CTO of the Infrastructure Solutions Group at Dell Technologies. Paul, great to see you. Where are you coming in from today?

>> Austin, Texas.

>> Austin, Texas, awesome. And joining him, returning to theCUBE as he has many times, Kit Colbert. He is the vice president and CTO of VMware Cloud at VMware. Kit, great to see you as well. Where are you joining us from?

>> Yeah, thanks for having me again. I'm here in San Francisco.

>> Awesome. So let's jump into it and talk about Project Monterey. You know, it's funny, I was at Intel back in the day, and all of our project code names used to get out and become like the product names. It's funny how these little internal project names get a life of their own, and this is a big one. We had Pat Gelsinger of VMware on a few weeks back, talking about how significant this is and this evolution within the VMware cloud development: past Kubernetes, past Tanzu, past Project Pacific, and now we're into Project Monterey. So first off, let's start with Kit. Give us kind of the basic overview: what is Project Monterey?

>> Yep. Yeah, well, you're absolutely right. What we did last year was announce Project Pacific, which was really a fundamental rethinking of VMware Cloud Foundation with Kubernetes built in, right? Kubernetes is still a core part of the architecture, and the idea there was really to better support modern applications, to enable developers and IT operations to come together and work collaboratively toward modernizing a company's application fleet. And as you look at companies starting to be successful and starting to run these modern applications, what you found is that the hardware architecture itself needed to evolve, needed to update, to support all the new requirements brought on by these modern apps. And so when you're looking at Project Monterey, it's exactly that: a rethinking of the underlying hardware architecture of VMware Cloud Foundation. If you think about it, Project Pacific is really the top half, if you will, the Kubernetes consumption experiences, great for applications. Project Monterey comes along as the second step in that journey, really being the bottom half: fundamentally rethinking the hardware architecture and leveraging SmartNIC technology to do that.

>> It's pretty interesting, Paul. There's a great shift in this whole move from infrastructure driving applications to applications driving infrastructure, and then we're seeing, obviously, the big move with big data.
And again, I think as Pat talked about in his interview, with NVIDIA it's about being at the right time, at the right place, with the right technology, and this kind of groundswell of GPUs, and now DPUs, helping to move those workloads beyond just where the CPU used to do all the work. This is taking it to another level. You guys are the hardware guys and the solutions guys. As you look at this continuing evolution, both of workloads as well as the infrastructure, how does this fit in?

>> Yeah, well, how this all fits in is that modern applications and modern workloads require a modern infrastructure, right? And Kit was talking about the infrastructure overlay that VMware is awesome at. I was coming at this from the emerging data-centric workloads and some of the implications of that, including the proliferation and diversity of silicon being used for computing, and the need for the flexibility to be able to combine resources together, as opposed to trying to shoehorn something into a mechanical chassis. And if you do disaggregate, you have to be able to compose on demand. And when we started comparing notes, we realized that we were converging on a common trajectory, and we started to team up and partner.

>> So it's interesting, because part of the composable philosophy, if you will, is to break the components of compute, storage and networking down into as small pieces as possible, and then assemble the right amount when you need it to attack a particular problem. But you're talking about a whole different level of bringing the right hardware to bear for the solution. When you talk about SmartNICs, and you talk about GPUs and DPUs, data processing units, and even FPGAs and some of these other things, you're now starting to offload a lot of work from the core CPU to these more appropriate devices. That said, how do people make sure that the right application ends up on the right infrastructure, using more of a Monterey-based solution versus a more traditional one, depending on the workload? How is that going to get sorted out and routed within the actual cloud infrastructure itself? That's probably back to you, Kit.

>> Yeah, sure. So I think it's important to understand what a SmartNIC is and how it works in order to answer that question, because what we're really doing, to jump right to it, is giving an API into the infrastructure, and this is how we're able to do all the things that you just mentioned. So what is a SmartNIC? Well, a SmartNIC is essentially a NIC with a general-purpose CPU on it, really a whole CPU complex, in fact kind of a whole system, right there on that NIC. And so what that enables is a bunch of great things. First of all, to your point, we can do a lot of offload.
And so the NIC is actually a perfect place to place all of these functionalities, right? You can not only move it off the core server CPU, but you can get a lot better performance cause you're now right there on the data path. So I think that's the first really key point is that you can get that offload, but then once you have all of that functionality there, then you can start doing some really amazing things. And this ability to expose additional virtual devices onto the PCI bus, this is another great capability of a SmartNic. So when you plug it in physically into the motherboard, it's a Nic, right. You can see that. And when it starts up, it looks like a Nic to the motherboard, to the system, but then via software, you can have it expose additional devices. It could look like a storage controller, or it could look like an FPGA look really any sort of device. And you can do that. Not only for the local machine where it's plugged in, but potentially remote machines as well with the right sorts of interconnects. So what this creates is a whole new sort of cluster architecture. And that's why we're really so excited about it because you got all these great benefits in terms of offload performance improvement, security improvement, but then you get this great ability to get very dynamic, just aggregation. And composability. >> So Kit, how much of it is the routing of the workload to the right place, right? That's got the right amount of say, it's a super data intensive once a lot of GPU versus actually better executing the operation. Once it gets to the place where it's going to run. >> Yeah. It's a bit of a combination actually. So the powerful thing about it is that in a traditional world, where are you want an application? You know, the server that you run it, that app can really only use the local devices there. Yes, there is some newer stuff like NVMe over fabric where you can remote certain types of storage capabilities, but there's no real general purpose solution to that. Yet that generally speaking, that application is limited to the local hardware devices. Well, the great part about what we're doing with Monterey and with the SmartNic technology is that we can now dynamically remote or expose remote devices from other hosts. And so wherever that application runs matters a little bit less now, in a sense that we can give it the right sorts of hardware it needs in order to operate. You know, if you have, let's say a few machines with a FPGA is normally if you have needed that a Fiji had to run locally, but now can actually run remotely and you can better balance out things like compute requirements versus, you know, specialized Accella Requirements. And so I think what we're looking at is, especially in the context of VMware cloud foundation, is bringing that all together. We can look through the scheduling, figure out what the best host for it to let run on based on all these considerations. And that's it, we are missing, let's say a physical device that needs, well, we can remote that and sort of a deal at that, a missing gap there. >> Right, right. That's great. Paul, I want to go back to you. You just talked about, you know, kind of coming at this problem from a data centric point of view, and you're running infrastructure and you're the poor guy that's got to catch all the ASAM Todd i the giant exponential curves up into the right on the data flow and the data quantity. 
How is that impacting the way you think about infrastructure, designing infrastructure, changing infrastructure, and kind of future-proofing infrastructure, when, you know, just around the corner there's 5G and IoT and, oh, you ain't seen nothing yet in terms of the data flow?

>> Yeah, so I come at this from two angles. One, which we talked about briefly, is the evolution of the workloads themselves. The other angle, which is just as important, is the operating model that customers are wanting to evolve to. And in that context, we thought a lot about how cloud is an operating model, not necessarily a destination, right? So the way we laid it out, and what Kit was talking about, is that in data center computing, you have an operational control plane and a data plane. Where does the data plane run for the optimized solution? GPUs, FPGAs, offload engines. And the control plane can run on something like an Arm core; when I'm thinking about SmartNICs, those cards have Arm cores on board, so you can implement some data plane and some control plane there, and they can also be the gateway. Because, you know, you've talked about composability: what has been done up until now is only a first sprint, right? We're carving software-defined infrastructure out of predefined hardware blocks. What we're talking about is making GPUs resident on a fabric, persistent memory resident on a fabric, NVMe over fabrics, and being able to tile computing topologies on demand to realize an application's intent. And we call that intent-based computing.
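To put the intent-based idea in the simplest possible terms, here is a toy Python sketch of carving a logical node out of disaggregated resource pools on demand. Every name here is hypothetical; this is not a Dell or VMware API, just an illustration of the compose-and-release life cycle.

```python
# Hypothetical shared pools in a disaggregated rack.
pools = {"gpu": 8, "fpga": 2, "cpu_cores": 256, "nvme_tb": 100}

def compose(intent):
    # Realize an application's intent by carving resources out of the
    # shared pools; fail if the intent cannot currently be satisfied.
    if any(pools.get(r, 0) < n for r, n in intent.items()):
        return None
    for r, n in intent.items():
        pools[r] -= n
    return dict(intent)

def release(node):
    # Hand a composed node's resources back to the shared pools.
    for r, n in node.items():
        pools[r] += n

node = compose({"gpu": 2, "fpga": 1, "cpu_cores": 16})
print(node, pools)  # resources reserved for the workload's lifetime
release(node)
```

Real systems layer placement, fabric attachment, and security on top, but this compose-and-release cycle is the core of the operating model being described.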
>> Right. Well, just to follow up on that: as cloud is an attitude or an operating model, not necessarily a place or a thing, as you say, how has that changed your infrastructure approach? Because you've got to support old-school, good old data centers, you've got some stuff running on public clouds, and now you've got hybrid clouds and multi-clouds, right? We know, from being out in the field, that people have workloads running all over the place, but they've got to control it, and they've got compliance issues and a whole bunch of other stuff. So from your point of view, as you see the desire for more flexibility, the desire for more infrastructure-centric support for the workloads, and the increasing amount of those that are more data-centric as we move to, hopefully, more data-driven decisions, how has it changed your strategy? And what does it mean to partner and have a real, formal relationship with the folks over at VMware?

>> Well, I think that regardless of how big a company is, it's always prudent, as I say when I approach my job, to remember that architecture is about balance and efficiency, and about reducing contention. And we like to leverage industry R&D, especially in cases where one plus one equals two, right? In the case of Project Monterey, for example, one of the collaboration areas is in improving the security model and being able to provide more air-gapped isolation, especially when you consider that enterprise IT wants to behave as a service provider to their companies, and therefore this is important. And because of that, I think there are a lot of things we can do between VMware and Dell, blending hardware and, for example, assets like NSX, in a different way that will give customers higher scalability and performance and more control. Beyond VMware and Dell EMC, I think we're partnering with, obviously, the SmartNIC vendors, because SmartNICs are the gateway to those solutions, but also with companies that are innovating in data center computing, for example NVIDIA.

>> Right. Right.

>> And I think what we're seeing is, while NVIDIA has done an awesome job of targeting their capability at AI/ML types of workloads, what we realize is that applications today depend on platform services, right? And up until recently, those platform services have been databases, messaging, APIs, Active Directory. Moving forward, I think that within five years, most applications will depend on some form of AI/ML service. So I can see an opportunity to go mainstream with this.

>> Right. Right. Well, it's great you bring up NVIDIA, and I'm just going to quote one of Pat's lines from his interview. He talked about Jensen from NVIDIA actually telling Pat, "Hey Pat, I think you're thinking too small." I love it. You know, let's do the entire AI landscape together, and make AI and enterprise-class workloads, through Tanzu, first-class citizens. And the other piece obviously coming up is edge. So it's a nice shot of adrenaline. Kit, I wonder if you can share your thoughts on that, on saying, hey, let's take it up a notch, a significant notch, by leveraging a whole other class of compute power within these solutions.

>> Yeah, I mean, I'll go real quick. It's funny, because not many people ever really challenge Pat by saying he doesn't think big enough; usually he's blown us away with what he wants to do next. But I think it's good, it's good to keep us on our toes and push us a bit, right, all of us within the industry. And so I think, to go back to your previous point around cloud as a model: that's exactly what we're doing, trying to bring cloud as a model even on-prem, and it's a lot of these core hardware architecture capabilities that enable that, the biggest one in my mind being enabling an API into the hardware, so the applications can get what they need. And going back to Paul's point, this notion of these AI and ML services: they have to be rooted in the hardware, right? We know that in order for them to be performant, for them to run and support what our customers want to do, we need to have that deeply integrated into the hardware, all the way up. But that also becomes a software problem: once we've got the hardware solved, once we get that architecture locked in, how can we, as easily and as seamlessly as possible, deliver all those great software capabilities? And so you look at what we've done with the NVIDIA partnership, things around the NVIDIA GPU Cloud, and really bringing that to bear.
And so then you start having this really great full-stack integration, all the way from very powerful hardware architecture that, again, is driven by API, to the infrastructure software on top of that, and then all these great AI tools, toolchains and capabilities with things like the NVIDIA NGC. So that's really, I think, where the vision is going. We've got a lot of the basic parts there, but obviously a lot more work to do going forward.

>> I would say that, you know, initially we had a dream, and we wanted this journey to happen very fast. Initially we're offloading infrastructure services, so there's no contention with customers' full workload applications, and also enabling how productive it is to get at the data; over time, you have to have sufficient control over a wide area, and there's an opportunity to do something like that. You think about the progression from bare metal to VMs (conversation fading); environments are way more dynamic and more spreadable, right, and they expect hardware that can be just as dynamic and composable to suit their needs. And I think that's where we're headed.

>> Right. So let me throw a monkey wrench in, in terms of security, right? Now this thing is much more flexible, much more software-defined. How is that changing the way you think about security, basic security, throughout the stack? I'll go to you first, Kit.

>> Yeah. So it actually enables a lot of really powerful things. First of all, from an architecture and implementation standpoint, you have to understand that we're really running two copies of ESXi on each physical server. We've got the one running on the x86 side, just like normal, and now we've got one running on the SmartNIC as well. And so, as I mentioned before, we can move a lot of that networking, security, et cetera, functionality off to the SmartNIC. And so what this is going toward is what we call a zero-trust security architecture, this notion of having real defense in depth, at many different layers and in many different areas. While obviously the hypervisor and the virtualization layer provide a really strong level of security, even when we were doing it completely on the x86 side, now that we're running on a SmartNIC, that's additional defense in depth, because the x86 ESXi doesn't have direct access to the ESXi running on the SmartNIC; the ESXi running on the SmartNIC can be in this kind of more well-defended position. Moreover, now that we're running the security functionality directly on the data path in the SmartNIC, we can do a lot more with it: we can run much deeper analysis, and, talking about AI and ML, bring a lot of those capabilities to bear here to actually improve the security profile. And finally, I'd say there's this notion of distributed security as well: you don't want to have just these individual chokepoints on the physical network; you actually distribute the security policies and enforcement to everywhere a server is running, everywhere a SmartNIC is, and that's what we can do here. So it really takes a lot of what we've been doing with things like NSX, but now connects it much more deeply into hardware, allowing for better performance and security.

>> A common attack method is to intercept the boot of the physical server.
And, you know, I'm actually very proud of our team, because the U.S. National Security Agency recently published a white paper on best practices for secure boot, and they take our implementation of secure boot as the reference standard. Moving forward, imagine an environment where, even if you gain control of the server, it doesn't allow you to change the BIOS or update it. So we're moving the root of trust into that air-gapped domain that Kit talked about, and that gives us way more capability for zero trust across the operations, right?

>> Right, right. Paul, I've got to ask you: I had Sam Burd on the other day, your peer who runs the PC group.

>> I'm telling you, he is not a peer. He's a little bit higher up.

>> Higher than you, okay. Well, I just promoted you, so that's okay. But it's really interesting, because we were talking about the death-of-the-PC article that came out literally like 10 years ago, when Apple introduced the tablet, and he talked about what phenomenal devices PCs continue to be and how they evolve. And it's just funny how that now dovetails with this whole edge conversation: people don't necessarily think of a PC as a piece of the edge, but it is a great piece of the edge. So from an infrastructure point of view, to have that kind of presence within the PCs, and potentially that intelligence, is a whole other layer of interaction with the users, and an opportunity to define how they work with applications and prioritize applications. I just wonder if you can share how nice it is to have that in your back pocket, to know that you've got a whole other layer of visibility and connection with the users, beyond just simply the infrastructure.

>> So actually, within the company we've developed a framework that we call core, edge, multi-cloud: core data centers, the enterprise edge and IoT, and then off-premises; it is a multi-cloud world. And within that framework, we consider our Client Solutions Group products to be part of the edge, and we see a lot of benefit. I'll give an example of a healthcare company that wants to develop real-time analytics, regardless of whether they run on a laptop or maybe move into a backend data center, right, whether it's at a hospital clinic or a patient's home. It gives us a broader innovation surface. And a lot of people may not appreciate that one of the most important functions within the company, I consider to be the experience design team: being able to design user flows and customer experience, looking at all of this as a variable.

>> That's great. So we're running out of time, and I want to give you each the last word. You've both been in this business for a long time, and this is brand-new stuff, right? Containers aren't new, Kubernetes is still relatively new and exciting, Project Pacific was relatively new, and now Project Monterey. But you guys are multi-decade veterans in this thing. As you look forward, what does this moment represent compared to some of the other shifts that we've seen in IT, generally, in the consumption of compute and this application-centric world that just continues to grow? I mean, software is eating everything, we know it, you guys live it every day. Where are we now, and what do you see?
Maybe I don't want to go too far out, but the next couple of years within the Monterey framework, and then if you have something else, generally, you can add that as well. Paul, why don't we start with you?

>> Well, I think on a personal level, modesty aside, I have a long string of very successful endeavors in my career. When I came back a couple of years ago, one of the things that I told Jeff, our vice chairman, is that this is a big canvas, and I intend to paint my masterpiece. And I think Monterey, and what we're doing in support of Monterey, is part of that. I think you will see our initial approach focus on the core data center. I can tell you that we know how to extend it, and we know also how to express it even in a multi-cloud world. So I'm very excited, and I know that I'm going to be busy for the next few years. (giggling)

>> And Kit, to you.

>> Yeah. So, you know, it's funny: you talk to people about SmartNICs, especially folks that have been around for a while, and what you hear is, hey, people were talking about SmartNICs 10 years ago, 20 years ago, that sort of thing, and then they kind of died off. So what's different now? And I think the big difference now is a few things. First of all, the core technology itself has dramatically improved. We now have a powerful software infrastructure layer that can take advantage of it. And finally, applications have a really strong need for it, again, with all the things we've talked about, the need for offload. So I think there are some real, fundamental shifts that have happened over the past, let's say, decade that have driven the need for this. And so this is something that I believe strongly is here to last. Both ourselves at VMware, as well as Dell, are making a huge bet on this, and not only is it good for customers, it's actually good for all the operators as well. So whether this is part of VCF that we deliver to customers for them to operate themselves, just like they always have, or it's part of our own cloud solutions, things like VMware Cloud on Dell EMC, this is going to be a core part of how we deliver our cloud services and infrastructure going forward. So we really do believe this is a foundational transition that's taking place, and as we talked about, there's a ton of additional innovation that's going to come out of it. I'm really, really excited for the next few years, because I think we're just at the start of a very long and very exciting journey.

>> Awesome. Well, thank you both for spending some time with us and sharing the story, and congratulations; I'm sure a whole bunch of work from a whole bunch of people went into getting where you are now, and as you said, Paul, the work has barely begun. So thanks again. All right, he's Paul, he's Kit, I'm Jeff. You're watching theCUBE's continuing coverage of Dell Technologies World 2020, the Digital Experience. Thanks for watching. We'll see you next time. (Upbeat music)

Published Date : Oct 21 2020


Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020


 

>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE.

>> Hi, I'm Stu Miniman, and this is theCUBE's coverage of HPE Discover, the Virtual Experience for 2020, where we get to talk to HPE executives, their partners and the ecosystem, wherever they are around the globe. This session we're going to be digging into artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from NVIDIA sitting in the windows next to me. We have Paresh Kharya, he's director of product marketing, and sitting next to him in the virtual environment is Kevin Deierling, who is the senior vice president of marketing; as I mentioned, both with NVIDIA. Thank you both so much for joining us.

>> Thank you, so great to be here.

>> Great to be here.

>> All right, so Paresh, why don't you set the stage for us? AI is obviously one of those megatrends we talk about, but give us the state of things: where NVIDIA sits, where the market is, and where your customers are today as they think about AI.

>> Yeah, so we are basically witnessing massive changes happening across every industry, and it's basically the confluence of three things. One is of course AI, the second is 5G and IoT, and the third is the ability to process all of the data that we have, which is now possible. For AI, we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IoT and 5G, there are billions of devices that are sensing and inferring information, and now we have the ability to act and make decisions in various industries. And finally, with all of the processing capabilities that we have today, at the data center and in the cloud as well as at the edge, with GPUs as well as the advanced networking that's available, we can now make sense of all of this data to help industrial transformation.

>> Yeah. Kevin, you know, it's interesting: when you look at some of these waves of technology, we say, okay, there's a lot of new pieces here. You talk about 5G, it's the next generation, but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about what we've done for high-performance computing for a long time; obviously Mellanox, where you came from through NVIDIA's acquisition, has a strong play in that environment. So maybe give us a little compare and contrast: what's the same and what's different about this highly distributed edge-compute, AI, IoT environment versus what we were doing with HPC in the past?

>> Yeah, so Mellanox has now been a part of NVIDIA for a little over a month, and it's great to be part of that. We were both focused on accelerated computing and high-performance computing, and what that means is that the scale and the type of problems we're trying to solve are simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers, and Jensen talked about this recently at the GTC keynote, where he said that the new unit of computing is really the data center. It's no longer the box that sits on your desk, or even in a rack; it's the entire data center, because that's the scale of the types of problems that we're solving. And so with the notion of scale-up and scale-out, the network becomes really, really critical, and we've been doing high-performance networking for a long time.
When you move to the edge, instead of having a single data center with 10,000 computers, you have 10,000 data centers, each of which has a small number of servers processing all of the information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing or cloud computing. And so we're excited to be part of bringing together the AI and the networking, because we are really optimizing at data-center scale, across the entire stack.

>> All right, so it's interesting. You mentioned NVIDIA CEO Jensen. I believe, if I saw it right, he actually coined a term which I had not run across before: the data processing unit, or DPU, in that data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPUs; when I think about GPUs, I obviously think of NVIDIA; there are TPUs in the cloud and everything we're doing. So what are DPUs? Is this just some new AI thing, or is this kind of a new architectural model?

>> Yeah, I think what Jensen highlighted is that there are three key elements of this accelerated, disaggregated infrastructure that the data center is becoming. And so there's the CPU, which is doing traditional single-threaded workloads, but for all of the accelerated workloads you need the GPU; that does massive parallelism and deals with massive amounts of data. But to get that data into the GPU, and also into the CPU, you really need intelligent data processing, because with the scale and scope of GPUs and CPUs today, these are not single-core entities; these are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place; you need to do it securely, you need to do it virtualized, you need to do it with containers. And to do all of that, you need a programmable data processing unit. So we have something called BlueField, which combines our latest, greatest 100-gig and 200-gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines, which are the GPUs, and of course the CPU, which really handles the application-layer workload for the non-accelerated parts.

>> Great. So Paresh, Kevin talked about needing similar types of services wherever the data is. I was wondering if you could really help expand for us a little bit the implications of AI at the edge.

>> Sure. Yeah, so AI is basically not just one workload.
>> Great, so Paresh, Kevin talked about needing similar types of services wherever the data is. I was wondering if you could help expand for us a little bit on the implications of AI at the edge.

>> Sure, yeah. So AI is basically not just one workload. AI is many different types of models, and AI also means training as well as inference, which are very different workloads. For AI training, for example, we are seeing the models grow exponentially. Think of an AI model as like a brain solving a particular use case: for simple use cases like computer vision we have models that are smaller, but advanced models, like those for natural language processing, require larger brains, or larger models. So on one hand we are seeing the size of AI models increasing tremendously, and in order to train these models you need to look at computing at the scale of the data center, with many processors and many different servers working together to train a single model. On the other hand, these AI models are so accurate and efficient today, from understanding languages, to speaking languages, to providing the right recommendations, whether for products, for content you may want to consume, or for advertisements, that applications everywhere are being powered by AI, and each application requires a small amount of acceleration. So you also need the ability to scale out and support many different applications. With our newly launched Ampere architecture, which Jensen announced in the virtual keynote just a couple of weeks ago, we are now able to provide both scale up and scale out, for training, data analytics, as well as inference, on a single architecture, and that's very exciting.

>> Yeah, and the other thing that's interesting, talking about the edge and scale out versus scale up, is that the networking is critical for both of those. There are a lot of different workloads, and as Paresh was describing, different workloads require different amounts of GPU, storage, or networking. So part of the vision of the data center as the computer is that the DPU lets you scale everything independently: you disaggregate into DPUs and storage and CPUs, and then you compose, on the fly, exactly the computer that you need, in a container, to solve the problem you're solving right now. This new way of programming is programming the entire data center at once. You grab all of it, it runs for maybe a few hundred milliseconds, and then it comes back down and recomposes itself for the next job. And to do that, you need this very highly efficient networking infrastructure. The good news is we're here at HPE Discover, and we've got a great partner in HPE. They have our M series switches, which use the Mellanox hundred gig, and now even 200 and 400 gig, Ethernet switches; we have all of our adapters; and they have great platforms. The Apollo platform, for example, is great for HPC, and they have other great platforms that we're looking at for the new telco work that we're doing around 5G and accelerating that.
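Kevin's "compose exactly the computer you need on the fly" is easiest to picture as a resource-pool allocation problem. The sketch below is purely conceptual; real composable infrastructure does this through the fabric and a management plane, not a Python dictionary, and the pool sizes are invented for illustration:

```python
# Conceptual sketch of composable, disaggregated infrastructure.
# Pools of devices are carved into short-lived logical machines sized
# to one job, then returned to the pool when the job finishes.
class Pool:
    def __init__(self, **resources):
        self.free = dict(resources)  # e.g. {"cpu": 256, "gpu": 64, "dpu": 16}

    def compose(self, **request):
        # Validate the whole request before taking anything, so a
        # partial grab never strands resources.
        if any(self.free.get(kind, 0) < n for kind, n in request.items()):
            raise RuntimeError(f"cannot compose {request}")
        for kind, n in request.items():
            self.free[kind] -= n
        return dict(request)  # the "logical computer"

    def release(self, machine):
        for kind, n in machine.items():
            self.free[kind] += n

pool = Pool(cpu=256, gpu=64, dpu=16)
node = pool.compose(cpu=8, gpu=4, dpu=1)  # composed for one short job
# ... run the few-hundred-millisecond workload here ...
pool.release(node)                        # recompose for the next workload
print(pool.free)                          # all resources back in the pool
```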
>> Yeah, and on the edge computing side, there's the Edgeline set of products, which are very interesting. The other aspect I wanted to touch upon is the whole software stack that's needed for the edge. The edge is different in the sense that it's not centrally managed: edge computing devices are distributed across remote locations, so managing the workflow of running and updating software on them is important, and it needs to be done in a very secure manner. The second thing that's very different at the edge is that these devices are going to require connectivity. As Kevin was pointing out with the importance of networking, we also announced, a couple of weeks ago at our GTC, our EGX product, which combines the Mellanox NIC and our GPU into a single processor. The Mellanox NIC provides fast connectivity, security, and encryption and decryption capabilities, while the GPU provides the acceleration to run the advanced AI models required for applications at the edge.

>> Okay, and if I understood that right, you've got these throughout the HPE product line. HPE's got a long history of making flexible configurations. I remember when they first came out with a blade server: different form factors, different connectivity options, and they pushed heavily into composable infrastructure. So it sounds like this is just kind of extending what HP has been doing for a couple of decades.

>> Yeah, I think HP is a great partner there, and for these new platforms, like the just-announced EGX, a great workload is 5G telco, so we'll be working with our friends at HPE to take that to market as well. And really, there are a lot of different workloads, and they've got a great portfolio of products across the spectrum, from regular 1U and 2U servers all the way up to their big Apollo platform.

>> Well, I'm glad you brought up telco. I'm curious, are there any specific applications or workloads that were the low hanging fruit, the first targets for AI acceleration?

>> Yeah, so the 5G workload is just awesome. With EGX we introduced a new platform called Aerial, which is a programming framework, and there were lots of partners that were part of that, including folks like Ericsson. The idea is that you have a software-defined, hardware-accelerated radio access network, a cloud RAN, and it really has all of the right attributes of the cloud. What's nice is that you can now change, on the fly, the algorithms you're using for the baseband codecs, without having to climb a radio tower and change the physical infrastructure. So that's a critical part. Our role in that, on the networking side, is a technology we introduced as part of EGX in our ConnectX adapters; it's called 5T for 5G. One of the things you need there is time-triggered transport, a telco technology; those are the five T's in 5T for 5G. The reason is that you're doing distributed baseband, distributed radio processing, and the timing between each of those server nodes needs to be super precise, down to 20 nanoseconds. That's something that simply can't be done in software, so we did it in hardware. Instead of an expensive FPGA trying to synchronize all of these boxes together, we put it into our NIC, and now that goes into industry-standard servers; HP has some fantastic servers. And with the EGX platform, we can build a really scaled-out, software-defined cloud RAN.
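The 20-nanosecond figure is the crux: general-purpose software simply cannot timestamp that precisely. A quick, hedged Python experiment (numbers will vary by machine and OS) shows why; even reading the clock back-to-back in user space jitters by far more than 20 ns, which is what pushes the timestamping into NIC hardware:

```python
# Measure the jitter of back-to-back clock reads in user space.
# If the spread here already exceeds 20 ns, purely software-based
# synchronization of distributed baseband units is out of reach.
import time
import statistics

def clock_deltas(n=100_000):
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        deltas.append(now - prev)
        prev = now
    return deltas

d = sorted(clock_deltas())
print(f"median delta : {statistics.median(d)} ns")
print(f"p99 delta    : {d[int(len(d) * 0.99)]} ns")
print(f"max delta    : {d[-1]} ns")  # scheduler hiccups show up here
```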
>> Awesome. Paresh, anything else on the application side you'd like to add to what Kevin spoke about?

>> Oh yeah, from the application perspective, every industry has applications that touch the edge. If you take a look at retail, for example, there's everything from supply chain, to inventory management, to keeping the right stock units on the shelves and making sure there is no slippage or shrinkage. In telecom and healthcare, we are looking at constantly monitoring patients and taking actions for the best outcomes. In manufacturing, we are looking to automate production, detecting failures much earlier in the production cycle, and so on. Every industry has different applications, but they all use AI, and they can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes.

>> All right. Well, it's interesting: almost every time we've talked about AI, networking has come up. So, Kevin, I think that probably explains a little bit why Nvidia spent around $7 billion on the acquisition of Mellanox, and not only the Mellanox acquisition but also Cumulus Networks, very well known in the network space for a software-defined operating system for networking. Strategically, does this change the direction of Nvidia? How should we be thinking about Nvidia in the overall network?

>> Yeah, I think the way to think about it is going back to the data center as the computer. If you're thinking about the data center as the computer, then networking becomes the back plane, if you will, of that data center computer, and having a high performance network is really critical. Mellanox has been a leader there for 20 years now, with our InfiniBand and our Ethernet products. But beyond that, you need a programmatic interface, because one of the things that's really important in the cloud is that everything is software defined and containerized now, and there is no better company in the world than Cumulus, really the pioneer in building Cumulus Linux, taking the Linux operating system and running it on multiple vendors' hardware, so not just hardware from Mellanox but hardware from other people as well. You need to support that whole notion of an open networking platform, and now you have a programmatic interface that you can drop containers on top of. Cumulus has been the leader in Linux FRR, Free Range Routing, the core routing suite, and that really is at the heart of other open source network operating systems like SONiC and DENT. So we see a lot of synergy here, along with all the analytics that Cumulus brings to bear with NetQ. It's really great that they're going to be part of the Nvidia team.

>> Excellent. Well, thank you both so much. I want to give you the final word: what should HPE customers and their ecosystem know about the Nvidia and HPE partnership?

>> Yeah, so I'll start. HPE has been a long-time partner and a customer of ours. If you have accelerated workloads, you need to connect them together, and the HPE server portfolio is an ideal place to do that. We can combine some of the work we're doing with our new Ampere GPUs and existing GPUs, connect those together with the M series, their Ethernet switches based on our Spectrum switch platforms, and then there are all of the HPC-related activities on InfiniBand, where they're a great partner. Pulling all of that together, now, as the edge becomes more and more important, security becomes more and more important, and you have to go to a zero trust model: if somebody plugs in a camera at the edge, even if it's on a car, you can't trust it. Everything has to be validated and authenticated, and all the data needs to be encrypted. And so they're going to be a great partner, because they've been a leader in building the most secure platforms in the world.
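Paresh's manufacturing example, catching defects early in the production cycle, typically boils down to a small inference loop running next to the camera. A hedged sketch of the shape of such a loop follows; the model, threshold, and frame source are all invented placeholders, not anything from a real deployment:

```python
# Sketch of an edge inference loop for visual defect detection.
# "load_model", the threshold, and the camera feed are illustrative
# placeholders; a real line would use a trained model on an optimized
# inference runtime.
import torch

def load_model():
    # Placeholder: stands in for loading a trained defect classifier.
    return torch.nn.Sequential(torch.nn.Flatten(),
                               torch.nn.Linear(3 * 64 * 64, 2))

def frames():
    # Placeholder camera feed: random tensors shaped like small RGB frames.
    for _ in range(10):
        yield torch.rand(1, 3, 64, 64)

model = load_model().eval()
DEFECT_THRESHOLD = 0.9  # tuned per line; false stops are expensive

with torch.no_grad():
    for i, frame in enumerate(frames()):
        p_defect = torch.softmax(model(frame), dim=1)[0, 1].item()
        if p_defect > DEFECT_THRESHOLD:
            # In production this would raise an alert or divert the part.
            print(f"frame {i}: possible defect (p={p_defect:.2f})")
```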
>> Yeah, and on the data center server portfolio side, we work very closely with HPE on various different lines of products, really fantastic servers: the Apollo line of scale-up servers, Synergy and the ProLiant line, Edgeline for the edge, and on the supercomputing side, the Cray side of things. So we really work across the full spectrum of solutions with HPE. We also work on the software side, where a lot of these servers are certified to run our full stack under a program that we call NGC-Ready, so customers get phenomenal value right off the bat: they're guaranteed that accelerated workloads will work well when they choose these servers.

>> Awesome. Well, thank you both for giving us the updates. Lots happening, obviously, in the AI space. Appreciate all the updates.

>> Thanks, Stu, great to talk to you. Stay well.

>> Thanks, Stu, take care.

>> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman, and thank you for watching theCUBE. (bright upbeat music)

Published Date : Jun 24 2020
