

Kevin Deierling, NVIDIA and Scott Tease, Lenovo | CUBE Conversation, September 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

>> Hi, I'm Stu Miniman, and welcome to a CUBE Conversation. I'm coming to you from our Boston area studio, and we're going to be digging into some interesting news regarding networking, and some important use cases these days in 2020. Of course, AI is a big piece of it. So, happy to welcome to the program, first of all, one of our CUBE alumni, Kevin Deierling. He's the Senior Vice President of Marketing at Nvidia, part of the networking team there. And joining him is Scott Tease, someone we've known for a while but who's on the program for the first time, the General Manager of HPC and AI for the Lenovo Data Center Group. Scott and Kevin, thanks so much for joining us.

>> It's great to be here, Stu.

>> Yeah, thank you.

>> Alright, so Kevin, as I said, you've been on the program a number of times, first when it was just Mellanox, now of course as the networking team, with some other acquisitions that have come in. If you could just set us up with the relationship between Nvidia and Lenovo, and there's some news today that we're here to talk about too. So let's start getting into that, and then Scott, you'll jump in after Kevin.

>> Yeah, so we've been a long-time partner with Lenovo on our high performance computing, and that's the InfiniBand piece of our business. More and more, we're seeing that AI workloads are very, very similar to HPC workloads, so that's been a great partnership that we've had for many, many years. And now we're expanding that, and we're launching an OEM relationship with Lenovo for our Ethernet switches. With our Ethernet switches, we really take that heritage of low latency, high performance networking that we built over many years in HPC, and we bring that to Ethernet. And of course that can be with HPC, because frequently in an HPC supercomputing environment, or in an AI supercomputing environment, you'll also have an Ethernet network, either for management or sometimes for storage. And now we can offer that together with Lenovo. So it's a great partnership. We talked about it briefly last month, and now we're coming to market and we'll be able to offer this to the market.

>> Yeah, Kevin, we're super excited about it here at Lenovo as well. We've had a great relationship over the years with Mellanox, now Nvidia Mellanox, and this is just the next step. We've shown in HPC that the days of just taking an Ethernet card or an InfiniBand card, plugging it in the system, and having it work properly are gone. You really need a system that's engineered for whatever task the customer is going to run. We've known that in HPC for a long time, and as we move into workloads like artificial intelligence, networking is a critical aspect of getting these systems to communicate with one another and work properly together. From an HPC perspective, we love to use InfiniBand, but most enterprise clients are using Ethernet. So where do we go? We go to a partner that we've trusted for a very long time, and we selected the Nvidia Mellanox Ethernet switch family. We're really excited to be able to bring that end-to-end solution to our enterprise clients, just like we've been doing for HPC for a while.

>> Yeah, well Scott, maybe if you could, I'd love to hear a little bit more about that customer demand and those usages.
So traditionally, of course, you think of supercomputing, and as you both talked about, that move from InfiniBand to leveraging Ethernet is something that's been talked about for quite a while now in the industry. But for AI specifically, could you talk about what the networking requirements are? How similar is it? Is it 95% of the same architecture as what you see in HPC environments? And also, I guess the big question there is, how fast are customers adopting and rolling out those AI solutions, and what kind of scale are they getting them to today?

>> So yeah, there's a lot of good things there we can talk about. I'd say in HPC, the thing that we've learned is that you've got to have a fabric that's up to the task. When you're testing an HPC solution, you're not looking at a single node, you're looking at a combination of servers, storage, and management; all these things have to come together, and they come together over an InfiniBand fabric. So we've got this nearly purpose-built fabric that's been fantastic for the HPC community for a long time. As we start to do some of that same type of workload, but in an enterprise environment, many of those customers are not used to InfiniBand; they're used to an Ethernet fabric, something that they've got all throughout their data center. What we want to do is take a lot of that rock-solid interoperability and pre-tested capability, and bring it to our enterprise clients for these AI workloads. Anything with high performance GPUs means lots of internode communication, worries about traffic and congestion, abnormalities in the network that you need to spot. Those things happen quite often when you're doing these enterprise AI solutions. You need a fabric that's able to keep up with that, and the Nvidia networking is definitely going to be able to do that for us.

>> Yeah, well Kevin, I heard Scott mention GPUs here. So this kind of highlights one of the reasons why we've seen Nvidia expand its networking capabilities. Could you talk a little bit about that expansion of the portfolio, and how these use cases really are going to highlight what Nvidia helps bring to the market?

>> Yeah, we really like to focus on accelerated computing applications, whether those are HPC applications or, now, workloads becoming much more broadly adopted in the enterprise. One of the things we've done is tight integration at a product level between the GPUs and the networking components in our business, whether that's the adapters, or the DPU, the data processing unit, which we've talked about before, and now even the switches here with our friends at Lenovo, really bringing that all together. But most important is the platform level, and by that I mean the software. The enterprise has all kinds of different verticals that it's going after, and we invest heavily in the software ecosystem that's built on top of the GPU and the networking. By integrating all of that together on a platform, we can really accelerate the time to market for enterprises that want to leverage these modern workloads, sort of cloud native workloads.

>> Yeah, please Scott, if you have some follow-up there.

>> Yeah, if you don't mind Stu, I'd just like to say, five years ago, the roadmap that we followed was the processor roadmap. We all could tell you to the week when the next Xeon processor was going to come out, and that's what drove all of our roadmaps.
Since that time, what we've found is that the items making the radical, the revolutionary improvements in performance are attached to the processor, but they're not the processor itself. It's things like the GPU, and especially the networking adapters. So trying to design a platform that's solely based on a CPU, and then jamming these other items on top of it, no longer works. You have to design these systems in a holistic manner, where you're designing for the GPU and you're designing for the network. That's the beauty of having a deep partnership like we share with Nvidia, on both the GPU side and on the networking side: we can do all that upfront engineering to make sure that the platform, the systems, the solution as a whole works exactly how the customer is going to expect it to.

>> Kevin, you mentioned that a big piece of this is software now. I'm curious, there's an interesting piece that your networking team has picked up relatively recently, Cumulus Linux. So help us understand how that fits into the Ethernet portfolio. And would it show up in these kinds of applications that we're talking about?

>> Yeah, that's a great question. You're absolutely right, Cumulus is integral to what we're doing here with Lenovo. If you looked at the heritage that Mellanox had, and Cumulus, it's all about open networking. What we mean by that is we really decouple the hardware and the software, so we support multiple network operating systems on top of our hardware, whether that's, for example, SONiC, or our own Onyx, or DENT, which is based on SwitchDev. Cumulus, who we just recently acquired, has also been on that same axis of open networking, and they really support multiple platforms. Now we've added a new platform with our friends at Lenovo, and they've adopted Cumulus. It is very much centered on the enterprise, and really a cloud-like experience in the enterprise, where it's Linux, but it's highly automated. Everything is operationalized and automated. As a result of that, you get sort of the experience of the cloud, but with the economics that you get in the enterprise. So it's kind of the best of both worlds in terms of network analytics, and all of the ability to do the things that the cloud guys are doing, but fully automated, and for an enterprise environment.

>> Yeah, so Kevin, I just want to say a few things about this. We're really excited about the Cumulus acquisition here. When we started our negotiations with Mellanox, we were still planning to use Onyx. We love Onyx, it's been our network OS of choice; our users love it, our architects love it. But we were trying to lean towards a more open, kind of futuristic network OS as we got started with this, and Cumulus is really perfect. It's a Linux, open source based system, and we love open source in HPC. The great thing about it is, we're going to be able to take all the great learnings that we've had with Onyx over the years, and now be able to consolidate those inside of Cumulus. We think it's the perfect way to start this relationship with Nvidia networking.

>> Well Scott, help us understand a little more: what does this expansion of the partnership mean? If you're talking about really the full solutions that Lenovo offers, the ThinkAgile brand, as well as the hybrid and cloud solutions.
Is this something that's just baked into the solution? Is it a resale arrangement? What should customers and your channel partners understand about this?

>> Yeah, so any of the Lenovo solutions that require a switch to perform the functionality needed across the solution are going to show up with the networking from Nvidia inside of them, for a couple of reasons. One is that even for something as simple as solution management for HPC, the switch is so integral to how we do all of that, how we push all those functions down, how we deploy systems. So you've got to have a switch, and a connectivity methodology, that ensures we know how to deploy these systems, and no matter what scale they are, from a few systems up to literally thousands of systems, we've got something that we know how to do. Then, when we're selling these solutions, like an SAP solution for instance, the customer is not buying a server anymore; they're buying a solution, they're buying a functionality. And we want to be able to test that in our labs, to ensure that that system, that rack, leaves our factory ready to do exactly what the customer is looking for. So any of the systems that are going to be coming from us pre-configured and pre-tested are all going to have Nvidia networking inside of them.

>> Yeah, and you mentioned the hybrid cloud; I think that's really important. That's really where we cut our teeth, first in InfiniBand, but also with our Ethernet solutions. Today, we're really driving a bunch of the big hyperscalers, as well as the big clouds. And as you see things like SAP, or Azure, with Azure Stack coming into a hybrid environment, it's really important that you have a known commodity there. We're built in to many of those different platforms, with our Spectrum ASIC as well as our adapters. And so now the ability, with Nvidia and Lenovo together, to bring that to enterprise customers is really important. I think it's a proven set of components that together forms a solution. And that's the real key, as Scott said: delivering a solution, not just piece parts. We have a platform: software, hardware, all of it integrated.

>> Well, it's great to see; you've had an existing partnership for a while. I want to give you both the opportunity: anything specific you've been hearing in the customer demand leading up to this? Is it people that might be transitioning from InfiniBand to Ethernet? Or is it just general market adoption of new solutions that you have out there? (speakers talk over each other)

>> You go ahead and start.

>> Okay, so I think that there are different networks for different workloads, is what we've seen. InfiniBand certainly is going to continue to be the best platform out there for HPC, and often for AI. But as Scott said, the enterprise frequently is not familiar with that, and for various reasons would like to leverage Ethernet. So I think we'll see two different cases: one where there's an Ethernet network alongside an InfiniBand network, and the other for the new enterprise workloads that are coming, very AI-centric, modern workloads, sort of cloud native workloads. You have all of the infrastructure in place with our Spectrum ASICs and our ConnectX adapters, now integrated with GPUs, so that we'll be able to deliver solutions rather than just components. And that's the key.
>> Yeah, I think, Stu, a great example of where you need that networking, like we've been used to in HPC, is when you start looking at deep learning training, scale-out training. A lot of companies have been stuck on a single workstation because they haven't been able to figure out how to spread that workload out and chop it up, like we've been doing in HPC, because they've been running into networking issues. They can't run over an unoptimized network. With this new technology, we're hoping to be able to do a lot of the same things that HPC customers take for granted every day: workload management, distribution of workload, chopping jobs up into smaller portions and feeding them out to a cluster. We're hoping that we're going to be able to do those exact same things for our enterprise clients. It's going to look magical to them, but it's the same kind of thing we've been doing forever with Mellanox in the past, and now with Nvidia networking we're just going to take that to the enterprise. I'm really excited about it.

>> Well, there's so much flexibility. It used to take a decade to roll out new generations. Kevin, if you could just give us the latest speeds and feeds. If I look at Ethernet, did I see that this goes all the way up to 400 gig? I think I lose track a little bit of some of the pieces. I know the industry as a whole is driving it, but where are we with general customer adoption of some of the speeds today?

>> Yeah, indeed, we're coming up on the 40th anniversary of the first specification of Ethernet, and we're about 40,000 times faster now, at 400 gigabits versus 10 megabits. So yeah, we're shipping today, at the adapter level, 100 gig and even 200 gig, and then at the switch level, 400 gig. And people sort of ask, "Do we really need all that performance?" The answer is absolutely. The amount of data that the GPU can crunch, and these AI workloads, these giant neural networks, need massive amounts of data. And then as you're scaling out, as Scott was talking about, much along the lines of InfiniBand, Ethernet needs that same level of performance, throughput, latency, and offloads, and we're able to deliver.

>> Yeah, so Kevin, thank you so much. Scott, I want to give you the final word here. Anything else you want your customers to understand regarding this partnership?

>> Yeah, just a quick one, Stu. We've been really fortunate in working really closely with Mellanox over the years, and with Nvidia, and now with the two together, we're just excited about what the future holds. We've done some really neat things in HPC: we were one of the first to water-cool an InfiniBand card, we were one of the first companies to deploy a Dragonfly topology, and we've done some unique things where we can share a single adapter across multiple users. We're looking forward to doing a lot of that same exact kind of innovation inside of our systems as we look to Ethernet. We often think that as speeds of Ethernet continue to go higher, we may see more and more people move from InfiniBand to Ethernet. Having both of these offerings inside of our lineup is going to make it really easy for customers to choose what's best for them over time. So I'm excited about the future.

>> Alright, well Kevin and Scott, thank you so much. Deep integration and customer choice, important stuff. Thank you so much for joining us.

>> Thank you, Stu.

>> Thanks, Stu.

>> Alright, I'm Stu Miniman, and thank you.
Thanks for watching theCUBE. (upbeat music)

Published Date: Sep 15, 2020

