

Ami Badani, NVIDIA & Mike Capuano, Pluribus Networks


 

(upbeat music) >> Let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of Networking Marketing and Developer Ecosystem at NVIDIA. Great to have you, welcome folks. >> Thank you. >> Thanks. >> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have, Mike? Let's get into it. >> The challenges that we're looking at are for non-hyperscalers: enterprises, governments, Tier 2 service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing growth in cyber attacks. It's not slowing down. It's only getting worse, and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> With that goal in mind, what's the Pluribus vision? How does this tie together? >> So basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks; that needs to be unified. If we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all of those locations with one command, and not have to go to each one. The second is, like I mentioned, distributed security.
Distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. It's sort of like with security: you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure; that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction. Abstract the complexity of all these discrete networks; whatever's down there in the physical layer, I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations, NextGen. How do we get there? How do customers get this vision realized? >> That's a great question. And I appreciate the tee up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision. And that is a vision of where Pluribus is headed with our partners like NVIDIA long term. And that is about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds. And whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric.
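The "one command, every location" model described above can be sketched roughly as follows. This is a hypothetical illustration only; the names (`POLICY`, `apply_everywhere`) are invented for the example and are not the Pluribus API.

```python
# Hypothetical sketch: one declarative security policy fanned out to every
# cloud location through a single controller call, instead of configuring
# each cloud separately. All names are illustrative.

POLICY = {
    "name": "block-web-to-db",
    "match": {"src_segment": "web", "dst_segment": "db", "port": 5432},
    "action": "deny",
}

LOCATIONS = ["private-cloud", "edge-cloud", "public-cloud-a", "public-cloud-b"]

def apply_everywhere(policy, locations):
    """Push one policy to every site; the fabric, not the operator,
    handles per-site translation."""
    return {site: {"policy": policy["name"], "status": "applied"}
            for site in locations}

result = apply_everywhere(POLICY, LOCATIONS)
```

The point of the sketch is the shape of the operation: the operator defines the policy once, and the fabric applies it uniformly across every location.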
And what's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over 100 Tier 1 mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is extending from the switch to this NVIDIA BlueField-2 DPU. We know there's... >> Hold that up real quick. That's a good prop. That's the BlueField NVIDIA card. >> It's the NVIDIA BlueField-2 DPU, data processing unit. What we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power. So we knew that we didn't want to implement that running on the CPUs, which is what some other companies do, because it consumes revenue generating CPUs from the application. So a DPU is a perfect way to implement this. And we knew that NVIDIA was the leader with this BlueField-2. And so that's the first step into realizing this vision. >> NVIDIA has always been powering some great workloads with GPUs, now you've got DPUs. Networking at NVIDIA is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> We've been working with Pluribus for quite some time. I think the last several months were really when it came to fruition, bringing together what Pluribus is trying to build and what NVIDIA has. So we have this concept of a BlueField data processing unit which, if you think about it, conceptually does really three things: offload, accelerate, and isolate. So offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate, so there's a bunch of acceleration engines; you can run infrastructure workloads much faster than you would otherwise.
And then isolate. So you have this nice security isolation between the data processing unit and your other CPU environment, and so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple years ago. And with Pluribus, we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric, fits really nicely with the DPU: running that on the DPU and extending it really from your physical switch all the way to your host environment, specifically on the data processing unit. So think about what's happening as you add data processing units to your environment. Every server, we believe, is over time going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And so what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. If you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that, extend it to the data processing unit, and really have isolated, micro-segmented workloads, whether it's bare metal, cloud native environments, virtualized environments, whether it's public cloud, private cloud, hybrid cloud. So it really is a magical partnership between the two companies, with their unified cloud fabric running on the DPU.
>> You know what I love about this conversation is it reminds me of when you have these changing markets. The product gets pulled out of the market and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart for customers? What's in it for the customer? >> So I mentioned three things in terms of the value of what BlueField brings: offloading, accelerating, and isolating. And those are the key core tenets of BlueField. So if you think about what we've done in terms of differentiation, we're really a robust platform for innovation. We introduced BlueField-2 last year. We're introducing BlueField-3, which is our next generation of BlueField. It'll have 5X the ARM compute capacity. It will have 400 gig line rate acceleration, 4X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is that if you look at NVIDIA, what we're known for is really our AI, our artificial intelligence and our artificial intelligence software, as well as our GPUs. So you look at artificial intelligence and the combination of artificial intelligence plus data processing: this really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. And so with NVIDIA we really have these converged accelerators, where we've combined the GPU, which does all your AI processing, with your data processing with the DPU. So we have this really nice convergence in that area. And I would say the third area is really around our developer environment.
One of our key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, we've created an open SDK called DOCA. And it's an open SDK for our partners to really build and develop solutions using BlueField, using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. >> What's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment, super cloud, or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. And again, this is the new architecture, Mike, that you were talking about. How do customers run this cost effectively? And how do people migrate? >> I think that is the key question. So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and Tier 2 service providers and Tier 1 service providers and governments are not Amazon. So they need to migrate there, and they need this architecture to be cost effective. And that's super key. I mean, the reality is DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs: IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU. And by leveraging the NVIDIA BlueField DPU, what we really like about it is it's open, and that drives cost efficiencies.
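The mixed-deployment point above (some servers get DPUs now, some later, some never) can be sketched as a fabric that simply picks the enforcement point per server. This is an illustrative sketch under assumed names, not the actual product logic.

```python
# A sketch of one fabric spanning both switch and DPU: policy is enforced
# on the DPU where one exists and falls back to the top-of-rack switch
# where one does not. All names are illustrative.

SERVERS = [
    {"name": "app-01", "has_dpu": True},      # new server, DPU on day one
    {"name": "legacy-db", "has_dpu": False},  # legacy server, maybe never
    {"name": "iot-gw", "has_dpu": False},     # IoT gateway, no DPU
]

def enforcement_point(server):
    # One fabric, two possible enforcement points.
    return "dpu" if server["has_dpu"] else "tor_switch"

plan = {s["name"]: enforcement_point(s) for s in SERVERS}
```

The operator sees one policy surface either way; only the enforcement location differs per host.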
And then, with our architectural approach, effectively you get a unified solution across switch and DPU, workload independent. It doesn't matter what hypervisor it is. Integrated visibility, integrated security, and that can create tremendous cost efficiencies and really extract a lot of the expense from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time. >> So let me rewind that, 'cause that's super important. Got the unified cloud architecture. I'm the customer, it's implemented. What's the value again? Take me through the value to me. I have a unified environment. What's the value? >> There are a few pieces of value. The first piece of value is I'm creating this clean demarc. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the DevOps team who own the server and the NetOps team who own the network, because they're installing software on the CPU, stealing cycles from what should be revenue generating CPUs. So now, by terminating the networking on the DPU, we create this real clean demarc. The DevOps folks are happy because they don't necessarily have the skills to manage the network, and they don't necessarily want to spend the time managing it. And their network counterparts, the NetOps team, are also happy because they want to control the networking. And now we've got this clean demarc where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security.
This is essential, I mentioned it earlier: pushing out micro-segmentation and distributed firewalls basically at the application level, where I create these small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. 'Cause the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc. And so that's why this is so essential. And the next benefit, obviously, is this unified networking operating model. Having an operating model across switch and server, underlay and overlay, workload agnostic, makes the life of the NetOps teams much easier, so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example. >> Awesome, and I think also from my standpoint, I mean, perimeter security is pretty much gone. I guess the firewall still exists out there, but the perimeter is being breached all the time. You have to have this new security model. And I think the other thing that you mentioned, the separation between DevOps and NetOps, is cool, because infrastructure as code is about making the developers be agile and building security in from day one. So this policy aspect is a huge new control plane. I think you guys have a new architecture that enables security to be handled more flexibly. That seems to be the killer feature here. >> If you look at the data processing unit, I think one of the great things about this new architecture is it's really the foundation for zero trust. So like you talked about, the perimeter is getting breached. And so now each and every compute node has to be protected.
And I think that's what you see with the partnership between Pluribus and NVIDIA: the DPU is really the foundation of zero trust, and Pluribus is really building on that vision by enabling micro-segmentation and being able to protect each and every compute node as well as the underlying network. >> This is super exciting. This is an illustration of how the market's evolving: architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Michael, start with you. What does the relationship look like in the go to market with NVIDIA? >> We're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. Obviously we appreciate that NVIDIA's open; that's sort of in our DNA, we're about open networking. They've got other ISVs who are going to run on BlueField-2. We're probably going to run on other DPUs in the future. But right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it. >> Oh man, NVIDIA's got the hot product. >> So BlueField-2, as I mentioned, went GA last year, and we now also have the converged accelerator. So I talked about our artificial intelligence software; with the BlueField DPU, all of that is put together on a converged accelerator. The nice thing there is, if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. That's available now. Then we have BlueField-3, which will be available late this year.
And I talked about how much better that next generation of BlueField is in comparison to BlueField-2. So we'll see BlueField-3 shipping later this year. And then there's our software stack, which I talked about, called DOCA. We're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 about two months from now. And so that's really our open ecosystem framework that allows you to program the BlueField. So we have all of our acceleration libraries, security libraries, all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. As we add new generations of BlueField, next year we'll have another version, and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of BlueField. So that's the nice thing about DOCA. And then in terms of our go to market model, we're working with every major OEM. Later this year you'll see major server manufacturers releasing BlueField enabled servers, so more to come. >> Awesome: save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >> And one thing I'll add is we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up to be part of our field trial and providing feedback on the product. >> Awesome innovation in networking. Thanks so much for sharing the news. Really appreciate it, thanks so much. In a moment we'll be back to look deeper at the product, the integration, security, and zero trust use cases.
You're watching theCUBE, the leader in enterprise tech coverage. (upbeat music)

Published Date : Mar 16 2022


Pete Lumbis, NVIDIA & Alessandro Barbieri, Pluribus Networks


 

(upbeat music) >> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, director of technical marketing at NVIDIA, joining remotely. Guys, thanks for coming on, appreciate it. >> Yeah, thanks a lot. >> I'm happy to be here. >> So a deep dive, let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it? >> Yeah, first let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping, in volume, in multiple mission critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standards-based, open network operating system for the data center. And the novelty of this operating system is that it integrates a distributed control plane to automate the fabric with an SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it's not closed. And this is actually what we're now porting to the NVIDIA DPU.
Even more, we can also manage this network node, this switch-on-a-NIC, completely independently from the host. You don't have to go through the network operating system running on X86 to control this network node. So you truly have, effectively, a top of rack experience for a virtual machine or for Kubernetes pods where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now we are connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of the NVIDIA DOCA API to program the accelerators. And you accomplish two things with that. Number one, you have much better performance than running the same network services on an X86 CPU. And second, this gives you the ability to free up, I would say, around 20-25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So there are great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely, there is zero code from Pluribus running on the X86. And this is why we think this enables a very clean demarcation between compute and network. >> So Pete, I've got to get you in here. We heard that the DPU enables cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps, right? Now you've got NetSecOps. This separation, why is this clean separation important? >> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all rainbows and unicorns, but it's a little messier than that.
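The 20-25% figure above can be checked with back-of-envelope arithmetic: if infrastructure services consume that share of every host CPU, offloading them to the DPU lets the same workload run on a smaller fleet. The core counts below are hypothetical, chosen only to illustrate the math.

```python
import math

# If infra services eat a fraction of each server's CPU, the usable share
# shrinks and more servers are needed for the same workload. Offloading
# the infra services to the DPU restores that capacity.

def servers_needed(workload_cores, cores_per_server, infra_overhead):
    usable = cores_per_server * (1 - infra_overhead)
    # Round up: a partial server is still a whole server.
    return math.ceil(workload_cores / usable)

before = servers_needed(1000, 64, 0.25)  # infra consumes 25% of each CPU
after = servers_needed(1000, 64, 0.0)    # infra offloaded to the DPU
```

With these assumed numbers, `before` is 21 servers and `after` is 16, roughly the footprint reduction Alessandro describes.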
I think with a lot of the DevOps stuff, that mentality and philosophy, there's a natural fit there. You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance. And I think that distance isn't going to be closed, and so, again, it comes down to pragmatism. One of my favorite phrases is: look, good fences make good neighbors. And that's what this is. >> Yeah, and it's a great point, 'cause DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly. And I think that's where, one, the policy, the security, the zero trust aspect of this comes in, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security's part of that. But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding up to 48 servers per rack. And so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. >> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up you know what. So this is a huge deal.
Networking matters, security matters, automation matters, DevOps, NetOps, all coming together with clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about what we are really unifying. If we're unifying something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf and spine topologies. This is actually a well understood problem, I would say. There are multiple vendors with similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer and deploy segmentation and security closer to the workloads. And this is where the complication arises. This high value part of the cloud network is where you have a plethora of options that don't talk to each other and are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, a Hyper-V, or a Xen are completely disjointed. You have multiple orchestration layers.
And then when you throw in Kubernetes in this type of architecture, you are introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually end up with multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed, and we're trying to tackle this problem first with the notion of a unified fabric, which is independent from any workload, whether this fabric spans a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's problem number one. >> It's interesting, I hear you talking and I hear one network among different operating models. Reminds me of the old serverless days. There are still servers, but they call it serverless. Is there going to be a term network-less? Because at the end of the day it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I'm just joking, serverless and network-less, but the idea is it should be one thing. >> Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that sort of operational efficiency at the server layer. And this is what we're trying to attack first with this technology.
The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people today segment and secure the traffic in the cloud. >> Awesome. Pete, all kidding aside about network-less and serverless, kind of a fun play on words there, the network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU based approach is better than the alternatives? >> Yeah, I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. So it's the, yo dawg, I heard you like a server, so I put a server inside your server. We provide ARM CPUs, memory, and network accelerators inside, and that is completely isolated from the host. The actual X86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only this separation within the data plane, but you have this complete control plane separation, so you have this element that the network team can now control and manage. We're taking all of the functions we used to do at the top of rack switch and we're distributing them now. And as time has gone on, we've struggled to put more and more and more into that network edge.
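The micro-segmentation model Alessandro describes can be reduced to a very small sketch: east-west flows are permitted only when an explicit segment-pair rule exists, and everything else is denied, which is what contains a compromised workload. The segment names below are illustrative, not any product's policy schema.

```python
# Minimal sketch of per-application micro-segmentation: an allow-list of
# segment pairs with default deny for everything else.

ALLOWED = {
    ("web", "app"),  # web tier may call the app tier
    ("app", "db"),   # app tier may call the database
}

def permit(src_segment, dst_segment):
    # Default deny: no explicit rule, no flow.
    return (src_segment, dst_segment) in ALLOWED

assert permit("web", "app")
assert not permit("web", "db")  # lateral move is blocked
```

The containment property falls out of the default-deny rule: a bad actor who lands in the `web` segment cannot reach `db` directly, because no rule for that pair exists.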
And the reality is the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think, outside of today's solutions around virtual firewalls, the other option is centralized appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that the VLAN's good enough, or we hope that the VXLAN tunnel's good enough. And we can't actually apply more advanced techniques there, because we can't financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer, real quick? And I think this is an interesting point you mentioned: policy. Everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices and whatnot, all that good stuff going on there, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment flexibility relative to security policies and application enablement. What does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. So that makes for simplicity: smaller blast radiuses for failures, smaller failure domains. Maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each one of those layers, so just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there, and so you get this extreme flexibility.
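The affordability argument above is easy to see with rough numbers (all invented for illustration): a centralized appliance must be sized for the aggregate east-west traffic of every host behind it, while in the distributed model each DPU only has to inspect its own host's traffic.

```python
# Sizing sketch: centralized inspection vs. per-host (DPU) inspection.
# Numbers are illustrative, not measurements.

def centralized_capacity_gbps(hosts, gbps_per_host):
    # One appliance must be able to see everything.
    return hosts * gbps_per_host

def per_dpu_capacity_gbps(gbps_per_host):
    # Each DPU only sees the traffic of the host it sits in.
    return gbps_per_host

aggregate = centralized_capacity_gbps(1000, 25)  # one box sized for 25,000 Gbps
per_node = per_dpu_capacity_gbps(25)             # each DPU sized for 25 Gbps
```

The inspection requirement per device stays flat as the fleet grows, which is why deeper per-flow techniques become financially feasible in the distributed model.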
You can start small, you can scale large. To me the possibilities are endless. >> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution? >> Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier two and tier three cloud providers. We talked to SPs, soft Telco types of networks, as well as large enterprise customers. Let me call out a couple of examples here just to give you a flavor. There is a cloud provider in Asia who is actually managing a cloud where they're offering services based on multiple hypervisors. Their native services are based on Xen, but they also on-ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of now orchestrating, through their orchestrator, integrating with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments. And in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats into the server CPU. The promise that they saw in this technology, which they actually call game changing, is to remove all this complexity, having a single network, and distribute the micro-segmentation service directly into the fabric. And overall they're hoping to get out of it a tremendous OPEX benefit and overall operational simplification for the cloud infrastructure. That's one important use case.
Another global enterprise customer is running both ESXi and Hyper-V environments, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the Telco space, we're working with a few Telco customers on the EFT program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex. It is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the Telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >> That was a great use case. A lot more potential I see with unified cloud networking. Great stuff. Pete, shout out to you, because we've been following NVIDIA's success for a long time, continuing to innovate as cloud scales, and Pluribus with unified networking kind of brings it to the next level. Great stuff. Great to have you guys on, and again, software keeps driving the innovation, and networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem, they're trying to think about multiple clouds, they're trying to think about unification around the network and giving more security and more flexibility to their teams. How can people learn more? >> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference, the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc.
You can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, fill out the form, and Pluribus will contact them to learn more or to sign up for the actual early field trial program, which starts at the end of April. >> Okay, well, we'll leave it there. Thank you both for joining, appreciate it. Up next you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Mar 16 2022



Changing the Game for Cloud Networking | Pluribus Networks


 

>> Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. One of the best examples is Amazon's Nitro, AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted.
They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud for many reasons. That's one of the tailwinds for tier two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers and don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers want to be more like hyperscalers, right? They want to be more agile, and they do that.
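The wasted-cores estimate quoted above turns into a simple back-of-the-envelope calculation: if roughly 30% of CPU cores are doing offloadable networking, storage, and security work, moving that work to a DPU reclaims those cores for revenue-generating applications. The fleet sizes below are invented for illustration.

```python
# Rough sizing of the "recaptured dollars" argument: cores freed by
# offloading infrastructure work from host CPUs to DPUs.

def cores_reclaimed(servers, cores_per_server, wasted_fraction=0.30):
    """Cores doing offloadable work across a fleet (truncated to whole cores)."""
    return int(servers * cores_per_server * wasted_fraction)

# Example: a rack of 48 servers with 64 cores each.
reclaimed = cores_reclaimed(48, 64)
```

Whatever the exact fraction turns out to be in a given shop, the structure of the argument is the same: the recaptured cores scale linearly with fleet size, which is why the economics compound at data center scale.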
They're distributing networking and security functions and pushing them closer to the applications.
>> Now, at the same time, they're unifying their view of the network so it can be less fragmented, managed more efficiently, with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today on this special CUBE presentation, John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and NVIDIA to specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike Capuano of Pluribus and Ami Badani of NVIDIA. Then we'll bring on Alessandro Barbieri of Pluribus and Pete Lumbis from NVIDIA to take a deeper dive into the technology. And then we're going to bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this, over to John.
>> Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking, marketing, and developer ecosystem at NVIDIA. Great to have you, welcome folks.
>> Thank you. Thanks.
>> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have? Mike, let's get into it.
>> Yeah, the challenges we're looking at are for non-hyperscalers: enterprises, governments, tier two service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies.
And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down, it's only getting worse, and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host.
>> Okay. With that goal in mind, what's the Pluribus vision? How does this tie together?
>> Yeah. So basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are sort of discrete, bespoke cloud networks: per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks, and that needs to be unified. If we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command, and not have to go to each one. The second is, like I mentioned, distributed security. Distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. It's sort of like with security: you really can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, and tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation.
All of this needs to be SDN enabled. So this is related to my comment about abstraction: abstract the complexity of all of these discrete networks, whatever's down there in the physical layer. I don't want to see it, I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation.
>> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized?
>> That's a great question, and I appreciate the tee up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like NVIDIA long term. And that is about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds. And whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. So that's ultimately where we want to get, and that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our Adaptive Cloud Fabric. What's nice about this is we're not starting from scratch. We have an award-winning Adaptive Cloud Fabric product that is deployed globally, and in particular we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is extending from the switch to this NVIDIA BlueField-2 DPU.
We know there's a...
>> Hold that up real quick. That's a good prop. That's the BlueField, NVIDIA.
>> It's the NVIDIA BlueField-2 DPU, data processing unit. And what we're doing, fundamentally, is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPUs from the application. So a DPU is a perfect way to implement this, and we knew that NVIDIA was the leader with this BlueField-2. And so that's the first step in getting to realizing this vision.
>> I mean, NVIDIA has always been powering some great workloads with GPUs. Now you've got the DPU and networking, and NVIDIA is here. What is the relationship with Pluribus? How did that come together? Tell us the story.
>> Yeah. So we've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition, between what Pluribus is trying to build and what NVIDIA has. So we have this concept of a BlueField data processing unit, which, if you think about it, conceptually does really three things: offload, accelerate, and isolate. Offload your workloads, your infrastructure workloads that is, from your CPU to your data processing unit. Accelerate: there's a bunch of acceleration engines, so you can run infrastructure workloads much faster than you would otherwise. And then isolation: you have this nice security isolation between the data processing unit and your other CPU environment, so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple of years ago, and we've been talking to the Pluribus team for quite some months now.
>> And I think really the combination of what Pluribus is trying to build and what they've developed around this unified cloud fabric fits really nicely with the DPU: running that on the DPU and extending it from your physical switch all the way to your host environment, specifically on the data processing unit. Think about what's happening as you add data processing units to your environment. Every server, we believe, over time is going to have data processing units, so now you'll have to manage that complexity from the physical network layer to the host layer. And what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. So that's really how the partnership truly started. It started with extending the network fabric, and now we're also working with them on security. If you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that, extend it to the data processing unit, and really have isolated, micro-segmented workloads, whether it's bare metal, cloud native environments, virtualized environments, public cloud, private cloud, or hybrid cloud. So it really is a magical partnership between the two companies, with their unified cloud fabric running on the DPU.
>> You know, what I love about this conversation is it reminds me of when you have these changing markets: the product gets pulled out of the market, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart? What's in it for the customer?
>> Yeah.
So I mentioned three things in terms of the value of what the BlueField brings, right? There's offloading, accelerating, and isolating. Those are the key core tenets of BlueField. So in terms of the differentiation, we're really a robust platform for innovation. We introduced BlueField-2 last year. We're introducing BlueField-3, which is our next generation of BlueField. It will have five times the ARM compute capacity, 400 gig line rate acceleration, and four times better crypto acceleration. So it will be remarkably better than the previous generation, and we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is that what NVIDIA is known for is really our AI, artificial intelligence, and our artificial intelligence software, as well as our GPUs.
So you look at artificial intelligence and the combination of artificial intelligence plus data processing: this really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. And so with NVIDIA we really have these converged accelerators, where we've combined the GPU, which does all your AI processing, with your data processing with the DPU. So we have this really nice convergence in that area. And I would say the third area is really around our developer environment.
So if you look at what we've done with the DPU, we've created an SDK, an open SDK called DOCA. It's an open SDK for our partners to really build and develop solutions using BlueField, using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology.
>> You know, what's exciting is when I hear you talk, it's like you realize that there's no one general-purpose network anymore. Everyone has their own super environment, Supercloud, or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools, right? And again, this is the new architecture, Mike, you were talking about. How do customers run this effectively and cost-effectively, and how do people migrate?
>> Yeah, I think that is the key question, right? So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there, and they need this architecture to be cost-effective, and that's super key. I mean, the reality is DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away, some servers will have DPUs in a year or two, and then there are devices that may never have DPUs: IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right?
And by leveraging the NVIDIA BlueField DPU, what we really like about it is it's open, and that drives cost efficiencies.
And then with our architectural approach, effectively you get a unified solution across switch and DPU, workload independent, it doesn't matter what hypervisor it is, with integrated visibility and integrated security. And that can create tremendous cost efficiencies and really extract a lot of the expense from the network, from a capital perspective as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service, or to create or deploy a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time.
>> All right, so let me rewind that, because that's super important. I've got the unified cloud architecture, I'm the customer guy, and it's implemented. What's the value again? Take me through the value to me. I have a unified environment, what's the value?
>> Yeah. So there are a few pieces of value. The first piece of value is I'm creating this clean demarc. I'm taking networking to the host, and like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the DevOps team, who owns the server, and the NetOps team, who owns the network, because they're installing software on the CPU, stealing cycles from what should be revenue-generating CPUs. So now, by terminating the networking on the DPU, we create this real clean demarc. The DevOps folks are happy because they don't necessarily have the skills to manage networking, and they don't necessarily want to spend the time managing it. And they've got their network counterparts, the NetOps team, who are also happy because they want to control the networking.
And now we've got this clean demarc, where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential. I mentioned earlier pushing out micro-segmentation and distributed firewalling, basically at the application level, where I create these small segments on a by-application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Because the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc, right? And so that's why this is so essential. And the next benefit, obviously, is this unified networking operating model: an operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example.
>> Awesome. And I think also, from my standpoint, perimeter security, I mean, the firewall is still out there, it exists, but the perimeter is being breached pretty much all the time. So you have to have this new security model. And I think the other thing that you mentioned, the separation from DevOps, is cool, because infrastructure as code is about making the developers agile and building security in from day one. So this policy aspect is huge. New control points. I think you guys have a new architecture that enables the security to be handled more flexibly.
>> Right.
>> That seems to be the killer feature here.
>> Right, yeah. If you look at the data processing unit, I think one of the great things about this new architecture is it's really the foundation for zero trust. So, like you talked about, the perimeter is getting breached.
And so now each and every compute node has to be protected. And I think that's what you see with the partnership between Pluribus and NVIDIA: the DPU is really the foundation of zero trust, and Pluribus is really building on that vision, enabling micro-segmentation and being able to protect each and every compute node as well as the underlying network.
>> This is super exciting. This is an illustration of how the market's evolving: architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Mike, start with you. What does the relationship look like in the go-to-market with NVIDIA?
>> Sure. I mean, we're super excited about the partnership, obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. And obviously we appreciate that NVIDIA is open. That's sort of in our DNA, we're about open networking. They've got other ISVs who are going to run on BlueField-2, and we'll probably run on other DPUs in the future, but right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it.
>> And you've got the hot product.
>> Yeah. So BlueField-2, as I mentioned, was GA last year. We now also have the converged accelerator. So I talked about artificial intelligence: artificial intelligence with the BlueField DPU, all of that put together on a converged accelerator. The nice thing there is that if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the BlueField itself.
So that's what the converged accelerator really brings to the table, and that's available now. Then we have BlueField-3, which will be available late this year, and I talked about how much better that next generation of BlueField is in comparison to BlueField-2. So we will see BlueField-3 shipping later this year. And then there's our software stack, which I talked about, called DOCA. We're on our second version, DOCA 1.2.
Sign up there if you're interested in being part of our field trial and providing feedback on the product. >> Awesome innovation in networking. Thanks so much for sharing the news, really appreciate it. Okay, in a moment we'll be back to look deeper into the product, the integration, and the security and zero-trust use cases. You're watching theCUBE, the leader in enterprise tech coverage. >> Cloud networking is complex and fragmented, slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >> Pluribus unified cloud networking provides a unified, simplified, and agile network fabric across all clouds. It brings the simplicity of a public cloud operating model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business velocity and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking: the Pluribus unified cloud fabric. This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately, the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds as well as public clouds. The unified cloud fabric is a comprehensive network solution.
It includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, and pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >> To learn more, visit www.pluribusnetworks.com. >> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. We'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, director of technical marketing at NVIDIA, joining remotely. Guys, thanks for coming on, appreciate it. >> Yeah. >> So let's do a deep dive into the what and how. Alessandro, we heard earlier about the Pluribus-NVIDIA partnership and the solution you're working on together. What is it? >> Yeah, first let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus's Netvisor ONE network operating system has been shipping in volume in multiple mission-critical networks. It runs today on merchant silicon switches, and it's effectively a standard open network operating system for the data center. The novelty of this system is that it integrates a distributed control plane for an automated, effective SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it's not closed. And this is what we're now porting to the NVIDIA DPU. >> Awesome. So how does it integrate into NVIDIA hardware? Specifically, how is Pluribus integrating its software with the NVIDIA hardware?
>> Yeah, we leverage some of the interesting properties of the BlueField DPU hardware, which allow us to integrate our software, our network operating system, in a manner that is completely isolated from, and independent of, the guest operating system. The first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor or OS layer running on the host. Even more, we can also manage this network node, the switch-on-a-NIC, completely independently from the host: you don't have to go through a network operating system running on x86 to control this network node. So you effectively get the experience of a top-of-rack switch for a virtual machine or for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, you're now connecting a VM's virtual interface to a virtual interface on the switch-on-a-NIC. And as part of this integration, we put a lot of effort and emphasis into accelerating the entire data plane for networking and security, taking advantage of the NVIDIA DOCA API to program the accelerators. This accomplishes two things. Number one, you have much greater performance than running the same network services on an x86 CPU. And second, it gives you the ability to free up around 20-25% of the server capacity, which can be devoted to additional workloads to run your cloud applications; or, if you want to run the same number of compute workloads, you can shrink the power and compute footprint of your data center by 20%. So there are great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely.
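The 20-25% figure above can be made concrete with back-of-the-envelope arithmetic. The fleet size and core counts below are illustrative assumptions, not measurements from the product:

```python
# Illustrative back-of-the-envelope: CPU capacity reclaimed by offloading
# networking/security processing to the DPU. All inputs are assumptions.

def reclaimed_cores(servers, cores_per_server, offload_fraction):
    """Cores freed across a fleet when a fraction of each server's CPU
    previously spent on networking/security moves onto the DPU."""
    return servers * cores_per_server * offload_fraction

fleet = 100                 # servers in the cluster (assumed)
cores = 64                  # cores per server (assumed)
freed = reclaimed_cores(fleet, cores, 0.20)   # low end of the 20-25% range
print(f"{freed:.0f} cores freed, roughly {freed / cores:.0f} servers' worth")
```

At these assumed numbers, the low end of the range already returns the equivalent of twenty whole servers to the application tier.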
There is zero code running on the x86, and we think this enables a very clean demarcation between compute and network. >> So Pete, I've got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Everyone's talking DevSecOps right now, and you've got NetOps and NetSecOps; why is this clean separation important? >> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all rainbows and unicorns, but it's a little messier than that. A lot of the DevOps mentality and philosophy has a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and network operators have always had a very different approach to things than compute operators. I think we in the networking industry have gotten closer together, but there's still a gap, still some distance, and that distance isn't going to be closed. So again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is. >> Yeah, that's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly.
And I think that's where the policy side comes in, the security and zero-trust aspect of this, right? If you get it wrong on the network side, all of a sudden you can totally open up those capabilities. So security is part of it. But the other part is thinking about this at scale: we're taking one top-of-rack switch and adding up to 48 servers per rack, so the ability to automate, orchestrate, and manage at scale becomes absolutely critical. >> Alessandro, this is really the why we're talking about here, and it's scale, and again, getting it right. If you don't get it right, you're going to be in real trouble. So this is a huge deal: networking matters, security matters, automation matters, DevOps and NetOps all coming together with clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. I think with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one: what are we really unifying? If we're unifying something, that something must be at least fragmented or disjointed, and what is disjointed is the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers; you build your IP Clos fabric with leaf-spine topologies. This is actually a well-understood problem.
I would say there are multiple vendors with similar technologies, very well standardized, well understood, and almost a commodity these days when it comes to building an IP fabric. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. The services have actually moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer and deploy segmentation and security closer to the workloads. And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other and are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs of an ESXi environment, a Hyper-V environment, or a Xen environment are completely disjointed. You have multiple orchestration layers. And then when you throw Kubernetes into this type of architecture, you're introducing yet another level of networking, and when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually stacking up multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. We're trying to attack this problem first with the notion of a unified fabric, which is independent of any workload, whether the fabric spans a switch, which can be connected to a bare-metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network.
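The "one API, one common control plane" idea can be pictured as a thin facade over per-environment backends. The sketch below is hypothetical Python; none of these classes are real Pluribus or NVIDIA APIs, it only illustrates how a single segmentation call could fan out to otherwise disjoint environments:

```python
# Hypothetical sketch of a unified fabric API (NOT real Pluribus/NVIDIA code).
# One segmentation call fans out to otherwise disjoint enforcement points.

class Backend:
    def __init__(self, name):
        self.name = name

    def apply(self, policy):
        # In a real fabric this would program a switch, a DPU, or a vSwitch.
        return f"{self.name}: applied {policy}"

class UnifiedFabric:
    """Single control point spanning switches, DPUs, and hypervisor vSwitches."""
    def __init__(self, backends):
        self.backends = backends

    def segment(self, policy):
        # One command, pushed consistently to every enforcement point.
        return [b.apply(policy) for b in self.backends]

fabric = UnifiedFabric([Backend("leaf-switch"), Backend("dpu-host-12"),
                        Backend("esxi-vswitch")])
for line in fabric.segment("deny web->db"):
    print(line)
```

The design point being illustrated: the operator issues one policy, and the fabric, not the operator, worries about the per-environment differences.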
That's probably the number one problem we're solving. >> You know, it's interesting: I hear you talking about one network with different operating models, and it reminds me of the old serverless days. There are still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem you're working on, is that right? I'm just joking with serverless and networkless, but the idea is it should be one thing. >> Yeah, effectively what we're trying to do is recompose this fragmentation in network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as the ways of building physical networks and cloud fabrics with IP protocols and the internet have been standardized, you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is distributing security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can integrate them directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically how people segment and secure traffic in the cloud today. >> Awesome. Pete, all kidding aside about networkless and serverless, fun play on words there, the network is one thing; it's basically distributed computing, right?
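Earlier, Alessandro described Kubernetes overlays stacking on VM overlays on the physical fabric. One concrete cost of that stacking is encapsulation overhead; this sketch just does the MTU arithmetic, using the standard 50-byte IPv4 VXLAN overhead (the nesting depth is an illustrative assumption):

```python
# Each VXLAN encapsulation over IPv4 adds 50 bytes:
# outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN header (8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def inner_mtu(physical_mtu, overlay_layers):
    """MTU left for the workload after stacking overlay_layers of VXLAN."""
    return physical_mtu - overlay_layers * VXLAN_OVERHEAD

# A Kubernetes overlay running on top of a VM overlay = 2 stacked layers.
print(inner_mtu(1500, 1))   # one overlay layer leaves 1450 bytes
print(inner_mtu(1500, 2))   # nested overlays leave 1400 bytes
```

Flattening the stack into one fabric-level overlay avoids paying this tax more than once.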
So I'd love to get your thoughts about this distributed security, with zero trust as the driver for the architecture you're building. Can you share in more detail why a DPU-based approach is better than the alternatives? >> Yeah, I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like servers, so I put a server inside your server." We provide Arm CPUs, memory, and network accelerators inside, completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane; it's like taking your top-of-rack switch and shoving it inside your compute node. So you have not only separation within the data plane but complete control plane separation. You have this element that the network team can now control and manage, and we're taking all the functions we used to do at the top-of-rack switch and pushing them down. And as time has gone on, we've struggled to put more and more into that network edge, and the reality is that the network edge is the compute layer, not the top-of-rack switch layer. So this provides a phenomenal enforcement point for security and policy. Outside of today's solutions around virtual firewalls, the other option is centralized appliances, and even if you can get one that can scale large enough, the question is: can you afford it? So what we end up doing is hoping that VLANs are good enough, or that the VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't financially afford an appliance that sees all of the traffic.
And now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer? Real quick, because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it talked about up the stack as well, when you start orchestrating microservices, containers, and modern applications. What's the benefit to customers with this approach? Because what I heard was more scale, more edge deployment flexibility relative to security policies, and application enablement. What does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. That brings simplicity, smaller blast radii for failures, and smaller failure domains; maintenance on the networks and systems becomes easier, and your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each of those layers, so just as in, say, a VXLAN network, where my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer, and you can run a DPU with any networking in the core. So you get extreme flexibility: you can start small and scale large. To me, the possibilities are endless. >> Yes, it's a great security control plane, and flexibility is key, as is being situationally aware of any threats or new vectors, whatever is happening in the network. Alessandro, this is a huge upside, right? You've already identified some successes with customers in your early field trials. What are they doing, and why are they attracted to the solution?
>> Yeah, the response from customers has been most encouraging and exciting for us as we continue to work on and develop this product, and we've actually learned a lot in the process. We've talked to Tier 2 and Tier 3 cloud providers, to SPs with telco-type networks, and to large enterprise customers. Let me call out a couple of examples to give you a flavor. There is a service provider, a cloud provider in Asia, who is managing a cloud where they offer services based on multiple hypervisors: native services based on Xen, but they also on-ramp workloads into the cloud based on ESXi and KVM, depending on what the customer picks from the menu. They have the problem of orchestrating across these, integrating their orchestrator with XenCenter, with vSphere, and with OpenStack to coordinate these multiple environments, and to provide security in the process they deploy virtual appliances everywhere, which adds a lot of cost and complication and eats into the server CPU. What they saw in this technology, which they actually call game changing, is the ability to remove all this complexity with a single network and to distribute the micro-segmentation service directly into the fabric. Overall, they're hoping to get a tremendous OpEx benefit and an overall operational simplification of their cloud infrastructure. That's one potent use case. Another customer, a large global enterprise, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors.
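Consistent micro-segmentation across hypervisors comes down to one portable, default-deny policy model. This toy evaluator, with workload tags and rules invented purely for illustration, sketches that logic:

```python
# Toy default-deny segmentation policy: traffic is dropped unless an
# explicit rule allows the (source-tag, dest-tag, port) tuple.
# Tags and rules here are made up for illustration.
ALLOW_RULES = {
    ("web", "app", 8080),   # web tier may reach app tier on 8080
    ("app", "db", 5432),    # app tier may reach the database
}

def allowed(src_tag, dst_tag, port):
    """Zero-trust check: deny by default, allow only explicit tuples."""
    return (src_tag, dst_tag, port) in ALLOW_RULES

print(allowed("web", "app", 8080))  # True: explicitly allowed
print(allowed("web", "db", 5432))   # False: default deny limits blast radius
```

Because the policy is expressed in workload tags rather than hypervisor-specific constructs, the same rules can, in principle, be enforced identically wherever the workload lands.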
>> So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few customers in the early field trial program whose main goal is to harmonize network operations. They typically handle all their VNFs with their own homegrown DPDK stack, which is overly complex and, frankly, also slow and inefficient, and on top of that they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the BlueField DPUs. Those are just some examples. >> Those are great use cases, and I see a lot more potential with unified cloud networking. Great stuff. Pete, shout out to you guys at NVIDIA; we've been following your success for a long time as you continue to innovate as cloud scales, and Pluribus here with unified networking is bringing it to the next level. Great to have you both on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? This is an architectural shift; people are working on this problem, trying to think about multiple clouds, about unification around the network, and about giving more security and flexibility to their teams. How can people learn more? >> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference, the week of March 21st through 24th. You can register for free at nvidia.com/gtc, and you can also watch the recorded sessions on YouTube a little bit after the fact.
We're going to dive a little more into the specifics and details of what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form; Pluribus will contact them to share more, or they can sign up for the actual early field trial program, which starts at the end of April. >> Okay, we'll leave it there. Thanks to you both for joining, appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group (ESG). I'm John Furrier with theCUBE. Thanks for watching. >> Okay. We've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure. Now let's get the perspective from an independent analyst, and to do so we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >> Oh, thanks for having me. It's great to be here. >> Yeah. So this idea of a unified cloud networking approach: how serious is it, and what's driving it? >> Yeah, there are certainly a lot of drivers behind it, but first and foremost is the fact that application environments are becoming a lot more distributed. The IT pendulum tends to swing back and forth, and we're definitely on a swing from consolidated to distributed. Applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations, and as a result what you're seeing is a lot of complexity. Organizations are having to deal with this highly disparate environment; they have to secure it, they have to ensure connectivity to it, and all of that is driving up complexity.
In fact, when we asked about network complexity in one of our surveys last year, more than half (54%) came out and said, hey, our network environment is now either more or significantly more complex than it used to be. And as a result, it's really impacting agility. Everyone's moving to these modern application environments and distributing them to improve agility, yet it's creating more complexity, so it runs counter to that and, really, counter to their overarching digital transformation initiatives. From what we've seen, nine out of ten organizations today are either beginning, in process, or mature in a digital transformation initiative, and when you look at their top goals, it probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity, so now I need tools that will help me drive operational efficiency and a better experience. >> I love how you bring in the data; ESG does a great job with that. The question is: is it about just unifying existing networks, or is there a need to rethink, to do over, how networks are built? >> Yeah, that's a really good point, because certainly unifying networks helps; driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and given the impact that's having, it's really about bringing in new frameworks and new network architectures to accommodate those new application architectures. By that, I mean that these modern application architectures, microservices and containers, are driving a lot more east-west traffic.
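The east-west growth Bob describes is combinatorial: with n services, the number of potential directed service-to-service paths grows as n(n-1). A quick illustration (the service counts are arbitrary):

```python
# Potential directed service-to-service paths among n microservices.
# With one app per server, n was small; with microservices it explodes.
def eastwest_paths(n):
    return n * (n - 1)

print(eastwest_paths(10))    # 90 possible paths among just 10 services
print(eastwest_paths(500))   # 249500: why per-host enforcement matters
```

This quadratic growth is why a handful of perimeter chokepoints cannot realistically observe or police modern east-west traffic.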
In the old days it was easier: north-south traffic coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other and users communicating with them, so there's a lot more traffic, and a lot of it is taking place within the servers themselves. The other issue you're starting to see is on the security side: when we were all consolidated, we had those perimeter-based legacy castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right? When everything's spread out, that no longer holds. So we're absolutely seeing organizations trying to make a shift, and much like the shift we're seeing with all the remote workers and the SASE framework to enable a secure framework there, it's almost the same thing: we're seeing this distributed services framework come up to support the applications better within the data centers and the cloud data centers, so you can drive security closer to those applications and make sure they're fully protected. That's really driving a lot of the zero-trust stuff you hear, right? Never trust, always verify; make sure everything is really secure. Micro-segmentation is another big area: ensuring that when these applications are connected to each other, they're fully segmented out. That's because if someone does get a breach, if they are in your data center, you want to limit the blast radius, limit the amount of damage that's done; by doing that, you make it a lot harder for them to see everything that's in there. >> You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy.
You build a moat to protect the queen and the castle, but the queen has left the castle; it's all distributed. So how should we think about this Pluribus and NVIDIA solution? There's a spectrum, so help us understand it: you've got appliances, you've got pure software solutions, and you've got what Pluribus is doing with NVIDIA. >> Yeah, absolutely. As organizations recognize the need to distribute their services closer to the applications, they're trying different models. From a legacy approach, on the security side, they've got centralized firewalls deployed within their data centers. The hard part is that if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. With the need for agility and performance, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections and more and more appliances, so it can get very costly as well as impacting performance. The other way organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. That's a great approach in that it brings security really close to the applications, but you start running into a couple of things there. One is that the DevOps teams start taking on networking and security responsibility, which they don't want to do, and the operations teams lose a bit of visibility into it. Plus, when you load the software onto the server, you're taking up precious CPU cycles, so if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it.
So when we think about all those types of things, certainly one side effect is the impact on performance, but there's also a cost: if you have to buy more servers because your CPUs are being consumed, and you have hundreds or thousands of servers, those costs are going to add up. What NVIDIA and Pluribus have done by working together is to take some of those services and deploy them onto a SmartNIC: deploying the DPU-based SmartNIC into the servers themselves. Then Pluribus has come in and said, we're going to create that unified fabric across the networking space, extending those networking services all the way down to the server. The benefits of that are pretty clear: you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside the server to a different rack somewhere else in the data center, so your performance is optimized as well; you don't incur a latency hit for every round trip to the firewall and back. I think all those things are really important. Plus, from an organizational aspect, we talked about the DevOps and NetOps teams: the network operations teams can now work with the security teams to establish the security and networking policies, so the DevOps teams don't have to worry about it. Essentially, they just create the guardrails and let the DevOps teams run, because that's what they want: agility and speed. >> Yeah, your point about CPU cycles is key. It's estimated that 25 to 30% of CPU cycles in the data center are wasted: cores wasted doing storage, networking, or security offload.
And I've said many times, everybody needs a Nitro, like Amazon's Nitro, but you can only get Amazon's Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this? >> Yeah, that's a great analogy to think about this, and I would take it a step further, because it's almost the opposite end of the spectrum: Pluribus and NVIDIA are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services, working with NVIDIA, who is also open, so organizations can take advantage not only of these distributed services but also of that unified networking fabric, that unified cloud fabric, across the environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now with the older application environments and the older server environments, is providing that unified networking experience across a host of different types of servers and platforms. So you can support not only the modern applications but also the legacy environments: bare metal, any type of virtualization, containers, et cetera. It's a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >> So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >> Yeah. Think what it does, again, for operational efficiency: when you're going from a legacy environment to a modern environment, it helps you accelerate the migration because you're not switching between different management systems to accomplish it.
You've got the same unified networking fabric that you've been working with, to enable you to run your legacy as well as transfer over to those modern applications. Okay. >> So your people are comfortable with the skillsets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >> So yeah, I think obviously with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk on these organizations to be able to get not only security, but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server across to multiple different environments, in different cloud environments, is certainly going to help organizations drive that operational efficiency. It's going to help them save money, for visibility, for security, and even open networking. So a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >> Bob, thanks so much for coming in and sharing your insights. Appreciate it. >> You're welcome. Thanks. >> Thanks for watching the program today. Remember, all these videos are available on demand at thecube.net. You can check out all the news from today at siliconangle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, we'll see you next time.

Published Date : Mar 16 2022



Alessandro Barbieri and Pete Lumbis


 

>> Mhm. Okay, we're back. I'm John Furrier with theCUBE. We're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, the director of technical marketing at NVIDIA. Remotely, guys, thanks for coming on. Appreciate it. >> Thank you. >> So, deep dive. Let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working together on. What is it? >> Yeah. First, let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping, uh, in volume, in multiple mission-critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standards-based, open network operating system for the data center. Um, and the novelty about this operating system is that it integrates a distributed control plane for an automated, effective SDN overlay. This automation is completely open and interoperable, and extensible to other types of clouds; nothing is closed, and this is actually what we're now porting to the NVIDIA DPU. >> Awesome. So how does it integrate into NVIDIA hardware? And specifically, how is Pluribus integrating its software with NVIDIA hardware? >> Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which allows us actually to integrate our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, um, we can also independently manage this network node, this switch-on-a-NIC, effectively, uh, completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you truly have the experience, effectively, of a top-of-rack for virtual machines or a top-of-rack for Kubernetes pods, where instead of, if you allow me the analogy, connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also as part of this integration, we put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the DOCA, uh, NVIDIA DOCA API to program the accelerators, and this accomplishes two things. Number one, you have much greater performance, much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20-25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely. There is zero code from Pluribus running on the x86, and this is why we think this enables a very clean demarcation between compute and network. >> So, Pete, I gotta get you in here. We heard that the DPUs enable cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everybody's talking DevSecOps right now; you've got NetOps, NetSecOps. This separation, why is this clean separation important?
>> Yeah, I think it's, uh, you know, it's a pragmatic solution, in my opinion. Um, we wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think a lot of the DevOps stuff, uh, that mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and that distance isn't going to be closed anytime soon. So again, it comes down to pragmatism, and I think, you know, one of my favorite phrases is, look, good fences make good neighbors. And that's what this is. >> Yeah, it's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So with infrastructure as code, you know, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction. And this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly. And I think that's where, from the policy, the security, the zero-trust aspect of this, right, if you get it wrong on that network side, all of a sudden you can totally open up those capabilities, and so security is part of that. But the other part is thinking about this at scale, right? So we're taking one top-of-rack switch and adding, you know, up to 48 servers per rack, and so that ability to automate, orchestrate, and manage at scale becomes absolutely critical.
>> Alessandro, this is really the why we're talking about here, and this is scale and, again, getting it right. If you don't get it right, you're going to be in real trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA gets into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. What are we really unifying? If you unify something, that something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well-understood problem, I would say. Um, there are multiple vendors with similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose.
Um, for example, the networking APIs between an ESXi environment or Hyper-V or a Xen environment are completely disjointed. You have multiple orchestration layers, and then when you throw in also Kubernetes in this type of architecture, uh, you're introducing yet another level of networking. And when Kubernetes runs on top of the VMs, which is a prevalent approach, you actually just stack multiple networks on the compute layer, and they eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload. This fabric spans from the switch, which can be connected to a bare-metal workload, all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's probably number one. >> You know, it's interesting. I hear you talking, I hear one network, different operating models. It reminds me of the old serverless days. You know, there's still servers, but they called it serverless. Is there going to be a term networkless? Because at the end of the day, it should be one network, not multiple operating models. This is like a problem that you guys are working on. Is that right? I mean, I'm just joking: serverless, networkless. But the idea is it should be one thing. >> Yeah, effectively, what we're trying to do is recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols on the internet,
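The "one API, one control plane" idea above can be made concrete with a toy sketch. All names here are hypothetical illustrations, not Pluribus's actual API; the point is simply that a single policy object is defined once and fanned out to every enforcement point, instead of being re-expressed per hypervisor:

```python
from dataclasses import dataclass

@dataclass
class SegmentationPolicy:
    """One policy definition, applied everywhere in the fabric."""
    name: str
    allow_from: str   # source workload group
    allow_to: str     # destination workload group
    port: int

def apply_everywhere(policy: SegmentationPolicy, endpoints: list[str]) -> list[str]:
    """Fan a single policy out to every enforcement point (switch, DPU,
    hypervisor host) instead of configuring each environment separately."""
    return [f"{ep}: applied {policy.name} ({policy.allow_from} -> "
            f"{policy.allow_to}:{policy.port})" for ep in endpoints]

# One command, four otherwise-disjoint enforcement points:
fabric = ["tor-switch-1", "esxi-host-dpu", "kvm-host-dpu", "baremetal-dpu"]
results = apply_everywhere(SegmentationPolicy("web-to-db", "web", "db", 5432), fabric)
print(len(results))  # -> 4, one confirmation per enforcement point
```

The contrast with the "ships in the night" status quo is that, without the unified control plane, each of those four endpoints would need its own policy language and its own orchestration layer.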
And you don't have that kind of, uh, operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and, uh, we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, that is typically the way people today segment and secure the traffic in the cloud. >> All kidding aside about networkless; serverless is kind of a fun play on words there. The network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why a DPU-based approach is better than alternatives? >> Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like a server, so I put a server inside your server." Uh, and so we provide, you know, Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. So the actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage.
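The distributed firewall service mentioned above can be sketched as a tiny rule-evaluation loop that runs at every host's edge rather than at one central appliance. This is a toy model with made-up rule names, not actual DPU firmware logic:

```python
RULES = [
    # (src_group, dst_group, dst_port, action) -- evaluated in order
    ("web", "db", 5432, "allow"),
    ("any", "db", None, "deny"),   # default-deny everything else into the db group
]

def evaluate(src_group: str, dst_group: str, dst_port: int) -> str:
    """Match a flow against the local rule table; first match wins.
    With micro-segmentation, this check runs at every host edge,
    so even east-west traffic inside one rack is inspected."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in (src_group, "any") and rule_dst == dst_group:
            if rule_port is None or rule_port == dst_port:
                return action
    return "allow"  # flows not covered by the table pass through

print(evaluate("web", "db", 5432))  # -> allow
print(evaluate("app", "db", 22))    # -> deny
```

The interesting property is not the rule matching itself, which any firewall does, but where it executes: at the host boundary, the flow is blocked before it ever reaches the fabric.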
But we're taking all of the functions we used to do at the top-of-rack switch, and we distribute them now. And, you know, as time has gone on, we've struggled to put more and more into that network edge. And the reality is, the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, um, the other option is centralized appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that a VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically, financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer, real quick? I think this is an interesting point. You mentioned policy; everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start to orchestrate microservices and whatnot, all that good stuff going on there, containers and whatnot, and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. I mean, what does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. So that makes for simplicity, smaller blast radius for failures, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier.
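The affordability argument above is easy to quantify. A rough sketch (illustrative numbers, not measurements) comparing the inspection capacity a centralized appliance must be sized for against what each device handles in a distributed model:

```python
def centralized_capacity_gbps(racks: int, servers_per_rack: int,
                              east_west_gbps_per_server: float) -> float:
    """A central firewall must be sized for the aggregate east-west
    traffic of every server whose flows it inspects."""
    return racks * servers_per_rack * east_west_gbps_per_server

def distributed_capacity_gbps(east_west_gbps_per_server: float) -> float:
    """With enforcement at each host edge, every device only inspects
    its own host's traffic, so required per-device capacity stays flat
    as the fleet grows."""
    return east_west_gbps_per_server

# 20 racks x 48 servers, 10 Gbps of east-west traffic each:
print(centralized_capacity_gbps(20, 48, 10.0))  # -> 9600.0 (aggregate Gbps)
print(distributed_capacity_gbps(10.0))          # -> 10.0 (Gbps per device)
```

This is the "can you afford it" point in one line: the centralized number grows with the fleet, the distributed number does not.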
Um, and again, you know, we always want to kind of separate each one of those layers. So, just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together. I can now do this at a different layer, and so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. Um, you know, to me the possibilities are endless. >> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever is happening in the network. Alessandro, this is a huge upside, right? You've already identified some, uh, successes with some customers on your early field trials. What are they doing, and why are they attracted to the solution? >> Yeah, I think the response from customers has been the most encouraging and exciting part for us, uh, to sort of continue working on and developing this product. And we have actually learned a lot in the process. Um, we talked to two or three cloud providers, we talked to SPs, um, sort of telco types of networks, uh, as well as large enterprise customers. Um, let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider, in Asia who is actually managing a cloud where they are offering services based on multiple hypervisors: their native services based on Xen, but they also, um, ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu.
And they have the problem of now orchestrating, through their orchestrator, or integrating with Xen Center, with vSphere, with OpenStack, to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats into the service availability. The promise that they saw in this technology, which they actually call game-changing, is to remove all this complexity, with a single network, and distribute the micro-segmentation service directly into the fabric. And overall, they're hoping to get out of it a tremendous OpEx benefit and overall operational simplification for the cloud infrastructure. That's one important use case. Um, another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver. Security looks like a recurring theme talking to most of these customers. And in the telco space, um, we're working with a few telco customers on the EFT program, uh, where the main goal is actually to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is, frankly, also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >> There's a great use case, and a lot more potential. I see that with the unified cloud networking. Great stuff, shout-out to you guys at NVIDIA; we've been following your success for a long time, and you're continuing to innovate as cloud scales, and Pluribus here with unified networking.
Kind of bringing it to the next level. Great stuff. Great to have you guys on, and again, software keeps, uh, driving the innovation. Networking is just part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security, more flexibility to their teams. How can people learn more? >> And so, uh, Alessandro and I have a talk at the upcoming NVIDIA GTC conference, so it's the week of March 21st through the 24th. Um, you can go and register for free at nvidia.com/gtc. Um, you can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact. Um, and we're going to dive a little bit more into the specifics and the details and what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, so people can go to the Pluribus website, www.pluribusnetworks.com/eft, and they can fill out the form, and, uh, Pluribus will contact them to either know more or actually to sign up for the actual early field trial program, which starts at the end of April. >> Okay, well, we'll leave it there. Thank you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching.

Published Date : Mar 4 2022



Mike Capuano and Ami Badani


 

>> Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking, marketing, and developer ecosystem at NVIDIA. Great to have you. Welcome, folks. >> Thank you. >> Thanks. >> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have? Mike, let's get into it. >> Yeah, really, you know, the challenges we're looking at are for non-hyperscalers: that's enterprises, governments, Tier 2 service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately, they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down; it's only getting worse, and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> Okay. With that goal in mind, what's the Pluribus vision? How does this tie together? >> Yeah, so basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are sort of discrete, bespoke cloud networks, you know, per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks; that needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all of those locations with one command, and not have to go to each one.
The second is, like I mentioned, distributed security. Distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. It's sort of like with security: you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction: abstract the complexity of all these discrete networks, whatever's down there in the physical layer. I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized? >> That's a great question, and I appreciate the tee-up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like NVIDIA long term. And that is about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds, whether that's underlay and overlay, switch or server, any hypervisor infrastructure, containers, any workload. It doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier.
The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our Adaptive Cloud Fabric. And what's nice about this is we're not starting from scratch. We have an award-winning Adaptive Cloud Fabric product that is deployed globally, and in particular we're very proud of the fact that it's deployed in over a hundred Tier 1 mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is we're extending from the switch to this NVIDIA BlueField-2 DPU. >> Hold that up real quick. That's a good prop. That's the BlueField, from NVIDIA. >> It's the NVIDIA BlueField-2 DPU, data processing unit. And what we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPUs from the application. So a DPU is a perfect way to implement this, and we knew that NVIDIA was the leader with this BlueField-2. And so that is the first step in getting to realizing this vision. >> NVIDIA has always been powering some great workloads with GPUs. Now you've got DPUs, networking, and NVIDIA is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> So we've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition, with what Pluribus is trying to build and what NVIDIA has. So we have this concept of a BlueField data processing unit, which if you think about it conceptually does really three things: offload, accelerate, and isolate.
So, offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate: there's a bunch of acceleration engines, so you can run infrastructure workloads much faster than you would otherwise. And then isolate: you have this nice security isolation between the data processing unit and your other CPU environment, and so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple of years ago, and with Pluribus, we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric, fits really nicely with the DPU: running that on the DPU and extending it from your physical switch all the way to your host environment, specifically on the data processing unit. So think about what's happening as you add data processing units to your environment. Every server, we believe, over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started with extending the network fabric, and now we're also working with them on security. So if you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation.
And so now you can take that, extend it to the data processing unit, and really have isolated, micro-segmented workloads, whether it's bare metal, cloud native environments, virtualized environments, whether it's public cloud, private cloud, hybrid cloud. So it really is a magical partnership between the two companies, with their unified cloud fabric running on the DPU. >> What I love about this conversation is it reminds me of when you have these changing markets: the product gets pulled out of the market, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart? What's in it for the customer? >> So I mentioned three things in terms of the value that BlueField brings: there's offloading, accelerating, and isolating. Those are the key core tenets of BlueField. So in terms of the differentiation, we're really a robust platform for innovation. We introduced BlueField-2 last year. We're introducing BlueField-3, which is our next generation of BlueField. It will have 5x the Arm compute capacity, 400-gig line rate acceleration, 4x better crypto acceleration. So it will be remarkably better than the previous generation, and we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is, if you look at NVIDIA, what we're known for is really our AI, artificial intelligence, and our artificial intelligence software, as well as our GPUs. So you look at artificial intelligence and the combination of artificial intelligence plus data processing.
This really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. And so with NVIDIA, we really have these converged accelerators, where we've combined the GPU, which does all your AI processing, with your data processing with the DPU. So we have this really nice convergence of that area. And I would say the third area is really around our developer environment. One of our key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, we've created an SDK, an open SDK called DOCA, for our partners to really build and develop solutions using BlueField, using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. >> You know what's exciting is, when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment, Supercloud, or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools, right? And it's all, again, this new architecture, Mike, you were talking about. How do customers run this effectively and cost-effectively, and how do people migrate? >> I think that is the key question, right? So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and Tier 2 service providers and Tier 1 service providers and governments are not Amazon, right? So they need to migrate there, and they need this architecture to be cost-effective. And that's super key.
I mean, the reality is DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs, right? IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right? And by leveraging the NVIDIA BlueField DPU, what we really like about it is it's open, and that drives cost efficiencies. And then with our architectural approach, effectively you get a unified solution across switch and DPU, workload independent, doesn't matter what hypervisor it is, integrated visibility, integrated security. And that can create tremendous cost efficiencies and really extract a lot of the expense from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time. >> All right, let me rewind that, because that's super important. I get the unified cloud architecture. I'm the customer, and it's implemented. What's the value again? Take me through the value to me. I have a unified environment. What's the value? >> So there are a few pieces of value. The first piece of value is I'm creating this clean demarc. I'm taking networking to the host, and like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the DevOps team, who owns the server, and the NetOps team, who owns the network, because they're installing software on the CPU, stealing cycles from what should be revenue-generating CPUs.
So now, by terminating the networking on the DPU, we create this really clean demarc. The DevOps folks are happy because they don't necessarily have the skills to manage the network, and they don't necessarily want to spend the time managing networking. They've got their network counterparts, who are also happy, the NetOps team, because they want to control the networking. And now we've got this clean demarc where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential. I mentioned earlier pushing out micro-segmentation and distributed firewall basically at the application level, right, where I create these small segments on an application-by-application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Because the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc, right? And so that's why this is so essential. And the next benefit obviously is this unified networking operating model, right? Having one operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example. >> Awesome. And I think also from my standpoint, I mean, perimeter security, the firewall is still out there, it exists, but the perimeter is being breached all the time. So you have to have this new security model. And I think the other thing that you mentioned, the separation between DevOps, is cool, because infrastructure as code is about making the developers be agile and build security in from day one. So this policy aspect is huge: new control points.
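The micro-segmentation behavior described above, small per-application segments with default-deny between them, can be illustrated with a toy policy check. This is a hedged sketch, not Pluribus's implementation: the segment map, rule set, and function names are all hypothetical.

```python
# Toy illustration of application-level micro-segmentation (hypothetical
# names, not a real product API): traffic is allowed only within a workload's
# own segment or over an explicit rule, so a bad actor who breaches one
# application stays contained instead of moving laterally.

SEGMENTS = {
    "web-01": "web", "web-02": "web",   # web tier
    "db-01": "db",                      # database tier
}
ALLOW_RULES = {("web", "db")}           # web tier may initiate to db tier

def allowed(src: str, dst: str) -> bool:
    s, d = SEGMENTS[src], SEGMENTS[dst]
    # Same segment, or an explicit rule: allow. Everything else: default deny.
    return s == d or (s, d) in ALLOW_RULES

print(allowed("web-01", "db-01"))   # True: explicit web -> db rule
print(allowed("db-01", "web-01"))   # False: no rule for db -> web, denied
```

The containment property Mike describes falls out of the default-deny: a compromised `db-01` cannot reach back into the web tier unless an operator explicitly allows it.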
I think you guys have a new architecture that enables the security to be handled more flexibly, right? That seems to be the killer feature. >> Right. If you look at the data processing unit, I think one of the great things about this new architecture is it's really the foundation for zero trust. So like you talked about, the perimeter is getting breached, and so now each and every compute node has to be protected. And I think that's what you see with the partnership between Pluribus and NVIDIA: the DPU is really the foundation of zero trust, and Pluribus is building on that vision by allowing micro-segmentation and being able to protect each and every compute node as well as the underlying network. >> And this is an illustration of how the market's evolving: architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Mike, start with you. What does the relationship look like in the go-to-market with NVIDIA? >> Sure. We're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. Obviously we appreciate that NVIDIA is open; that's in our DNA, we're about open networking. They've got other ISVs who are going to run on BlueField-2. We're probably going to run on other DPUs in the future, but right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it. >> NVIDIA's got the hot product. >> Yeah. So BlueField-2, as I mentioned, was GA last year. We now also have the converged accelerator. So I talked about artificial intelligence, our artificial intelligence software, with the BlueField DPU, all of that put together on a converged accelerator.
The nice thing there is you can run those workloads either way. So if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. So that's available now. Then we have BlueField-3, which will be available late this year, and I talked about how much better that next generation of BlueField is in comparison to BlueField-2. So we will see BlueField-3 shipping later this year. And then our software stack, which I talked about, which is called DOCA: we're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 in about two months from now. And so that's really our open ecosystem framework that allows you to program the BlueFields. So we have all of our acceleration libraries and security libraries, and that's all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. So as we add new generations of BlueField, next year we'll have another version and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of BlueField. So that's the nice thing around DOCA. And then in terms of our go-to-market model, we're working with every major OEM. So later this year you'll see major server manufacturers releasing BlueField-enabled servers, so more to come. >> Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations.
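The "write to the SDK once" point can be illustrated with a toy sketch. To be clear, this is not the real DOCA API; it is only an assumed-shape illustration of why a stable SDK surface decouples partner code from hardware generations.

```python
# Illustrative sketch only -- NOT the real DOCA API. It shows the design idea
# behind a stable SDK layer: partners call one unchanging function, and the
# SDK dispatches to whichever hardware generation's engine is present.

class Gen2Engine:
    def compress(self, data: bytes) -> bytes:
        return data  # stand-in for a BlueField-2-era acceleration engine

class Gen3Engine:
    def compress(self, data: bytes) -> bytes:
        return data  # faster hardware underneath, same behavior and interface

def sdk_compress(data: bytes, engine) -> bytes:
    """Partner-facing call: unchanged no matter which generation is installed."""
    return engine.compress(data)

# The same partner code runs against either generation, unmodified:
for engine in (Gen2Engine(), Gen3Engine()):
    assert sdk_compress(b"payload", engine) == b"payload"
print("partner code unchanged across generations")
```

The forward/backward compatibility Ami describes is exactly this separation: new engines slot in behind the stable call, so partner code written once keeps working.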
>> And one thing I'll add is we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial, starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up for being part of our field trial and providing feedback on the product. >> Awesome. Innovation in networking. Thanks so much for sharing the news. Really appreciate it. >> Thanks so much. >> Okay, in a moment, we'll be back to dig deeper into the product, the integration, security, zero trust, use cases. You're watching theCUBE, the leader in enterprise tech coverage.

Published Date: Mar 4, 2022



Craig Nunes & Tobias Flitsch, Nebulon | CUBEconversations


 

(upbeat intro music) >> More than a decade ago, the team at Wikibon coined the term Server SAN. We saw the opportunity to dramatically change the storage infrastructure layer and predicted a major change in technologies that would hit the market. Server SAN had three fundamental attributes. First, it was software led: all the traditionally expensive controller functions, like snapshots and clones and dedupe and replication, compression, encryption, et cetera, were done in software, directly challenging a two to three decade long storage controller paradigm. The second principle was it leveraged and shared storage inside of servers. And the third, it enabled any-to-any topology between servers and storage. Now, at the time we defined this coming trend in a relatively narrow sense, inside of a data center location, for example, but in the past decade two additional major trends have emerged. First, the software-defined data center became the dominant model, thanks to VMware and others. And while this eliminated a lot of overhead, it also exposed another problem. Specifically, data centers today allocate, we estimate, around 35% of CPU cores and cycles to managing things like storage and network and security, offloading those functions. This is wasted cores, and doing this with traditional general purpose x86 processors is expensive and inefficient. This is why we've been reporting so aggressively on Arm's ascendancy into the enterprise. It's not only coming, it's here, and we're going to talk about that today. The second mega trend is cloud computing. Hyperscale infrastructure has allowed technology companies to put a management and control plane into the cloud, expand beyond our narrow Server SAN scope within a single data center, and support the management of distributed data at massive scale. And today we're on the cusp of a new era of infrastructure. And one of the startups in this space is Nebulon.
Hello everybody, this is Dave Vellante, and welcome to this Cube Conversation, where we welcome in two great guests: Craig Nunes, Cube alum, co-founder and COO at Nebulon, and Tobias Flitsch, who's director of product management at Nebulon. Guys, welcome. Great to see you. >> So good to be here, Dave. Feels awesome. >> Soon, face to face. Craig, I'm heading your way. >> I can't wait. >> Craig, you heard my narrative upfront, and I'm wondering, are those the trends that you guys saw when you started the company? What are the major shifts in the world today that caused you and your co-founders to launch Nebulon? >> I'll give you the way we think about the world, which I think aligns super well with what you're talking about. Over the last several years, organizations have had a great deal of experience with public cloud data centers. And I think, like any platform or technology that gets used in a variety of ways, a bit of savvy is being developed by organizations on what do I put where, and how do I manage things in the most efficient way possible? And in terms of the types of folks we're focused on in Nebulon's business, we see three groups of people emerging, and we've simply coined three terms: the returners, the removers, and the remainers. I'll explain what I mean by each of those. The returners are folks who maybe early on hit the gas on cloud, moved a lot in, and realized that while it's awesome for some things, for other things it was less optimal. Maybe cost became a factor, or visibility into what was going on with their data was a factor, security, service levels, whatever. And they've decided to move some of those workloads back. Returners. Then there are what I call the removers, that are taking workloads born in the cloud
to on-prem. And this was talked about a lot in Martin's blog, which discussed the growth companies that built up such a large footprint in the public cloud that economics were working against them. Depending on the knobs you turn, you're probably spending two and a half X, two X what you might spend if you own your own factory. And you can argue about where your leverage is in negotiating your pricing with the cloud vendors, but there's a big gap. The last one, and I think probably the most significant in terms of who we've engaged with, is the remainers. And the remainers are hybrid IT organizations. They've got assets in the cloud and on-prem, and they aspire to an operational model that is consistent across everything, leveraging all the best stuff that they observed in their cloud-based assets. And it's our view, frankly taken from this constituency, that when people talk about cloud or cloud first, they're moving to something that is really more an operating model versus a destination or data center choice. So we get people on the phone every day talking about cloud first, and when you dig into what they're after, it's operating model characteristics, not which data center do I put it in. Those decisions are separating. And so it's really that focus for us where we believe we're doing something unique for that group of customers. >> Yeah. Cloud first doesn't mean cloud only. And of course, followers of this program know we talk a lot about this: the definition of cloud is changing, it's evolving. It's moving to the edge, it's moving to data centers, data centers are moving to the cloud, cross-cloud. It's that big layer that's expanding. And so I think the definition of cloud, even particularly in customers' minds, is evolving. There's no question about it.
People will look at what VMware is doing in AWS and say, okay, that's cloud, but they'll also look at things like VMware Cloud Foundation and say, oh yeah, that's cloud too. So to me, the beauty of cloud is in the eye of the customer beholder. So I buy that. Tobias, I wonder if you could talk about how this all translates into product, because you guys are a startup, you've got to sell stuff. You use this term smart infrastructure. What is that? How does this all turn into stuff you can sell? >> Right. So let me back up a little bit and talk about what we at Nebulon do. We are a cloud-based software company, and we're delivering a new category of smart infrastructure. And if you think about things that you would know from your everyday surroundings, smart infrastructure is really all around us. Think smart home technology, like Google Nest, as an example. What this all has in common is that there's a cloud control plane that is managing some IoT endpoints and smart devices in various locations. And by doing that, customers gain benefits like easy remote management, right? You can manage your thermostat, your temperature, from anywhere in the world, basically. You don't have to worry about automated software updates anymore, and you can easily automate your home, your infrastructure, through this cloud control plane. Translating this idea to the data center, the idea is not necessarily new, right? If you look into the networking space, with Meraki Networks, now Cisco, or Mist Systems, now Juniper, they've really pioneered efforts in cloud management, so smart network infrastructure. And the key problem they solved there is managing the vast number of access points and switches that are scattered across data centers and campuses.
Now, if you translate that to what Nebulon does, it's really applying this smart infrastructure idea, this methodology, to application infrastructure, specifically to compute and storage infrastructure. And that's essentially what we're doing. So with smart infrastructure, our offering at Nebulon, the product comes with the benefits of this cloud experience, the public cloud operating model. As we've talked about, some of our customers look at the cloud as an operating model rather than a destination, a physical location, and with that, we bring this model, this experience, this operating model, to on-premises application infrastructure. Really, what you get with this offering from Nebulon, the benefits, circle around four areas. First of all, rapid time to value: application owners, think people that are not specialists or experts when it comes to IT infrastructure but more generalists, can provision on-premises application infrastructure in less than 10 minutes. They can go from just bare metal physical racks to the full application stack in less than 10 minutes, so they're up and running a lot quicker and can immediately deliver services to their end customers. Second, cloud-like operations: this notion of zero-touch remote management, which over the last couple of months, with this strange time that we're in with COVID, it turns out is becoming more and more relevant, really means remote administration and management of infrastructure that scales from just hundreds of nodes to thousands of nodes, with behind-the-scenes software updates, with global AI analytics and insights, and, all combined, reducing the operational overhead of on-premises infrastructure by up to 75%. The third thing is support for any application, whether it's containerized, virtualized, or even bare metal applications.
And the idea here is really consistent: leveraging server-based storage that doesn't require any Nebulon-specific software on the server, so you get the full power of your application servers for your applications, again, as the servers were intended. And then the fourth benefit when it comes to smart infrastructure is, of course, doing this all at a lower cost and with better data center density. And that is comparing it to three-tier architectures, where you have your server, your SAN fabric, and then external storage, but also when you compare it with hyperconverged infrastructure software, which consumes resources of the application servers: think CPU, think memory and networking. So basically you get a lot more density with this approach compared to those architectures. >> Okay, I want to dig into some of that differentiation too, but what exactly do I buy from you? Do I buy a software subscription? Is that right? Can you explain that a little bit? >> Right. So basically the way we do this is by leveraging two key new innovations, and you'll see why I made the bridge to smart home technology, because the approach is similar. The first is the introduction of a cloud control plane that manages this on-premises application infrastructure; of course, that is delivered to customers as a service. The second is a new infrastructure model that uses IoT endpoint technology that is embedded into standard application servers and the storage within those application servers. Let me add a couple of words to that to explain a little bit more. Really, at the heart of smart infrastructure, in order to deliver this public cloud experience for any on-prem application, is this cloud-based control plane. So we've built this the way we would recommend our customers use a public cloud, and that is building your software on modern technologies that are vendor-agnostic.
So it could essentially run anywhere, whether it is, you know, any public cloud vendor, or in our own data centers if regulatory requirements change. It's massively scalable and responsive, no matter how large the managed infrastructure is. But really the interesting part here, Dave, is that the customer doesn't really have to worry about any of that, it's delivered as a service. So what a customer gets from this cloud control plane is a single API endpoint, just as they'd get with a public cloud. They get a web user interface from which they can manage all of their infrastructure, no matter how many devices, no matter where they are, could be in the data center, could be in an edge location anywhere in the world. They get template-based provisioning, much like a marketplace in a public cloud. They get analytics, predictive support services, and super easy automation capabilities. Now, the second thing that I mentioned is this server-embedded software, the server-embedded infrastructure software, and that is running on a PCIe-based offload engine. And that is really acting as this managed IoT endpoint within the application server that I mentioned earlier. And that approach really further converges modern application infrastructure. And it really replaces the software-defined storage approach that you'll find in hyper-converged infrastructure software, and that is by embedding the data services, the storage data services, into silicon within the server. Now, this offload engine, we call it a services processing unit, or SPU in short. And that is really what differentiates us from hyper-converged infrastructure. And it's quite different than a regular accelerator card that you get with some of the hyper-converged infrastructure offerings. And it's different in the sense that the SPU runs basically all of the shared and local data services, it's not just accelerating individual algorithms, individual functions. 
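To make the single-API-endpoint idea concrete, here is a minimal sketch in Python of what template-based provisioning against a cloud control plane could look like. The endpoint URL, paths, and field names are hypothetical, invented for illustration; they are not Nebulon's actual API.

```python
import json

API = "https://cloud.example.com/api/v1"  # hypothetical single API endpoint

def provision_request(template, hosts):
    """Build a template-based provisioning call, marketplace-style:
    pick a template, name the target servers, and POST to one endpoint."""
    return {
        "method": "POST",
        "url": f"{API}/provision",
        "body": json.dumps({"template": template, "hosts": hosts}),
    }

req = provision_request("bare-metal-to-app-stack", ["edge-nyc-01", "edge-nyc-02"])
print(req["url"])  # https://cloud.example.com/api/v1/provision
```

The point of the sketch is the shape of the model: every device, wherever it sits, is reached through the same endpoint and the same template catalog, rather than per-site consoles.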
And it basically provides all of these services alongside the CPU, with the boot drive, with data drives. And in essence it provides you with a separate fault domain from the server, so, for example, if you reboot your server, the data plane remains intact. You know, it's not impacted by that. >> Okay. So I want to stay on that for just a second, Craig, if I could. I get very clearly how you're different from, as Tobias said, the three-tier server, SAN fabric, external array. The HCI thing's interesting, because in some respects, the HCI guys, you know, take Nutanix, they talk about cloud and becoming more friendly with developers and the API piece. But what's your point of view, Craig, on how you position relative to, say, HCI? >> Yeah, absolutely. So everyone gets what three-tier architecture is and was, and HCI software, you know, emerged as an alternative to the three-tier architectures. Everyone I think today understands that data services are, you know, SDS is software hosted in the operating system of each HCI device and consumes some amount of CPU, memory, network, whatever. And it's typically constrained to a hypervisor environment, kind of where most of that stuff is done. And over time, these platforms have added some monitoring capabilities, predictive analytics, typically provided by the vendor's cloud, right? And as Tobias mentioned, some HCI vendors have augmented this approach by adding an accelerator to make things like compression and dedupe go faster, right? Think SimpliVity or something like that. The difference that we're talking about here is, the infrastructure software that we deliver as a service is embedded right into server silicon. So it's not sitting in the operating system of choice. And what that means is you get the full power of the server you bought for your workloads. It's not constrained to a hypervisor-only environment, it's OS-agnostic. 
And, you know, it's entirely controlled and administered by the cloud, versus, with most HCI, you know, an on-prem console that manages a cluster or two on-prem. And, you know, think of it from an automation perspective. When you automate something, you've got to set up your playbook kind of cluster by cluster. And depending what versions they're on, APIs are changing, behaviors are changing. So a very different approach at scale. And so again, for us, we're talking about something that gives you a much more efficient infrastructure that is then managed by the cloud and gives you this full kind of operational model you would expect for any kind of cloud-based deployment. >> You know, I've got to go back, you guys obviously have some 3PAR DNA hanging around, and you know, of course you remember well the 3PAR ASIC, it was kind of famous at the time and it was unique. And I bring that up only because you've mentioned the silicon a couple of times, and a lot of people go, yeah, whatever, but we have been on this, especially, particularly with ARM. And I want to share with the audience, if you follow my breaking analysis, you know this. If you look at the historical curve of Moore's law with x86, it's the doubling of performance every two years, roughly. That comes out to about 40% a year, and that's moderated down to about 30% a year now. If you look at the ARM ecosystem and take, for instance, the Apple A15 and the previous series, for example, over the last five years, the performance, when you combine the CPU, GPU, NPU, the accelerators, the DSPs, which by the way are all customizable, that's growing at 110% a year, and the SoC costs 50 bucks. So my point is that you guys are a perfect example of riding that curve, doing offloads with a way more efficient architecture. You're now on that curve that's growing at 100% plus per year, whereas a lot of the legacy storage is still on that 30% a year curve. And so, cheaper, lower power. 
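Dave's growth-rate comparison is straightforward compound arithmetic, and it checks out; a quick sketch:

```python
def annual_rate(total_factor, years):
    # Equivalent compound annual growth rate for a total gain over `years`.
    return total_factor ** (1 / years) - 1

# Doubling every two years works out to ~41% a year (the "about 40%" above).
x86_rate = annual_rate(2, 2)
print(round(x86_rate * 100, 1))  # 41.4

# Compounding over five years: 110%/yr vs 30%/yr.
arm_gain = (1 + 1.10) ** 5     # ~41x total
legacy_gain = (1 + 0.30) ** 5  # ~3.7x total
print(round(arm_gain, 1), round(legacy_gain, 1))  # 40.8 3.7
```

The gap compounds: over five years the 110%-per-year curve delivers roughly an order of magnitude more total gain than the 30%-per-year curve, which is the heart of the offload argument.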
That's why I loved that you were bringing in the IoT and the smart infrastructure, this is the future of storage and infrastructure. >> Absolutely. And the thing I would emphasize is it's not limited to storage. Storage is a big issue, but we're talking about your application infrastructure, and you brought up something interesting on the GPU, the SmartNIC side of things. And just to kind of level set with everybody there, there's the HCI world, and then there's this SmartNIC, DPU world, whatever you want to call it, where it's effectively a network card, it's got specialized processing onboard and firmware to provide some network, security, and storage services, and think of it as a PCIe card in your server. It connects to an external storage system, so think an NVIDIA BlueField-2 connecting to an external NVMe storage device. And the interesting thing about that is, you know, storage processing is offloaded from the server. So as we said earlier, good, right, you get the server back for your application, but storage moves out of the server. And it starts to look a little bit like an external storage approach versus a server-based approach. And infrastructure management is done by, you know, the server SmartNIC, with some monitoring and analytics coming from, you know, your supplier's cloud support service. So complexity creeps back in if you start to lose that, you know, heavily converged approach. Again, we are taking advantage of storage within the server and, you know, keeping this a real server-based approach, but distinguishing ourselves from the HCI approach, 'cause there's a real ROI there. And when we talk to folks who are looking at new and different ways, we talk a lot about the cloud, and I think we've done a bit of that already, but then at the end of the day, folks are trying to figure out, well, okay, but then what do I buy to enable this? And what you buy is your standard server recipe. 
So think your favorite HPE, Lenovo, Supermicro, whatever your brand, and it's going to come enabled with this IoT endpoint within it, so it's really a smart server, if you will, that can then be controlled by our cloud. And so you're effectively buying, you know, from your favorite server vendor, a server option that is this endpoint, and a subscription. You don't buy any of this from us, by the way, it's all coming from them. And that's the way we deliver this. >> You know, sorry to get into the plumbing, but this is something we've been on, and a facet of it: is that silicon custom designed, or is it pretty much off the shelf? Do you guys add any value to it? >> No, there are off-the-shelf options that can deliver tremendous horsepower in that form factor. And so we take advantage of that to, you know, do what we do in terms of, you know, creating these sort of smart servers with our endpoint. And so that's where we're at. >> Yeah. Awesome. So guys, what's your sweet spot? You know, why are customers, you know, what are you seeing customers adopting? Maybe some examples you guys can share? >> Yeah, absolutely. So I think Tobias mentioned that because of the architectural approach, there's a lot of flexibility there, you can run virtualized, containerized, bare metal applications. The question is where are folks choosing to get started? And those use cases with our existing customers revolve heavily around virtualization modernization. So they're going back into their virtualized environment, whether their existing infrastructure is array-based or HCI-based, and they're looking to streamline that, save money, automate more, the usual things. The second area is the distributed edge. You know, the edge is going through tremendous transformation with IoT devices, 5G, and trying to get processing closer to where customers are doing work. And so that distributed edge is a real opportunity because, again, it's a more cost-effective, more dense infrastructure. 
The cloud effectively can manage across all of these sites through a single API. And then the third area is cloud service provider transformation. We do a fair bit of business with, you know, cloud service provider CTOs who are looking at trying to build top-line growth, trying to create new services, and drive a better bottom line. And so this is really, you know, as much a revenue opportunity for them as a cost-saving opportunity. And then the last one is this notion of, you know, bringing the cloud on-prem. We've done a cloud repatriation deal, and I know you've seen a little of that, but maybe not a lot of it. And, you know, I can tell you, in our first deals we've already seen it, so it's out there. Those are the places where people are getting started with us today. >> It's just interesting, you're right. I don't see a ton of it, but if I'm going to repatriate, I don't want to go backwards. I don't want to repatriate to legacy. So it actually does kind of make sense that I repatriate to essentially a component of on-prem cloud that's managed in the cloud. That makes sense to me to buy. But today you're managing from the cloud, you're managing on-prem infrastructure. Maybe you could show us a little leg, share a little roadmap. I mean, where are you guys headed from a product standpoint? >> Right, so I'm not going to go too far out on the limb there, but obviously, right, so one of the key benefits of a cloud-managed platform is this notion of a single API, right? We talked about the distributed edge where, you know, think of a retailer that has, you know, thousands of stores, each store having local infrastructure. And, you know, if you think about the challenges that come with, you know, just administrating those systems, rolling out firmware updates, rolling out updates in general, monitoring those systems, et cetera, having a single console, a cloud console, to administrate all of that infrastructure, obviously, you know, the benefits are easy to see. 
If you think about that and spin it further, right, so from the use cases and the types of users that we've seen, and Craig talked about them at the very beginning, you can think about this as a hybrid world, right? Customers will have data in the public cloud. They will have data and applications in their data centers and at the edge. Obviously it is our objective to deliver the same experience that they gained from the public cloud on-prem, and eventually, you know, those two things can come closer together. Apart from that, we're constantly improving the data services. And as you mentioned, ARM is on a path to becoming stronger and faster, so obviously we're going to leverage that and build out our data storage services and become faster. But really the key thing that I'd like to mention all the time, and this is related to roadmap, but it's rather about feature delivery, right? So the majority of what we do is in the cloud. Our business logic is in the cloud; the capabilities, the things that make infrastructure work, are delivered in the cloud. And, you know, it's provided as a service. So compare it with your Gmail, you know, your cloud services: one day you don't have a feature, the next day you have a feature. So we're continuously rolling out new capabilities through our cloud. >> And that's about feature acceleration as opposed to technical debt, which is what you get with legacy feature creep. >> Absolutely. The other thing I would say, too, is a big focus for us now is to help our customers more easily consume this new concept. And we've already got, you know, SDKs for things like Python and PowerShell and some of those things, but we've got, I think, nearly ready, an Ansible SDK. We're trying to help folks, kind of use case by use case, spin this stuff up within their organization, their infrastructure. 
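As a sketch of what that use-case-by-use-case automation might feel like through a Python SDK, here is a toy client. The class and method names are invented for illustration; they are not the actual Nebulon SDK.

```python
class CloudClient:
    """Toy stand-in for a cloud control plane SDK client (hypothetical API)."""

    def __init__(self, sites):
        self.sites = sites  # e.g. store locations, each with local infrastructure
        self.versions = {s: "1.0" for s in sites}

    def rollout(self, version):
        # One call fans the update out to every site behind the single API;
        # no per-cluster playbooks, no per-site consoles.
        for site in self.sites:
            self.versions[site] = version
        return self.versions

client = CloudClient(["store-001", "store-002", "store-003"])
print(client.rollout("2.1"))
```

The contrast with the cluster-by-cluster playbook problem Craig described is the point: the loop lives behind the control plane's API, not in the operator's automation.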
Because again, as part of our objective, we know that IT professionals have, you know, a lot of inertia when they're, you know, moving stuff around in their own data center. And we're aiming to make this, you know, a much simpler, more agile experience to deploy and grow over time. >> We've got to go, but Craig, quick company stats. Am I correct you've raised just under 20 million? Where are you on funding? What's your headcount today? >> I am going to plead the fifth on all of that. >> Oh, okay. Keep it stealth. Staying a little stealthy, I love it. Really excited for you. I love what you're doing. It's really starting to come into focus. And so congratulations. You know, you've got a ways to go, but Tobias and Craig, appreciate you coming on theCube today. And thank you for watching this Cube Conversation. This is Dave Vellante. We'll see you next time. (upbeat outro music)

Published Date : Jul 15 2021


Krish Prasad and Manuvir Das | VMworld 2020


 

>> Narrator: From around the globe, it's theCube. With digital coverage of VMworld 2020. Brought to you by VMware and its ecosystem partners. >> Hello, and welcome back to theCube's virtual coverage of VMworld 2020. I'm John Furrier, host of theCube. VMworld's not in person this year, it's on the virtual internet. A lot of content, check it out, vmworld.com, a lot of great stuff, online demos, and a lot of great keynotes. Here we've got a great conversation to unpack: NVIDIA, AI, and all things Cloud Native. With Krish Prasad, who's the SVP and GM of the Cloud Platform Business Unit, and Manuvir Das, head of enterprise computing at NVIDIA. Gentlemen, great to see you virtually. Thanks for joining me on the virtual Cube, for the virtual VMworld 2020. >> Thank you John. >> Pleasure to be here. >> Quite a world. And I think one of the things that obviously we've been talking about all year since COVID is the acceleration of this virtualized environment, with media and everyone working at home, remote. Really puts the pressure on digital transformation, as has been well discussed and documented. You guys have some big news, obviously on the main stage, NVIDIA CEO Jensen Huang, a legend. And of course, you know, big momentum with AI and GPUs and all things, you know, computing. Krish, what are your announcements today? You got some big news. Could you take a minute to explain the big announcements today? >> Yeah, John. So today we want to make two major announcements regarding our partnership with NVIDIA. So let's take the first one and talk through it, and then we can get to the second announcement later. In the first one, as you well know, NVIDIA is the leader in AI and VMware is the leader in virtualization and cloud. This announcement is about us teaming up to deliver a jointly engineered solution to the market to bring AI to every enterprise. So as you well know, VMware has more than 300,000 customers worldwide. 
And we believe that this solution will enable our customers to transform their data centers for AI applications running on top of the virtualized VMware infrastructure that they already have. And we think that this is going to vastly accelerate the adoption of AI and essentially democratize AI in the enterprise. >> Why AI? Why now, Manuvir? Obviously we know the GPUs have set the table for many cool things, from mining Bitcoin to really providing a great user experience. But AI has been a big driver. Why now? Why VMware now? >> Yes. Yeah. And I think it's important to understand this is about AI more than even about GPUs, you know. This is a great moment in time where AI has finally come to life, because the hardware and software have come together to make it possible. And if you just look at industries and different parts of life, how is AI impacting them? So for example, if you're a company on the internet doing business, everything you do revolves around making recommendations to your customers about what they should do next. This is based on AI. Think about the world we live in today, with the importance of healthcare, drug discovery, finding vaccines for something like COVID. That work is dramatically accelerated if you use AI. And what we've been doing at NVIDIA over the years is, we started with the hardware technology with the GPU, the parallel processor, if you will, that could really make these algorithms real. And then we worked very hard on building up the ecosystem. You know, we have 2 million developers today who work with NVIDIA AI. That's thousands of companies that are using AI today. But then if you think about what Krish said, you know, about the number of customers that VMware has, which is in the hundreds of thousands, the opportunity before us really now is, how do we democratize this? How do we take this power of AI that makes every customer and every person better and put it in the hands of every enterprise customer? 
And we need a great vehicle for that, and that vehicle is VMware. >> Guys, before we get to the next question, I just want to get your personal take on this, because again, we've talked many times, both of you've been on theCube on this topic. But now I want to highlight: you mentioned the GPU, that's hardware. This is software. VMware had hardware partners, and still software's driving it. Software's driving everything. Whether it's something in space, an IoT device, or anything at the edge of the network, software is the value. This has become so obvious. Just share your personal take on this for folks who are now seeing this for the first time. >> Yeah. I mean, I'll give you my take first. I'm a software guy by background. I learned a few years ago for the first time that an array is a storage device and not a data structure in programming, and that was a shock to my system. Definitely the world is based on algorithms. Algorithms are implemented in software. Great hardware enables those algorithms. >> Krish, your thoughts. We're living in the future right now. >> Yeah, yeah. I would say that, I mean, the developers are becoming the center. They are actually driving the transformation in this industry, right? It's all about the application development, it's all about software, the infrastructure itself is becoming software-defined. And the reason for that is you want the developers to be able to craft the infrastructure the way they need for the applications to run on top of it. So it's all about software, like I said. >> Software-defined. Yeah, just want to get that quick self-congratulatory high five amongst ourselves virtually. (laughs) Congratulations. >> Exactly. >> Krish, last time we spoke at VMworld, we were obviously in person, but we talked about Tanzu and vSphere. Okay, you had Project Pacific. Does this announcement expand on that offering? >> Absolutely. 
As you know, John, for the past several years, VMware has been on this journey to define the hybrid cloud infrastructure, right? Essentially it's the software stack that we have, which enables our customers to provide a cloud operating model to their developers, irrespective of where they want to land their workloads. Whether they want to land their workloads on-premise, or whether they want it to be on top of AWS, Google, Azure, the VMware stack is already running across all of them, as you well know. And in addition to that, we have around, you know, 4,000 to 5,000 service providers who are also running our platform to deliver cloud services to their customers. So as part of that journey, last year we took the platform and we added one further element to it. Traditionally, our platform has been used by customers for running VMs. Last year, we natively integrated Kubernetes into our platform. This was the big rearchitecture of vSphere, as we talked about. That was delivered to the market. And essentially now customers can use the same platform to run Kubernetes, container, and VM workloads. The exact same platform, and it is operationally the same. So the same skill sets, tools, and processes can be used to run Kubernetes as well as VM applications. And the same platform runs whether you want to run it on-premise or in any of the clouds, as we talked about before. So that vastly simplifies the operational complexity that our customers have to deal with. And this is the next chapter in that journey, by doing the same thing for AI workloads. >> You guys have had great success with these co-engineering joint efforts. VMware and now with NVIDIA is interesting. It's very relevant and very cool. So it's cool and relevant, so check, check. Manuvir, talk about this, because how do you bring that vision to the enterprises? >> Yeah, John, I think, you know, it's important to understand there is some real deep computer science here between the engineers at VMware and NVIDIA. 
Just to lay that out, you can think of this as a three-layer stack, right? The first thing that you need is, clearly, the hardware that is capable of running these algorithms; that's what the GPU enables. Then you need a great software stack for AI, all the right algorithmics that take advantage of that hardware. This is actually where NVIDIA spends most of its effort today. People may sometimes think of NVIDIA as a GPU company, but we are much more a software company now, where we have over the years created a body of work of all of the software that it actually takes to do good AI. But then how do you marry the software stack with the hardware? You need a platform in the middle that supports the applications and consumes the hardware and exposes it properly. And that's where vSphere, you know, as Krish described, with either VMs or containers, comes into the picture. So the computer science here is to wire all these things up together with the right algorithmics so that you get real acceleration. So as examples of early work that the two teams have done together, we have workloads in healthcare, for example, in cancer detection, where the acceleration we get with this new stack is 30X, right? The workload is running 30 times faster than it was running before this integration, just on CPUs. 
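For a sense of how a containerized workload at the top of that stack consumes the GPU underneath, here is a sketch of a Kubernetes pod spec built in Python. The `nvidia.com/gpu` resource name is the standard extended resource exposed by NVIDIA's Kubernetes device plugin; the pod name and image are placeholders, not anything from the announcement.

```python
import json

def gpu_pod(name, image, gpus=1):
    # Minimal pod spec requesting GPUs via the standard extended resource name.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
        },
    }

pod = gpu_pod("cancer-detect", "example.com/healthcare-ai:latest")
print(json.dumps(pod["spec"]["containers"][0]["resources"]))
```

The application asks the platform for GPUs the same way it asks for CPU or memory, and the platform, vSphere with Kubernetes in this case, is what maps that request onto the physical hardware.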
There's the "I'm going to modernize my business" side; certainly COVID is forcing companies, whether they're airlines or whatever, with not a lot going on, to take the opportunity to modernize, to move to essentially modern apps that are getting a tailwind from this accelerated digital transformation. How does AI democratize this? 'Cause you've got people and you've got technology. (laughs) Right? So share your thoughts on how you see this democratizing. >> That's a very good question. I think if you look at how people are running AI applications today, like, you go to an enterprise, you would see that there is a silo of bare metal servers on the side, where the AI stack is run. And you have people with specialized skills and different tools and utilities that manage that environment. And that is what is standing in the way of AI taking off in the enterprise, right? It is not the use cases. There are all these use cases which are mission-critical that all companies want to do, right? Worldwide, that has been the case. It is the complexity of it all that is standing in the way. So what we are doing with this is we are saying, "hey, that whole solution stack that Manuvir talked about is integrated into the VMware virtualized infrastructure," whether it's on-prem or in the cloud. And you can manage that environment with the exact same tools and processes and skills that you traditionally had for running any other application on VMware infrastructure. So you don't need to have anything special to run this. And that's what is going to give us the acceleration that we talked about and essentially drive the democratization of AI. >> That's a great point. I just want to highlight that and call that out, because AI is in every use case. You could almost say theCube could have AI, and we actually do have a little bit of AI in some of our transcription work. But it's not so much just use cases, it's actually not just saying you've got to do it. 
So taking down that blocker, the complexity, certainly is the key. And that's a great point. We're going to call that out after. Alright, let's move on to the second part of the announcement: Krish, Project Monterey. This is a big deal. And it looks like, you know, kind of an elusive architectural thing, but it's directionally really strategic for VMware. Could you take a minute to explain this announcement? Frame this for us. >> Absolutely. I think, John, you remember Pat got on stage last year at VMworld and said, you know, "we are undertaking the biggest rearchitecture of the vSphere platform in the last 10 years." And he was talking about natively embedding Kubernetes in vSphere, right? Remember Tanzu and Project Pacific. This year we are announcing Project Monterey. It's a project that is significant, with several partners in the industry, and NVIDIA was one of the key partners. And what we are doing is a reimagining of the data center for the next generation of applications. And at the center of it, what we are going to do is rearchitect vSphere and ESX so that ESX can not only run on the CPU, but it'll also run on the SmartNIC. And what this gives us is the whole, let's say, data center infrastructure-type services to be offloaded from running on the CPU onto the SmartNIC. So what does this provide the applications? The applications then will perform better. And secondly, it provides an extra layer of security for the next generation of applications. Now, we are not going to stop there. We are going to use this architecture and extend it so that we can finally eliminate one of the big silos that exists in the enterprise, which is the bare metal silo, right? Today we have virtualized environments and bare metal, and what this architecture will do is bring those bare metal environments also under ESX management. So ESX will manage environments which are virtualized and environments which are running a bare metal OS. 
And so that's one big breakthrough and simplification: the elimination of a silo, the elimination of, you know, specialized skills to keep it running. And lastly, but most importantly, where we are going with this. Just on the question you asked us earlier about software-defined and developers being in control: where we want to go with this is to give developers, the application developers, the ability to really define and create their runtime on the fly, dynamically. So think about it. If dynamically they're able to describe how the application should run, the infrastructure essentially kind of attaches compute resources on the fly, whether they are sitting in the same server or somewhere in the network as pools of resources, brings it all together, and composes the runtime environment for them. That's going to be huge. And they won't be constrained anymore by the resources that are tied to the physical server that they are running on. And that's the vision of where we are taking it. It is going to be the next big change in the industry in terms of enterprise computing. >> Sounds like an operating system to me. Yeah. Runtime, assembly, orchestration, all these things coming together, exciting stuff. Looking forward to digging in more after VMworld. Manuvir, how does this connect to NVIDIA and AI? Tie that together for us. >> Yeah, it's an interesting question, because you would think, you know, okay, so NVIDIA is this GPU company or this AI company. But you have to remember that NVIDIA is also a networking company, because our friends at Mellanox joined us not that long ago. And the interesting thing is that there's a yin and yang here, because Krish described the software vision, which is brilliant. And what this does is it imposes a lot on the host CPU of the server. And so what we've been doing in parallel is developing hardware. 
A new kind of "Nick", if you will, we call it a DPU or a Data Processing Unit or a Smart Nick that is capable of hosting all this stuff. So, amusingly when Krish and I started talking, we exchanged slides and we basically had the same diagram for our vision of where things go with that software, the infrastructure software being offloaded, data center infrastructure on a chip, if you will. Right? And so it's a very natural confluence. We are very excited to be part of this, >> Yeah. >> Monterey program with Krish and his team. And we think our DPU, which is called the NVIDIA BlueField-2, is a pretty good device to empower the work that Krish's team is doing. >> Guys it's awesome stuff. And I got to say, you know, I've been covering Vmworld now 11 years with theCube, and I've known VMware since its founding, just the evolution. And just recently before VMworld, you know, you saw the biggest IPO in the history of Wall Street, Snowflake an Enterprise Data Cloud Company. The number one IPO ever. Enterprise tech is so exciting. This is really awesome. And NVIDIA obviously well known, great brand. You own some chip company as well, and get processors and data and software. Guys, customers are going to be very interested in this, so what should customers do to find out more? Obviously you've got Project Monterey, strategic direction, right? Framed perfectly. You got this announcement. If I'm a customer, how do I get involved? How do I learn more? And what's in it for me. >> Yeah, John, I would say, sorry, go ahead, Krish. >> No, I was just going to say sorry Manuvir. I was just going to say like a lot of these discussions are going to be happening, there are going to be panel discussions there are going to be presentations at Vmworld. 
So I would encourage customers to really look at these topics around Project Monterey and also about the AI work we are doing with NVIDIA and attend those sessions and be active and we will have a ways for them to connect with us in terms of our early access programs and whatnot. And then as Manuvir was about to say, I think Manuvir, I will give it to you about GTC. >> Yeah, I think right after that, we have the NVIDIA conference, which is GTC, where we'll also go over this. And I think some of this work is a lot closer to hand than people might imagine. So I would encourage watching all the sessions and learning more about how to get started. >> Yeah, great stuff. And just for the folks @vmworld.com watching, Cloud City's got 60 solution demos, go look for the sessions. You got the EX, the expert sessions, Raghu, Joe Beda amongst other people from VMware are going to be there. And of course, a lot of action on the content. Guys, thanks so much for coming on. Congratulations on the news, big news. NVIDIA on the Bay in Virtual stage here at VMworld. And of course you're in theCube. Thanks for coming. Appreciate it. >> Thank you for having us. Okay. >> Thank you very much. >> This is Cube's coverage of VMworld 2020 virtual. I'm John Furrier, host of theCube virtual, here in Palo Alto, California for VMworld 2020. Thanks for watching. (upbeat music)

Published Date : Sep 18 2020

SUMMARY :

Brought to you by VMware Thanks for joining me on the virtual Cube, is the acceleration of this and VMware as the leader GPUs have set the table the Parallel Processor, if you will, Software, is the value. the first time that an array the future right now. for the applications to run on top of. Yeah, just want to get that quick Okay, you had Project Pacific. And the same platform runs, because how do you bring that the acceleration we get and around the world. that is standing in the way. certainly is the key. the ability to really define Sounds like an Operating System to me. of the server to do. And we think our DPU, And I got to say, you know, Yeah, John, I would say, and also about the AI work And I think some of this And just for the folks Thank you for having us. This is Cube's coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NVIDIAORGANIZATION

0.99+

JohnPERSON

0.99+

KrishPERSON

0.99+

30 timesQUANTITY

0.99+

John FurrierPERSON

0.99+

Krish PrasadPERSON

0.99+

VMwareORGANIZATION

0.99+

Silicon ValleyLOCATION

0.99+

RaghuPERSON

0.99+

Joe BedaPERSON

0.99+

Last yearDATE

0.99+

two teamsQUANTITY

0.99+

last yearDATE

0.99+

MellanoxORGANIZATION

0.99+

Manuvir DasPERSON

0.99+

todayDATE

0.99+

more than 300,000 customersQUANTITY

0.99+

Project PacificORGANIZATION

0.99+

PatPERSON

0.99+

11 yearsQUANTITY

0.99+

30XQUANTITY

0.99+

first oneQUANTITY

0.99+

ESXTITLE

0.99+

VmworldORGANIZATION

0.99+

hundreds of thousandsQUANTITY

0.99+

two typesQUANTITY

0.99+

AWSORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

VMworldORGANIZATION

0.99+

first timeQUANTITY

0.99+

vSphereTITLE

0.99+

INVIDIAORGANIZATION

0.99+

second partQUANTITY

0.99+

TodayDATE

0.98+

VMworld 2020EVENT

0.98+

SnowflakeORGANIZATION

0.98+

first thingQUANTITY

0.98+

oneQUANTITY

0.98+

bothQUANTITY

0.98+

60 solution demosQUANTITY

0.98+

first oneQUANTITY

0.98+

GoogleORGANIZATION

0.97+

This yearDATE

0.97+

firstQUANTITY

0.97+

vmworld.comOTHER

0.97+

Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020


 

>> Narrator: From around the global its theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of HPE, discover the virtual experience for 2020, getting to talk to Hp executives, their partners, the ecosystem, where they are around the globe, this session we're going to be digging in about artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from Nvidia, sitting in the window next to me, we have Paresh Kharya, he's director of product marketing and sitting next to him in the virtual environment is Kevin Deierling, who is this senior vice president of marketing as I mentioned both with Nvidia. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh when you set the stage for us? AI, obviously, one of those mega trends to talk about but just, give us the stages, where Nvidia sits, where the market is, and your customers today, that they think about AI. >> Yeah, so we are basically witnessing a massive changes that are happening across every industry. And it's basically the confluence of three things. One is of course, AI, the second is 5G and IOT, and the third is the ability to process all of the data that we have, that's now possible. For AI we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IOT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act, make decisions in various industries, and finally all of the processing capabilities that we have today, at the data center, and in the cloud, as well as at the edge with the GPUs as well as advanced networking that's available, we can now make sense all of this data to help industrial transformation. 
>> Yeah, Kevin, you know it's interesting when you look at some of these waves of technology and we say, "Okay, there's a lot of new pieces here." You talk about 5G, it's the next generation but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about, what we've done for high performance computing for a long time, obviously, you know, Mellanox, where you came from through NVIDIA's acquisition, strong play in that environment. So, maybe give us a little bit compare, contrast, what's the same, and what's different about this highly distributed, edge compute AI, IOT environment and what's the same with what we were doing with HPC in the past. >> Yeah, so we've--Mellanox has now been a part of Nvidia for a little over a month and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And to do that, what it means is the scale and the type of problems that we're trying to solve are just simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote where he said that the new unit computing, it's really the data center. So it's no longer the box that sits on your desk or even in Iraq, it's the entire data center because that's the scale of the types of problems that we're solving. And so the notion of scale up and scale out, the network becomes really, really critical. And we're doing high-performance networking for a long time. When you move to the edge, instead of having, a single data center with 10,000 computers, you have 10,000 data centers, each of which as a small number of servers that is processing all of that information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing or cloud computing. 
And so we're excited to be part of bringing together the AI and the networking because they are really optimizing at the data center scale across the entire stack. >> All right, so it's interesting. You mentioned, Nvidia CEO, Jensen. I believe if I saw right in there, he actually could, wrote a term which I had not run across, it was the data processing unit or DPU in that, data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPU, when I think about GPUs, I obviously think of Nvidia. TPUs, in the cloud and everything we're doing. So, what is DPUs? Is this just some new AI thing or, is this kind of a new architectural model? >> Yeah. I think what Jensen highlighted is that there's three key elements of this accelerated disaggregated infrastructure that the data center has becoming. And so that's the CPU, which is doing traditional single threaded workloads but for all of the accelerated workloads, you need the GPU. And that does massive parallelism deals with massive amounts of data, but to get that data into the GPU and also into the CPU, you need really an intelligent data processing because the scale and scope of GPUs and CPUs today, these are not single core entities. These are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place. You need to do it securely. You need to do it virtualized. You need to do it with containers and to do all of that, you need a programmable data processing unit. So we have something called our BlueField, which combines our latest, greatest, 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines which are the GPU. And of course the CPU, which is really the workload at the application layer for non-accelerated outs. 
>> Great, so Paresh, Kevin talked about, needing similar types of services, wherever the data is. I was wondering if you could really help expand for us a little bit, the implications of it AI at the edge. >> Sure, yeah, so AI is basically not just one workload. AI is many different types of models and AI also means training as well as inferences, which are very different workloads or AI printing, for example, we are seeing the models growing exponentially, think of any AI model, like a brain of a computer or like a brain, solving a particular use case a for simple models like computer vision, we have models that are smaller, bugs have computer vision but advanced models like natural language processing, they require larger brains or larger models, so on one hand we are seeing the size of the AI models increasing tremendously and in order to train these models, you need to look at computing at the scale of data center, many processors, many different servers working together to train a single model, on the other hand because of these AI models, they are so accurate today from understanding languages to speaking languages, to providing the right recommendations whether it's for products or for content that you may want to consume or advertisements and so on. These models are so effective and efficient that they are being powered by AI today. These applications are being powered by AI and each application requires a small amount of acceleration, so you need the ability to scale out or, and support many different applications. So with our newly launched MPR architecture, just couple of weeks to go that Jensen announced, in the virtual keynote for the first time, we are now able to provide both, scale up and scale out both training data analytics as well as imprints on the single architecture and that's very exciting. >> Yeah, so look at that. 
The other thing that's interesting is you're talking about at the edge and scale out versus scale up, the networking is critical for both of those. And there's a lot of different workloads. And as Paresh was describing, you've got different workloads that require different amounts of GPU or storage or networking. And so part of that vision of this data center as the computer is that, the DPU lets you scale independently, everything. So you can compose, you desegregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need on the fly container, right, to solve the problem that you're solving right now. So these new way of programming is programming the entire data center at once and you'll go grab all of it and it'll run for a few hundred milliseconds even and then it'll come back down and recompose itself onsite. And to do that, you need this very highly efficient networking infrastructure. And the good news is we're here at HPE Discover. We've got a great partner with HPE. You know, they have our M series switches that uses the Mellanox hundred gig and now even 200 and 400 gig ethernet switches, we have all of our adapters and they have great platforms. The Apollo platform for example, is break for HPC and they have other great platforms that we're looking at with the new telco that we're doing or 5G and accelerating that. >> Yeah, and on the edge computing side, there's the edge line set of products which are very interesting, the other sort of aspect that I wanted to touch upon, is the whole software stack that's needed for the edge. So edge is different in the sense that it's not centrally managed, the edge computing devices are distributed remote locations. And so managing the workflow of running and updating software on it is important and needs to be done in a very secure manner. The second thing that's, that's very different again, for the edges, these devices are going to require connectivity. 
As Kevin was pointing out, the importance of networking so we also announced, a couple of weeks ago at our GTC, our EGX product that combines the Mellanox NIC and our GPUs into a single a processor, Mellanox NIC provides a fast connectivity, security, as well as the encryption and decryption capabilities, GPUs provide acceleration to run the advanced DI models, that are required for applications at the edge. >> Okay, and if I understood that, right. So, you've got these throughout the HPE the product line, HPE's got long history of making, flexible configurations, I remember when they first came out with a Blade server it was, different form factors, different connectivity options, they pushed heavily into composable infrastructure. So it sounds like this is just a kind of extending, you know, what HP has been doing for a couple of decades. >> Yeah, I think HP is a great partner there and these new platforms, the EGX, for example that was just announced, a great workload there is a 5G telco. So we'll be working with our friends at HPE to take that to market as well. And, you know, really, there's a lot of different workloads and they've got a great portfolio of products across the spectrum from regular servers. And 1U, 2U, and then all the way up to their big Apollo platform. >> Well I'm glad you brought up telco, I'm curious, are there any specific, applications or workloads that, where the low hanging fruit or the kind of the first targets that you use for AI acceleration? >> Yeah, so you know, the 5G workload is just awesome. We're introduced with the EGX, a new platform called Ariel which is a programming framework and there were lots of partners there that were part of that, including, folks like Ericsson. 
And the idea there is that you have a software defined hardware accelerated radio area network, so a cloud RAM and it really has all of the right attributes of the cloud and what's nice there is now you can change on the fly, the algorithms that you're using for the baseband codex without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side, we introduced the technology that's part of EGX then are connected, It's like the DX adapter, it's called 5T for 5G. And one of the things that happens is you need this time triggered transport or a telco technology. That's the 5T's for 5G. And the reason is because you're doing distributed baseband unit, distributed radio processing and the timing between each of those server nodes needs to be super precise, 20 nanosecond. It's something that simply can't be done in software. And so we did that in hardware. So instead of having an expensive FPGA, I try to synchronize all of these boxes together. We put it into our NIC and now we put that into industry standard servers HP has some fantastic servers. And then with the EGX platform, with that we can build, really scale out software to client cloud RAM. >> Awesome, Paresh, anything else on the application side you'd like to add in just about what Kevin spoke about. >> Oh yeah, so from application perspective, every industry has applications that touch on edge. If you take a look at the retail, for example, there is, you know, all the way from supply chain to inventory management, to keeping the right stock units in the shelves, making sure there is a there is no slippage or shrinkage. So to telecom, to healthcare, we are re-looking at constantly monitoring patients and taking actions for the best outcomes to manufacturing. We are looking to automate production detecting failures much early on in the production cycle and so on every industry has different applications but they all use AI. 
They can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting almost every time we've talked about AI, networking has come up. So, you know, Kevin, I think that probably ease up a little bit why, Nvidia, spent around $7 billion for the acquisition of Mellanox and not only was it the Mellanox acquisition, Cumulus Networks, very known in the network space for software defined really, operating system for networking but give us strategically, does this change the direction of Nvidia, how should we be thinking about Nvidia in the overall network? >> Yeah, I think the way to think about it is going back to that data center as the computer. And if you're thinking about the data center as computer then networking becomes the back plane, if you will of that data center computer and having a high performance network is really critical. And Mellanox has been a leader in that for 20 years now with our InfiniBand and our Ethernet product. But beyond that, you need a programmatic interface because one of the things that's really important in the cloud is that everything is software defined and it's containerized now and there is no better company in the world then Cumulus, really the pioneer and building Cumulus clinics, taking the Linux operating system and running that on multiple homes. So not just hardware from Mellanox but hardware from other people as well. And so that whole notion of an open networking platform more committed to, you need to support that and now you have a programmatic interface that you can drop containers on top of, Cumulus has been the leader in the Linux FRR, it's Free Range Routing, which is the core routing algorithm. And that really is at the heart of other open source network operating systems like Sonic and DENT so we see a lot of synergy here, all the analytics that Cumulus is bringing to bear with NetQ. 
So it's really great that they're going to be part here of the Nvidia team. >> Excellent, well thank you both much. Want to give you the final word, what should they do, HPE customers in their ecosystem know about the Nvidia and HPE partnership? >> Yeah, so I'll start you know, I think HPE has been a longtime partner and a customer of ours. If you have accelerated workloads, you need to connect those together. The HPE server portfolio is an ideal place. We can combine some of the work we're doing with our new amp years and existing GPUs and then also to connect those together with the M series, which is their internet switches that are based on our spectrum switch platforms and then all of the HPC related activities on InfiniBand, they're a great partner there. And so all of that, pulling it together, and now as at the edge, as edge becomes more and more important, security becomes more and more important and you have to go to this zero trust model, if you plug in a camera that's somebody has at the edge, even if it's on a car, you can't trust it. So everything has to become, validated authenticated, all the data needs to be encrypted. And so they're going to be a great partner because they've been a leader and building the most secure platforms in the world. >> Yeah and on the data center, server, portfolio side, we really work very closely with HP on various different lines of products and really fantastic servers from the Apollo line of a scale up servers to synergy and ProLiant line, as well as the Edgeline for the edge and on the super computing side with the pre side of things. So we really work to the fullest spectram of solutions with HP. We also work on the software side, wehere a lot of these servers, are also certified to run a full stack under a program that we call NGC-Ready so customers get phenomenal value right off the bat, they're guaranteed, to have accelerated workloads work well when they choose these servers. 
>> Awesome, well, thank you both for giving us the updates, lots happening, obviously in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman and thank you for watching theCUBE. (bright upbeat music)

Published Date : Jun 24 2020

SUMMARY :

the global its theCUBE, in the virtual environment that they think about AI. and finally all of the processing the next generation And so the notion of TPUs, in the cloud and And of course the CPU, which of it AI at the edge. for the first time, we are And the good news is we're Yeah, and on the edge computing side, the product line, HPE's across the spectrum from regular servers. and it really has all of the else on the application side and high-speed networking at the edge in the network space for And that really is at the heart about the Nvidia and HPE partnership? all the data needs to be encrypted. Yeah and on the data Appreciate all the updates. Thanks Stu, great to I'm Stu Miniman and thank

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kevin DeierlingPERSON

0.99+

KevinPERSON

0.99+

Paresh KharyaPERSON

0.99+

NvidiaORGANIZATION

0.99+

200 gigQUANTITY

0.99+

HPORGANIZATION

0.99+

100 gigQUANTITY

0.99+

hundredsQUANTITY

0.99+

10,000 computersQUANTITY

0.99+

MellanoxORGANIZATION

0.99+

200QUANTITY

0.99+

NVIDIAORGANIZATION

0.99+

PareshPERSON

0.99+

CumulusORGANIZATION

0.99+

Cumulus NetworksORGANIZATION

0.99+

IraqLOCATION

0.99+

20 yearsQUANTITY

0.99+

HPEORGANIZATION

0.99+

EricssonORGANIZATION

0.99+

2020DATE

0.99+

two guestsQUANTITY

0.99+

OneQUANTITY

0.99+

thirdQUANTITY

0.99+

StuPERSON

0.99+

first timeQUANTITY

0.99+

around $7 billionQUANTITY

0.99+

telcoORGANIZATION

0.99+

each applicationQUANTITY

0.99+

Stu MinimanPERSON

0.99+

secondQUANTITY

0.99+

20 nanosecondQUANTITY

0.99+

LinuxTITLE

0.99+

bothQUANTITY

0.99+

NetQORGANIZATION

0.99+

400 gigQUANTITY

0.99+

eachQUANTITY

0.99+

10,000 data centersQUANTITY

0.98+

second thingQUANTITY

0.98+

three key elementsQUANTITY

0.98+

oneQUANTITY

0.98+

thousands of coresQUANTITY

0.98+

three thingsQUANTITY

0.97+

JensenPERSON

0.97+

ApolloORGANIZATION

0.97+

JensenORGANIZATION

0.96+

single computerQUANTITY

0.96+

HPE DiscoverORGANIZATION

0.95+

single modelQUANTITY

0.95+

firstQUANTITY

0.95+

hundred gigQUANTITY

0.94+

InfiniBandORGANIZATION

0.94+

DENTORGANIZATION

0.93+

GTCEVENT

0.93+