Ami Badani, NVIDIA & Mike Capuano, Pluribus Networks


 

(upbeat music) >> Let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of Networking Marketing and Developer Ecosystem at NVIDIA. Great to have you, welcome folks. >> Thank you. >> Thanks. >> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have, Mike? Let's get into it. >> The challenges that we're looking at are for non-hyperscalers. That's enterprises, governments, Tier 2 service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down. It's only getting worse, and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> With that goal in mind, what's the Pluribus vision? How does this tie together? >> So basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are sort of discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks. That needs to be unified. If we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all of those locations with one command, and not have to go to each one. The second is, like I mentioned, distributed security. 
Distributed security without compromise, extended out to the host, is absolutely critical. So micro segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. It's sort of like with security: you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure. That really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction. Abstract the complexity of all these discrete networks. Whatever's down there in the physical layer, I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So the fourth tenet is SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations, NextGen. How do we get there? How do customers get this vision realized? >> That's a great question. And I appreciate the tee up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision. And that is a vision of where Pluribus is headed with our partners like NVIDIA long term. And that is about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds. And whether that's underlay and overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. 
And what's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over 100 Tier 1 mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is extending from the switch to this NVIDIA BlueField-2 DPU. We know there's... >> Hold that up real quick. That's a good prop. That's the BlueField NVIDIA card. >> It's the NVIDIA BlueField-2 DPU, data processing unit. What we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power. So we knew that we didn't want to implement that running on the CPUs, which is what some other companies do, because it consumes revenue generating CPUs from the application. So a DPU is a perfect way to implement this. And we knew that NVIDIA was the leader with this BlueField-2. And so that's the first step into realizing this vision. >> NVIDIA has always been powering some great workloads with GPUs, now you've got DPUs. Networking and NVIDIA is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> We've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition, what Pluribus is trying to build and what NVIDIA has. So we have this concept of a BlueField data processing unit, which, if you think about it, conceptually does really three things: offload, accelerate, and isolate. So offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate, so there's a bunch of acceleration engines. You can run infrastructure workloads much faster than you would otherwise. 
And then isolate. So you have this nice security isolation between the data processing unit and your other CPU environment, and so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple years ago. And with Pluribus, we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric, fits really nicely with the DPU, running that on the DPU and extending it really from your physical switch all the way to your host environment, specifically on the data processing unit. So think about what's happening as you add data processing units to your environment. Every server, we believe, over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And so what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. If you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro segmentation. And so now you can take that, extend it to the data processing unit, and really have isolated micro segmentation workloads, whether it's bare metal, cloud native environments, virtualized environments, whether it's public cloud, private cloud, hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on the DPU. 
>> You know what I love about this conversation is it reminds me of when you have these changing markets. The product gets pulled out of the market and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart for customers? What's in it for the customer? >> So I mentioned three things in terms of the value of what the BlueField brings: there's offloading, accelerating and isolating. And those are sort of the key core tenets of BlueField. So if you think about BlueField and what we've done in terms of the differentiation, we're really a robust platform for innovation. We introduced BlueField-2 last year. We're introducing BlueField-3, which is our next generation of BlueField. It'll have 5X the ARM compute capacity. It will have 400 gig line rate acceleration, 4X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is that if you look at NVIDIA, what we're really known for is our AI, our artificial intelligence and our artificial intelligence software, as well as our GPU. So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. And so with NVIDIA we really have these converged accelerators, where we've combined the GPU, which does all your AI processing, with your data processing with the DPU. So we have this really nice convergence in that area. And I would say the third area is really around our developer environment. 
One of our key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, we've created an SDK, which is an open SDK called DOCA. And it's an open SDK for our partners to really build and develop solutions using BlueField and using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. >> What's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment, super cloud, or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. And it's all kind of, again, this new architecture, Mike, you were talking about. How do customers run this effectively, cost effectively? And how do people migrate? >> I think that is the key question. So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and Tier 2 service providers and Tier 1 service providers and governments are not Amazon. So they need to migrate there, and they need this architecture to be cost effective. And that's super key. I mean, the reality is DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs: IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU. And by leveraging the NVIDIA BlueField DPU, what we really like about it is it's open, and that drives cost efficiencies. 
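Mike's mixed-deployment point — some servers get DPUs now, some later, some never — implies the fabric has to pick the nearest enforcement point per workload. A minimal sketch of that selection logic follows; all names and the fleet data are illustrative assumptions, not the Pluribus API.

```python
# Hypothetical sketch: choose where policy is enforced for each server.
# Servers with a DPU enforce locally; DPU-less devices (IoT gateways,
# legacy servers) fall back to their top-of-rack switch port.

def enforcement_point(server):
    # Enforce on the server's DPU when present, otherwise fall back
    # to the top-of-rack switch the server is attached to.
    if server.get("has_dpu"):
        return f"dpu:{server['name']}"
    return f"tor:{server['rack_switch']}"

fleet = [
    {"name": "srv-1", "has_dpu": True, "rack_switch": "leaf-1"},
    {"name": "srv-2", "has_dpu": False, "rack_switch": "leaf-1"},
    {"name": "iot-gw", "has_dpu": False, "rack_switch": "leaf-2"},
]
points = [enforcement_point(s) for s in fleet]
```

The point of the sketch is that one fabric can span both enforcement styles, which is the "fabric across both the switch and the DPU" idea in the conversation.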
And then with our architectural approach, effectively you get a unified solution across switch and DPU, workload independent. It doesn't matter what hypervisor it is. Integrated visibility, integrated security, and that can create tremendous cost efficiencies and really extract a lot of the expense, from a capital perspective, out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service, or to deploy a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time. >> So let me rewind that, 'cause that's super important. Got the unified cloud architecture. I'm the customer, it's implemented. What's the value again? Take me through the value to me. I have a unified environment. What's the value? >> I mean, there are a few pieces of value. The first piece of value is I'm creating this clean demarc. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the DevOps team, who own the server, and the NetOps team, who own the network, because they're installing software on the CPU, stealing cycles from what should be revenue generating CPUs. So now, by terminating the networking on the DPU, we create this real clean demarc. The DevOps folks are happy because they don't necessarily have the skills to manage the network, and they don't necessarily want to spend the time managing networking. They've got their network counterparts, the NetOps team, who are also happy because they want to control the networking. And now we've got this clean demarc where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. 
This is essential. I mentioned it earlier: pushing out micro segmentation and distributed firewall basically at the application level, where I create these small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. 'Cause the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc. And so that's why this is so essential. And the next benefit obviously is this unified networking operating model. Having an operating model across switch and server, underlay and overlay, workload agnostic, makes the life of the NetOps teams much easier, so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example. >> Awesome. And I think also, from my standpoint, perimeter security is pretty much gone. I guess the firewall still exists out there, but the perimeter is being breached all the time. You have to have this new security model. And I think the other thing that you mentioned, the separation between DevOps, is cool, because infrastructure as code is about making the developers be agile and building security in from day one. So this policy aspect is a huge new control plane. I think you guys have a new architecture that enables the security to be handled more flexibly. That seems to be the killer feature here. >> If you look at the data processing unit, I think one of the great things about this new architecture is it's really the foundation for zero trust. So like you talked about, the perimeter is getting breached. And so now each and every compute node has to be protected. 
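The "issue one command, enforced everywhere" micro-segmentation model described in the conversation can be sketched in a few lines. This is a toy illustration under assumed names (FabricController, allow, flow_permitted are hypothetical, not the Pluribus API): one policy push fans out to every enforcement point, and east-west flows are denied unless a segment rule allows them.

```python
# Illustrative sketch of fabric-wide micro-segmentation.
# A single policy command lands on every node (switches and DPUs),
# and default-deny contains lateral movement by a bad actor.

class FabricController:
    def __init__(self, nodes):
        self.nodes = nodes          # e.g. leaf switches and per-server DPUs
        self.rules = set()          # allowed (src_app, dst_app, dst_port)

    def allow(self, src_app, dst_app, dst_port):
        # One command: the rule is distributed to every enforcement point.
        self.rules.add((src_app, dst_app, dst_port))
        return len(self.nodes)      # enforcement points updated

    def flow_permitted(self, src_app, dst_app, dst_port):
        # Default deny: anything outside the segment rules is blocked.
        return (src_app, dst_app, dst_port) in self.rules

fabric = FabricController(["leaf-1", "leaf-2", "srv-7-dpu", "srv-8-dpu"])
updated = fabric.allow("web", "app", 8080)
fabric.allow("app", "db", 5432)
```

With these two rules, the web tier can reach the app tier and the app tier can reach the database, but a compromised web server cannot talk to the database directly: that is the containment benefit described above.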
And I think that's what you see with the partnership between Pluribus and NVIDIA: the DPU is really the foundation of zero trust, and Pluribus is really building on that vision by allowing micro-segmentation and being able to protect each and every compute node as well as the underlying network. >> This is super exciting. This is an illustration of how the market's evolving, architectures being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Michael, start with you. What does the relationship look like in the go to market with NVIDIA? >> We're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. Obviously we appreciate that NVIDIA's open; that's sort of in our DNA, we're about open networking. They've got other ISVs who are going to run on BlueField-2. We're probably going to run on other DPUs in the future. But right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it. >> Oh man, NVIDIA's got the hot product. >> So BlueField-2, as I mentioned, was GA last year, and we now also have the converged accelerator. So I talked about artificial intelligence, our artificial intelligence software, with the BlueField DPU, all of that put together on a converged accelerator. The nice thing there is if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. So that's available now. Then we have BlueField-3, which will be available late this year. 
And I talked about how much better that next generation of BlueField is in comparison to BlueField-2. So we'll see BlueField-3 shipping later this year. And then there's our software stack, which I talked about, called DOCA. We're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 in about two months from now. And so that's really our open ecosystem framework, allowing you to program the BlueField. So we have all of our acceleration libraries, security libraries, all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. So as we add new generations of BlueField, next year we'll have another version and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of BlueField. So that's the nice thing around DOCA. And then in terms of our go to market model, we're working with every major OEM. Later on this year you'll see major server manufacturers releasing BlueField enabled servers, so more to come. >> Awesome. Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >> And one thing I'll add is we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up to be part of our field trial and providing feedback on the product. >> Awesome, innovation in networking. Thanks so much for sharing the news. Really appreciate it, thanks so much. In a moment we'll be back to look deeper into the product, the integration, security, zero trust use cases. 
You're watching theCUBE, the leader in enterprise tech coverage. (upbeat music)

Published Date: Mar 16, 2022



Pete Lumbis, NVIDIA & Alessandro Barbieri, Pluribus Networks


 

(upbeat music) >> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, director of technical marketing at NVIDIA, joining remotely. Guys, thanks for coming on, appreciate it. >> Yeah, thanks a lot. >> I'm happy to be here. >> So a deep dive; let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it? >> Yeah, first let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping in volume, in multiple mission critical networks, its Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standards-based, open network operating system for the data center. And the novelty of this operating system is that it integrates a distributed control plane to automate the fabric with an SDN overlay. This automation is completely open and interoperable and extensible to other types of clouds. It's not closed. And this is actually what we're now porting to the NVIDIA DPU. >> Awesome, so how does it integrate into NVIDIA hardware, and specifically how is Pluribus integrating its software with the NVIDIA hardware? >> Yeah, we leverage some of the interesting properties of the BlueField DPU hardware, which allow us to integrate our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. 
Even more, we can also independently manage this network node, this switch on a NIC, effectively, completely independently from the host. You don't have to go through the network operating system running on X86 to control this network node. So you truly have the experience, effectively, of a top of rack for a virtual machine or a top of rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now we are connecting a VM virtual interface to a virtual interface on the switch on a NIC. And also as part of this integration, we put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the NVIDIA DOCA API to program the accelerators. And you accomplish two things with that. Number one, you have much better performance than running the same network services on an X86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely, there is zero code from Pluribus running on the X86. And this is why we think this enables a very clean demarcation between compute and network. >> So Pete, I've got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps, right? Now you've got NetSecOps. This separation, why is this clean separation important? >> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all kind of rainbows and unicorns, but it's a little messier than that. 
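The 20-25% figure Alessandro quotes is worth making concrete. The arithmetic below uses the freed-capacity fraction from the conversation, but the fleet size and core count are made-up illustration numbers, not Pluribus or NVIDIA data.

```python
# Back-of-the-envelope math on the CPU reclaim from DPU offload:
# if infrastructure networking consumed ~25% of each host's cores on X86,
# moving it to the DPU returns those cores to revenue workloads,
# or lets the same workloads run on a smaller fleet.

cores_per_server = 64          # illustrative host size
servers = 100                  # illustrative fleet size
infra_overhead = 0.25          # fraction of cores spent on networking (from the talk)

cores_reclaimed_per_server = cores_per_server * infra_overhead
fleet_cores_reclaimed = cores_reclaimed_per_server * servers

# Equivalent view: shrink the fleet instead of adding workloads.
servers_needed_after = servers * (1 - infra_overhead)
```

For this hypothetical fleet, offload returns 1,600 cores, or equivalently lets 75 servers do the work of 100, which is the "shrink the power footprint and compute footprint" framing above.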
I think a lot of the DevOps stuff, that mentality and philosophy, there's a natural fit there. You have applications running on servers. So you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance. And I think that distance isn't going to be closed, and so, again, it comes down to pragmatism. One of my favorite phrases is, look, good fences make good neighbors. And that's what this is. >> Yeah, and it's a great point, 'cause DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly. And I think that's where it comes from: the policy, the security, the zero trust aspect of this, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security's part of that. But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding up to 48 servers per rack. And so that ability to automate, orchestrate and manage at scale becomes absolutely critical. >> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up, you know what? So this is a huge deal. 
Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about what we are really unifying. If we're unifying something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf and spine topologies. This is actually a well understood problem, I would say. There are multiple vendors with similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer where they deploy segmentation and security closer to the workloads. And this is where the complication arises. This high value part of the cloud network is where you have a plethora of options that don't talk to each other, and they're very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment or a Hyper-V or a Xen are completely disjointed. You have multiple orchestration layers. 
And then when you throw in Kubernetes in this type of architecture, you are introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually stack multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed, and we're trying to tackle this problem first with the notion of a unified fabric which is independent from any workload, whether this fabric spans on a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's problem number one. >> It's interesting, I hear you talking and I hear one network among different operating models. Reminds me of the old serverless days. There are still servers, but they call it serverless. Is there going to be a term network-less? Because at the end of the day it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I'm just joking, serverless and network-less, but the idea is it should be one thing. >> Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that sort of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. 
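The "one API, one common network control plane" idea Alessandro describes is essentially an adapter layer over disjointed per-platform networking APIs. Here is a minimal sketch of that pattern; every class and method name is hypothetical, standing in for real ESXi, Kubernetes, or Pluribus interfaces.

```python
# Sketch of a unified provisioning API: per-platform backends hide
# hypervisor and container networking differences behind one call.

class ESXiBackend:
    def attach(self, vif, segment):
        # Stand-in for vSphere-style port group wiring.
        return f"esxi: {vif} -> portgroup {segment}"

class KubernetesBackend:
    def attach(self, vif, segment):
        # Stand-in for CNI-style pod network attachment.
        return f"cni: pod {vif} -> network {segment}"

class UnifiedFabric:
    def __init__(self):
        self.backends = {"esxi": ESXiBackend(), "k8s": KubernetesBackend()}

    def connect(self, platform, vif, segment):
        # Same call for the operator, regardless of what runs underneath.
        return self.backends[platform].attach(vif, segment)

fabric = UnifiedFabric()
a = fabric.connect("esxi", "vm-7-eth0", "blue")
b = fabric.connect("k8s", "pod-frontend", "blue")
```

The operator issues one `connect` call per workload; the backend translation is where the per-hypervisor fragmentation gets absorbed, instead of surfacing in the operational model.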
The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can actually integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical. That is typically the way people today segment and secure the traffic in the cloud. >> Awesome. Pete, all kidding aside about network-less and serverless, kind of a fun play on words there, the network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why a DPU based approach is better than alternatives? >> Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So it's the, "yo dog, I heard you like a server, so I put a server inside your server." And so we provide ARM CPUs, memory and network accelerators inside, and that is completely isolated from the host. The actual X86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only this separation within the data plane, but you have this complete control plane separation, so you have this element that the network team can now control and manage. But we're taking all of the functions we used to do at the top of rack switch and we're distributing them now. And as time has gone on we've struggled to put more and more and more into that network edge. 
And the reality is the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that the VLAN's good enough, or we hope that the VXLAN tunnel's good enough, and we can't actually apply more advanced techniques there, because we can't financially afford that appliance to see all of the traffic. Now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer, real quick? And I think this is an interesting point you mentioned: policy. Everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start orchestrating microservices and all that good stuff going on there, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge, deployment flexibility relative to security policies and application enablement. What does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. So that makes for simplicity: a smaller blast radius for failures, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each one of those layers, so just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there, and you get this extreme flexibility.
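The decoupling of leaf and spine from the workload layer rests on VXLAN-style encapsulation: the overlay rides in UDP over the underlay, so the core never needs to see tenant addressing. The header arithmetic for standard VXLAN over IPv4 (RFC 7348) is easy to check:

```python
# VXLAN (RFC 7348) encapsulation: the original frame is wrapped in a
# new outer header, so leaf and spine forward plain UDP/IP and stay
# ignorant of tenant MAC/IP addressing.
OUTER_ETH = 14   # outer Ethernet header
OUTER_IP4 = 20   # outer IPv4 header (no options)
OUTER_UDP = 8    # UDP header, well-known destination port 4789
VXLAN_HDR = 8    # flags + reserved + 24-bit VNI

overhead = OUTER_ETH + OUTER_IP4 + OUTER_UDP + VXLAN_HDR
segments = 2 ** 24       # 24-bit VNI namespace
vlan_ids = 4094          # usable 802.1Q VLAN IDs, for comparison

print(overhead)  # 50 bytes of encapsulation per packet
print(segments)  # 16777216 virtual networks vs 4094 VLANs
```

The 50-byte overhead is why overlay networks usually raise the underlay MTU; the 24-bit VNI is why the overlay scales to millions of segments where 802.1Q tops out at 4094.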
You can start small, you can scale large. To me the possibilities are endless. >> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is a huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution? >> Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier two and tier three cloud providers. We talked to SP and soft Telco type networks, as well as large enterprise customers. Let me call out a couple of examples here just to give you a flavor. There is a cloud provider in Asia who is actually managing a cloud where they're offering services based on multiple hypervisors. They have native services based on Xen, but they also onboard cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of orchestrating, through their orchestrator, integration with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments. And in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost complications and eats into the server CPU. The promise they saw in this technology, which they actually call game-changing, is to remove all this complexity by having a single network and distributing the micro-segmentation service directly into the fabric. And overall they're hoping to get out of it a tremendous OPEX benefit and overall operational simplification for the cloud infrastructure. That's one important use case.
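The OPEX claim in that use case can be made concrete with a back-of-envelope model. Every number below is an assumption chosen for illustration — the fleet size, the share of cores consumed by virtual security appliances, and the per-core cost are invented, not customer data:

```python
# Back-of-envelope: cores recaptured by moving segmentation out of
# per-host virtual appliances and into the fabric/DPU. All inputs are
# assumptions for illustration only.
servers = 1000
cores_per_server = 48
appliance_fraction = 0.25     # assumed share of cores eaten by appliances
cost_per_core_year = 100.0    # assumed fully loaded $/core/year

recaptured_cores = servers * cores_per_server * appliance_fraction
annual_saving = recaptured_cores * cost_per_core_year
print(int(recaptured_cores))  # 12000 cores back to revenue workloads
print(int(annual_saving))     # 1200000 dollars/year, under these inputs
```

The structure of the calculation, not the invented inputs, is the point: any fixed fraction of general-purpose cores spent on appliance traffic scales linearly with the fleet, which is why the savings pitch targets large multi-hypervisor clouds.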
Another global enterprise customer is running both ESXi and Hyper-V environments, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver of security; it looks like a recurring theme talking to most of these customers. And in the Telco space, we're working with a few Telco customers on the EFT program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex, and it is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the Telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >> That was a great use case, and I see a lot more potential with unified cloud networking. Great stuff. Pete, shout out to you at NVIDIA; we've been following your successes for a long time, and you keep innovating as cloud scales, with Pluribus and unified networking bringing it to the next level. Great stuff, great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem; they're trying to think about multiple clouds, about unification around the network, and about giving more security and more flexibility to their teams. How can people learn more? >> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference. It's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc.
You can also watch the recorded sessions if you end up watching this on YouTube a little after the fact. We're going to dive a little more into the specifics and the details of what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and Pluribus will contact them to either learn more or actually sign up for the early field trial program, which starts at the end of April. >> Okay, well, we'll leave it there. Thank you both for joining; appreciate it. Up next you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Mar 16 2022


Changing the Game for Cloud Networking | Pluribus Networks


 

>> Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. One of the best examples is Amazon's Nitro, AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud, for many reasons. That's one of the tailwinds for tier two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers and don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers want to be more like hyperscalers, right? They want to be more agile, and they do that.
They're distributing networking and security functions and pushing them closer to the applications. >> Now, at the same time, they're unifying their view of the network so it can be less fragmented and managed more efficiently, with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today on this special CUBE presentation John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and NVIDIA to specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike Capuano of Pluribus and Ami Badani of NVIDIA. Then we'll bring on Alessandro Barbieri of Pluribus and Pete Lumbis from NVIDIA to take a deeper dive into the technology. And then we're going to bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this. Over to John. >> Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking, marketing, and developer ecosystem at NVIDIA. Great to have you. Welcome, folks. >> Thank you. >> Thanks. >> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have, Mike? Let's get into it. >> The challenges we're looking at are for non-hyperscalers: that's enterprises, governments, tier two service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies.
And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Um, really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. Um, we're seeing a growth in cyber attacks. Um, it's, it's not slowing down. It's only getting worse and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the pluribus vision. How does this tie together? >>Yeah. So, um, basically what we see is, uh, that this demands a new architecture and that new architecture has four tenants. The first tenant is unified and simplified cloud networks. If you look at cloud networks today, there's, there's sort of like discreet bespoke cloud networks, you know, per hypervisor, per private cloud edge cloud public cloud. Each of the public clouds have different networks that needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command and not have to go to each one. The second is like I mentioned, distributed security, um, distributed security without compromise, extended out to the host is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. >>You know, it's, it's, it's sort of like with security, you really can't see you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure that really needs to be built into this unified network I'm talking about. And the last thing is automation. 
All of this needs to be SDN enabled. So this is related to my comment about abstraction abstract, the complexity of all of these discreet networks, physic whatever's down there in the physical layer. Yeah. I don't want to see it. I want to abstract it. I wanted to find things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenant is SDN automation. >>Mike, we've been talking on the cube a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen, how do we get there? How do customers get this vision realized? >>That's a great question. And I appreciate the tee up. I mean, we're, we're here today for that reason. We're introducing two things today. Um, the first is a unified cloud networking vision, and that is a vision of where pluribus is headed with our partners like Nvidia longterm. Um, and that is about, uh, deploying a common operating model, SDN enabled SDN, automated hardware, accelerated across all clouds. Um, and whether that's underlying overlay switch or server, um, hype, any hypervisor infrastructure containers, any workload doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. Um, the first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. Um, and what's nice about this is we're not starting from scratch. We have a, a, an award-winning adaptive cloud fabric product that is deployed globally. Um, and in particular, uh, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4g and 5g virtualized cores. We know how to build carrier grade, uh, networking infrastructure, what we're doing now, um, to realize this next generation unified cloud fabric is we're extending from the switch to this Nvidia Bluefield to DPU. 
We know there's a, >>Hold that up real quick. That's a good, that's a good prop. That's the blue field and video. >>It's the Nvidia Bluefield two DPU data processing unit. And, um, uh, you know, what we're doing, uh, fundamentally is extending our SDN automated fabric, the unified cloud fabric out to the host, but it does take processing power. So we knew that we didn't want to do, we didn't want to implement that running on the CPU, which is what some other companies do because it consumes revenue generating CPU's from the application. So a DPU is a perfect way to implement this. And we knew that Nvidia was the leader with this blue field too. And so that is the first that's, that's the first step in the getting into realizing this vision. >>I mean, Nvidia has always been powering some great workloads of GPU. Now you've got DPU networking and then video is here. What is the relationship with clothes? How did that come together? Tell us the story. >>Yeah. So, you know, we've been working with pluribus for quite some time. I think the last several months was really when it came to fruition and, uh, what pluribus is trying to build and what Nvidia has. So we have, you know, this concept of a Bluefield data processing unit, which if you think about it, conceptually does really three things, offload, accelerate an isolate. So offload your workloads from your CPU to your data processing unit infrastructure workloads that is, uh, accelerate. So there's a bunch of acceleration engines. So you can run infrastructure workloads much faster than you would otherwise, and then isolation. So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, you know, a couple of years ago, and with pluribus, you know, we've been talking to the pluribus team for quite some months now. 
>>And I think really the combination of what pluribus is trying to build and what they've developed around this unified cloud fabric, uh, is fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch, all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have, um, isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>You know, what I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the customer? >>Yeah. 
So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield. Um, so that, you know, if you sort of think about what, um, what Bluefields, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to, uh, last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration, four X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, uh, chips to our portfolio every, every 18 months to two years. Um, so that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the, you know, faster, more efficient, secure AI systems from the core of your data center, all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. 
So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >>You know, what's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment Supercloud or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. Right. And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers run this effectively? Cost-effectively and how do people migrate? >>Yeah, I, I think that is the key question, right? So we've got this beautiful architecture. You, you know, Amazon nitro is a, is a good example of, of a smart NIC architecture that has been successfully deployed, but enterprises and serve tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost-effective. And, and that's, that's super key. I mean, the reality is deep user moving fast, but they're not going to be, um, deployed everywhere on day one. Some servers will have DPS right away, some servers will have use and a year or two. And then there are devices that may never have DPS, right. IOT gateways, or legacy servers, even mainframes. Um, so that's the beauty of a solution that creates a fabric across both the switch and the DPU, right. >>Um, and by leveraging the Nvidia Bluefield DPU, what we really like about it is it's open. Um, and that drives, uh, cost efficiencies. 
And then, um, uh, you know, with this, with this, our architectural approach effectively, you get a unified solution across switch and DPU workload independent doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can, uh, create tremendous cost efficiencies and, and really extract a lot of the expense from, from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy our security policy and is deployed everywhere, automatically saving the oppor, the network operations team and the security operations team time. >>All right. So let me rewind that because that's super important. Get the unified cloud architecture, I'm the customer guy, but it's implemented, what's the value again, take, take me through the value to me. I have a unified environment. What's the value. >>Yeah. So I mean, the value is effectively, um, that, so there's a few pieces of value. The first piece of value is, um, I'm creating this clean D mark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the dev ops team who owned the server and the NetApps team who own the network because they're installing software on the, on the CPU stealing cycles from what should be revenue generating. Uh CPU's. So now by, by terminating the networking on the DPU, we click create this real clean DMARC. So the dev ops folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetApps team, because they want to control the networking. 
>>And now we've got this clean DMARC where the DevOps folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. Um, the next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor, penetrates a perimeter firewall and can go wherever they want and wreak havoc. Right? And so that's why this, this is so essential. Um, and the next benefit obviously is this unified networking operating model, right? Having, uh, uh, uh, an operating model across switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single villain, for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge. Um, new control points. I think you guys have a new architecture that enables the security to be handled more flexible. >>Right. >>That seems to be the killer feature here, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. 
And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with, uh, allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I gotta ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. Um, I mean, we're, you know, we're super excited about the partnership, obviously we're here together. Um, we think we've got a really good solution for the market, so we're jointly marketing it. Um, uh, you know, obviously we appreciate that Nvidia is open. Um, that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the, in the future, but right now, um, we're, we feel like we're partnered with the number one, uh, provider of DPS in the world and, uh, super excited about, uh, making a splash with it. >>I'm in get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing, uh, well, we now also have the converged accelerator. So I talked about artificial intelligence or artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads. So if you have an artificial intelligence workload and an infrastructure workload, you can warn them separately on the same platform or you can actually use, uh, you can actually run artificial intelligence applications on the Bluefield itself. 
So that's what the converged accelerator really brings to the table. Uh, so that's available now. Then we have Bluefield three, which will be available late this year. And I talked about sort of, you know, uh, how much better that next generation of Bluefield is in comparison to Bluefield two. So we will see Bluefield three shipping later on this year, and then our software stack, which I talked about, which is called Doka we're on our second version are Doka one dot two. >>We're releasing Doka one dot three, uh, in about two months from now. And so that's really our open ecosystem framework. So allow you to program the Bluefields. So we have all of our acceleration libraries, um, security libraries, that's all packed into this STK called Doka. And it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield, you know, next, next year, we'll have, you know, another version and so on and so forth Doka is really that unified unified layer that allows, um, Bluefield to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's sort of the nice thing around, um, around Doka. And then in terms of our go to market model, we're working with every, every major OEM. So, uh, later on this year, you'll see, you know, major server manufacturers, uh, releasing Bluefield enabled servers. So, um, more to come >>Awesome, save money, make it easier, more capabilities, more workload power. This is the future of, of cloud operations. >>Yeah. And, and, and, uh, one thing I'll add is, um, we are, um, we have a number of customers as you'll hear in the next segment, um, that are already signed up and we'll be working with us for our, uh, early field trial starting late April early may. Um, we are accepting registrations. You can go to www.pluribusnetworks.com/e F T a. 
if you're interested in signing up to be part of our field trial and providing feedback on the product. >> Awesome innovation in networking. Thanks so much for sharing the news. Really appreciate it. >> Thanks so much. >> Okay, in a moment we'll be back to look deeper into the product: the integration, security, zero trust, and use cases. You're watching theCUBE, the leader in enterprise tech coverage. >> Cloud networking is complex and fragmented, slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >> Pluribus unified cloud networking provides a unified, simplified, and agile network fabric across all clouds. It brings the simplicity of a public cloud operating model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business velocity and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking: the Pluribus Unified Cloud Fabric. This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks, and across all workloads and virtualization environments. The Unified Cloud Fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately, the Unified Cloud Fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds, as well as public clouds. The Unified Cloud Fabric is a comprehensive network solution.
That includes everything you need for cloud networking built in: SDN automation, distributed security without compromises, pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >> To learn more, visit www.pluribusnetworks.com. >> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. We'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lummus, director of technical marketing at NVIDIA, joining remotely. Guys, thanks for coming on. Appreciate it. >> Yeah. >> So, a deep dive. Let's get into the what and how, Alessandro. We heard earlier about the Pluribus-NVIDIA partnership and the solution you're working on together. What is it? >> Yeah. First let's talk about the what: what are we really integrating with the NVIDIA Bluefield DPU technology? Pluribus has been shipping, in volume, in multiple mission-critical networks, its Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standard open network operating system for the data center. The novelty of this system is that it integrates a distributed control plane for an automated and effective SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it's not a closed system. And this is what we're now porting to the NVIDIA DPU. >> Awesome. So how does it integrate into NVIDIA hardware? Specifically, how is Pluribus integrating its software with the NVIDIA hardware?
>> Yeah, I think we leverage some of the interesting properties of the Bluefield DPU hardware, which allow us to integrate our software, our network operating system, in a manner that is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor or OS layer running on the host. Even more, we can also manage this network node, the switch-on-a-NIC, completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you effectively get the experience of a top-of-rack for virtual machines, or a top-of-rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, you're now connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also, as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of the NVIDIA DOCA API to program the accelerators, and this accomplishes two things. Number one, you get much greater performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20-25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or, if you want to run the same number of compute workloads, you can actually shrink the power and compute footprint of your data center by 20%. So great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely.
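As a rough back-of-the-envelope sketch of that capacity math (the 20-25% figure is the interview's round number, not a benchmark, and the server sizes here are invented for illustration), freeing the cores that networking and security services would otherwise consume translates directly into fewer servers for the same workload:

```python
import math

def servers_needed(workload_cores: int, cores_per_server: int,
                   infra_overhead: float) -> int:
    """Servers required when `infra_overhead` (a fraction) of each server's
    cores is consumed by networking/security services instead of apps."""
    usable = cores_per_server * (1.0 - infra_overhead)
    return math.ceil(workload_cores / usable)

# 10,000 cores of application demand on 64-core servers:
on_host = servers_needed(10_000, 64, 0.25)   # services burn 25% of each host
offloaded = servers_needed(10_000, 64, 0.0)  # services moved to the DPU
print(on_host, offloaded, on_host - offloaded)  # → 209 157 52
```

Read the other way, keeping the same server count turns the freed cores into roughly a quarter more application headroom per host, which is the trade-off described above.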
There is zero code running on the x86, and this is what we think enables a very clean demarcation between compute and network. >> So Pete, I've got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps right now, and you've got NetOps, NetSecOps, this separation. Why is this clean separation important? >> Yeah, I think it's a pragmatic solution, in my opinion. You know, we wish the world were all rainbows and unicorns, but it's a little messier than that. And I think for a lot of the DevOps stuff, that mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and that distance isn't going to be closed. And so, again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is. >> Yeah, that's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly.
And I think that's where, from the policy and security side, the zero trust aspect of this comes in, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. So security is part of it. But the other part is thinking about this at scale, right? We're taking one top-of-rack switch and adding up to 48 servers per rack, and so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. >> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right; if you don't get it right, you're going to be in real trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps and NetOps all coming together with clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one: what are we really unifying? If we're unifying something, that something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers; you build your IP Clos fabric, leaf-spine topologies. This is actually a well-understood problem.
I would say there are multiple vendors with similar technologies, very well standardized and well understood; building an IP fabric these days is almost a commodity. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have actually moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads.

And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs of an ESXi environment, a Hyper-V, or a Xen are completely disjointed. You have multiple orchestration layers. And then when you throw Kubernetes into this type of architecture, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually just stacking up multiple networks on the compute layer, which eventually all run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload: this fabric spans from a switch, which can be connected to a bare-metal workload, all the way inside the DPU, where you have your multi-hypervisor compute environment.

It's one API, one common network control plane, and one common set of segmentation services for the network.
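The "one API, one control plane" idea Alessandro describes can be sketched in miniature. This is purely illustrative (none of these class or function names come from Pluribus's actual product): a segmentation policy is expressed once, and a single control plane fans it out to every kind of enforcement point, switch port, DPU, or hypervisor vswitch, instead of once per environment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentationPolicy:
    name: str
    src_segment: str
    dst_segment: str
    port: int
    action: str  # "allow" or "deny"

class FabricNode:
    """One enforcement point: a switch, a DPU, or a hypervisor vswitch."""
    def __init__(self, node_id: str, kind: str):
        self.node_id = node_id
        self.kind = kind
        self.rules: list[SegmentationPolicy] = []

    def install(self, policy: SegmentationPolicy) -> None:
        # A real fabric would translate the abstract policy into the
        # node's native form (switch ACL, DPU flow rule, vswitch filter).
        self.rules.append(policy)

class UnifiedFabric:
    """Single control plane: one command fans out to every node."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def apply(self, policy: SegmentationPolicy) -> int:
        for node in self.nodes:
            node.install(policy)
        return len(self.nodes)

fabric = UnifiedFabric([
    FabricNode("tor-1", "switch"),
    FabricNode("dpu-7", "dpu"),
    FabricNode("esxi-3", "vswitch"),
])
touched = fabric.apply(SegmentationPolicy("web-to-db", "web", "db", 5432, "allow"))
print(touched)  # → 3
```

The contrast with the "ships in the night" status quo is that without the unifying layer, the same intent would be written three times, in three dialects, against three disjoint orchestrators.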
That's problem number one. >> You know, it's interesting. I hear you talking, I hear "one network," different operating models, and it reminds me of the old serverless days. You know, there's still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem you guys are working on, is that right? I'm just joking, serverless and networkless, but the idea is it should be one thing. >> Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as the ways of building physical networks and cloud fabrics with IP protocols and the internet have been standardized, you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield DPU technology, and we can integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people today segment and secure the traffic in the cloud. >> Awesome. Pete, all kidding aside about networkless and serverless, kind of a fun play on words there, the network is one thing. It's basically distributed computing, right?
So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU-based approach is better than the alternatives? >> Yeah, I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like servers, so I put a server inside your server." We provide Arm CPUs, memory, and network accelerators inside, completely isolated from the host. The server, the actual x86 host, just thinks it has a regular NIC in there, but you actually have this full control plane in it. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but complete control plane separation. So you have this element that the network team can now control and manage, and we're taking all of the functions we used to do at the top-of-rack switch and pushing them down there. And, you know, as time has gone on, we've struggled to put more and more into that network edge. And the reality is that the network edge is the compute layer, not the top-of-rack switch layer, so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances. Even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that the VXLAN tunnel is good enough, rather than applying more advanced techniques, because we can't physically or financially afford that appliance to see all of the traffic.
And now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer, real quick? Because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it being talked about up the stack as well, when you start orchestrating microservices and whatnot, all that good stuff going on there, containers and modern applications. What's the benefit to customers of this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies, and application enablement. What does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and pushing them down. So that brings simplicity: smaller blast radiuses for failure, smaller failure domains; maintenance on the networks and the systems becomes easier; your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each one of those layers. So just as in, say, a VXLAN network, where my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there, and you get this extreme flexibility. You can start small, you can scale large. To me, the possibilities are endless. >> Yes, it's a great security control plane. Really, flexibility is key, and so is being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution?
>> Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to Tier 2 and Tier 3 cloud providers, we talked to SP and telco types of networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider in Asia, who is managing a cloud where they offer services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp workloads into the cloud based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of orchestrating, through their orchestrator, or integrating with, the Xen center, vSphere, and OpenStack to coordinate these multiple environments, and in the process, to provide security, they deploy virtual appliances everywhere, which brings a lot of cost and complication and eats into the server CPU. What they saw in this technology, and they actually call it game-changing, is the ability to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. Overall, they're hoping to get a tremendous OpEx benefit out of it, and an overall operational simplification of the cloud infrastructure. That's one potent use case. Another customer, a large global enterprise, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors.
>> So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few telco customers on the EFT program, where the main goal is to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex, and frankly also slow and inefficient, and then they still have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples. >> That was a great use case, and there's a lot more potential. I see that with unified cloud networking. Great stuff. Pete, shout-out to you guys at NVIDIA; we've been following your success for a long time, and you're continuing to innovate as cloud scales, and Pluribus here with unified networking is kind of bringing it to the next level. Great stuff. Great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security and more flexibility to their teams. How can people learn more? >> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference, the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc, and you can also watch recorded sessions if you end up catching us on YouTube a little bit after the fact.
And we're going to dive a little bit more into the specifics and details of what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and Pluribus will contact them, either to share more information or to actually sign them up for the early field trial program, which starts at the end of April. >> Okay, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching. >> Okay. We've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure. Now let's get the perspective from an independent analyst, and to do so, we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >> Oh, thanks for having me. It's great to be here. >> Yeah. So this idea of a unified cloud networking approach: how serious is it? What's driving it? >> Yeah, there are certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed, right? The IT pendulum tends to swing back and forth, and we're definitely on a swing from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations, and as a result, what you're seeing is a lot of complexity. Organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all of that is driving up complexity.
In fact, when we asked about network complexity in one of our surveys last year, more than half, 54%, came out and said: hey, our network environment is now either more or significantly more complex than it used to be. And as a result, what you're seeing is that it's really impacting agility. Everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity, so it runs a little bit counter to that goal and, really, counter to their overarching digital transformation initiatives. From what we've seen, nine out of 10 organizations today either are beginning, are in process with, or have a mature digital transformation initiative, and their top goals, when you look at them, probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity, so now I need tools that are going to help me drive operational efficiency and drive a better experience. >> I love how you bring in the data; ESG does a great job with that. The question is: is it about just unifying existing networks, or is there a need to rethink, to kind of do over, how networks are built? >> Yeah, that's a really good point, because certainly unifying networks helps, right? Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and because of the impact that's having, it's really about changing, bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices and containers, are driving a lot more east-west traffic.
So in the old days, it used to be easier: north-south traffic coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other, and users communicating with them. So there's a lot more traffic, and a lot of it is taking place within the servers themselves. The other issue you're starting to see, from that security perspective: when we were all consolidated, we had those perimeter-based legacy castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right?

When everything's spread out, that no longer holds. So we're absolutely seeing organizations trying to make a shift. And I think, much like the shift we're seeing with all the remote workers and the SASE framework to enable a secure framework there, it's almost the same thing: we're seeing this distributed services framework come up to support the applications better within the data centers and the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? Never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area: ensuring that these applications, when they're connected to each other, are fully segmented out. And that's again because, if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. By doing that, it really makes it a lot harder for them to see everything that's in there.

You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy.
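Bob's blast-radius point can be made concrete with a toy model (the service names and policy here are invented for illustration, in the default-deny spirit of zero trust): without segmentation, a compromised workload can reach every east-west peer, while an explicit allow-list shrinks what an attacker can see to the flows the application actually needs:

```python
# Default-deny east-west policy: only listed (src, dst) pairs may talk.
ALLOWED = {("web", "app"), ("app", "db")}

def may_connect(src: str, dst: str) -> bool:
    """Zero-trust check: a flow is denied unless explicitly allowed."""
    return (src, dst) in ALLOWED

def blast_radius(compromised: str, services: list[str]) -> set[str]:
    """Services still reachable from a compromised workload under the policy."""
    return {s for s in services
            if s != compromised and may_connect(compromised, s)}

services = ["web", "app", "db", "backup"]
print(sorted(blast_radius("web", services)))  # → ['app']
print(sorted(blast_radius("db", services)))   # → []
# Without segmentation, a compromised "web" could reach all three peers.
```

The asymmetry matters: "app" may open connections to "db", but a compromised "db" cannot pivot back, which is exactly the damage-limiting behavior described above.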
You know, you build a moat to protect the queen in the castle, but the queen has left the castle; it's all distributed now. So how should we think about this Pluribus and NVIDIA solution? There's a spectrum: you've got appliances, you've got pure software solutions, and you've got what Pluribus is doing with NVIDIA. Help us understand that. >> Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got these centralized firewalls deployed within their data centers. The hard part of that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. So with the need for agility and the need for performance, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections and more and more appliances, so it can get very costly, as well as impacting performance. The other way organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay, so that's a great approach, right? It brings it really close to the applications. But there are a couple of things you start running into. One is that the DevOps teams start taking on that networking and security responsibility, which they >> Don't want to do. >> They don't want to do, right. And the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it.
So, when we think about all those types of things, right, certainly one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being utilized, and you have hundreds or thousands of servers, those costs are going to add up. So what NVIDIA and Pluribus have done by working together is take some of those services and deploy them onto a SmartNIC, right, to deploy the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said: we're going to unify, to create that unified fabric across the networking space, extending those networking services all the way down to the server. So the benefits of having that are pretty clear, in that you're offloading that capability from the server. Your CPUs are optimized, you're saving a lot of money, and you're not having to go outside of the server to a different rack somewhere else in the data center. So your performance is going to be optimized as well; you're not going to incur a latency hit for every round trip to the firewall and back. I think all those things are really important. Plus, from an organizational aspect, we talked about the DevOps and NetOps teams: the network operations teams can now work with the security teams to establish the security policies and the networking policies, so that the DevOps teams don't have to worry about that. Essentially, they just create the guardrails and let the DevOps team run, because that's what they want: that agility and speed. >> Yeah, your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload, or networking or security offload.
And, you know, I've said many times everybody needs a nitro like Amazon's got, but you can only buy Amazon's nitro if you go into AWS, right? Everybody needs a nitro. So is that how we should think about this? >> Yeah, that's a great analogy to think about this. And I would take it a step further, because it's almost the opposite end of the spectrum: Pluribus and NVIDIA are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services. So, working with NVIDIA, which is also open, they're able to bring that to bear so that organizations can take advantage not only of these distributed services, but also of that unified networking fabric, that unified cloud fabric, across the environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now with the older application environments and the older server environments, is providing that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments: bare metal, any type of virtualization, containers, et cetera. So a wide gamut of different technologies hosting those applications, all supported by a unified cloud fabric from Pluribus. >> So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >> Yeah. Well, think about what it does, again, from that operational efficiency standpoint. When you're going from a legacy environment to a modern environment, it helps with the migration; it helps you accelerate that migration, because you're not switching between different management systems to accomplish it.
You've got the same unified networking fabric that you've been working with, to enable you to run your legacy as well as transfer over to those modern applications. Okay. >> So your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >> So, yeah, I think obviously with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk on these organizations to be able to get not only security, but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server, across the switches, to multiple different environments, right, in different cloud environments, is certainly going to help organizations drive that operational efficiency. It's going to help them save money, for visibility, for security, and even open networking. So a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >> Bob, thanks so much for coming in and sharing your insights. Appreciate it. >> You're welcome. Thanks. >> Thanks for watching the program today. Remember, all these videos are available on demand at thecube.net. You can check out all the news from today at siliconangle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and for sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.

Published Date : Mar 16 2022



What's new in Cloud Networking


 

(upbeat music) >> Okay, we've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure. Now, let's get the perspective from an independent analyst. And to do so, we welcome in ESG senior analyst, Bob Laliberte. Bob, good to see you. Thanks for coming into our East Coast studios. >> Oh, thanks for having me. It's great to be here. >> So this idea of unified cloud networking approach, how serious is it? What's driving it? >> There's certainly a lot of drivers behind it, but probably the first and foremost is the fact that application environments are becoming a lot more distributed, right? So the IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations. And as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it. They have to ensure connectivity to it. And all that's driving up complexity. In fact, when we asked, in one of our last surveys last year about network complexity, more than half, 54% came out and said, "Hey, our network environment is now either more or significantly more complex than it used to be." And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So a little bit counter to the fact and really counter to their overarching digital transformation initiatives. 
From what we've seen, 9 out of 10 organizations today are either beginning, in process, or have a mature digital transformation process or initiative, but their top goals, when you look at them, and it probably shouldn't be a surprise, the number one goal is driving operational efficiency. So it makes sense. I've distributed my environment to create agility but I've created a lot of complexity. So now, I need these tools that are going to help me drive operational efficiency, drive better experiences. >> Got it. I mean, I love how you bring in the data. ESG does a great job with that. The question is, is it about just unifying existing networks or is there sort of a need to rethink, kind of do over, how networks are built? >> That's a really good point. Because certainly, unifying networks helps, right. Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices, containers, are driving a lot more east-west traffic. So in the old days, it used to be easier. North-south coming out of the server, one application per server, things like that. Right now, you've got hundreds, if not thousands, of microservices communicating with each other, users communicating to 'em. So there's a lot more traffic, and a lot of it's taking place within the servers themselves. The other issue that you're starting to see as well, from that security perspective, when we were all consolidated, we had those perimeter-based, legacy, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right. When everything's spread out, that no longer happens.
So we're absolutely seeing organizations trying to make a shift. And I think much like, if you think about the shift that we're seeing with all the remote workers in the SASE framework to enable a secure framework there, it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is really secure. Microsegmentation's another big area. So ensuring that these applications, when they're connected to each other, they're fully segmented out. And again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done, so that by doing that, it really makes it a lot harder for them to see everything that's in there. >> You mentioned zero trust. It used to be a buzzword and now it's become a mandate. And I love the moat analogy. You build a moat to protect the queen in the castle. The queen's left the castle. It's just distributed. So how should we think about this Pluribus and NVIDIA solution? There's a spectrum. Help us understand that. You got appliances. You got pure software solutions. You got what Pluribus is doing with NVIDIA. Help us understand that. >> Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got centralized firewalls that they're deploying within their data centers.
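The microsegmentation and blast-radius points above reduce to a simple default-deny model. A minimal illustrative sketch in Python (the service names and the policy itself are invented for the example, not taken from the interview):

```python
# Toy default-deny microsegmentation policy. Real policies would key on
# richer tuples (ports, protocols, labels); the service names are invented.
ALLOWED_FLOWS = {
    ("web", "api"),
    ("api", "db"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Default deny: only explicitly permitted (src, dst) pairs may talk."""
    return (src, dst) in ALLOWED_FLOWS

def blast_radius(compromised: str) -> set:
    """Everything a breached workload can still reach under the policy."""
    return {dst for (src, dst) in ALLOWED_FLOWS if src == compromised}
```

Under this policy a compromised web tier can still reach the api tier but never the database, and a breached database can reach nothing, which is the limited blast radius being described.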
The hard part for that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly, as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers, okay? So it's a great approach, right? It brings it really close to the applications. But there are a couple of things you start running into there. One is that you start seeing the DevOps teams taking on that networking and security responsibility. >> Which they don't want to do. >> Which they don't want to do, right. And the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it. So when we think about all those types of things, right, and certainly, the other side effect of that is the impact on performance, but there's also a cost. So if you have to buy more servers, because your CPUs are being utilized, right, and you have hundreds or thousands of servers, right, those costs are going to add up. So what NVIDIA and Pluribus have done by working together is to be able to take some of those services and be able to deploy them onto a SmartNIC, right, be able to deploy the DPU-based SmartNIC into the servers themselves, and then Pluribus has come in and said, "We're going to create that unified fabric across the networking space into those networking services all the way down to the server."
So the benefits of having that are pretty clear in that you're offloading that capability from the server. So your CPUs are optimized. You're saving a lot of money. You're not having to go outside of the server and go to a different rack somewhere else in the data center. So your performance is going to be optimized as well. You're not going to incur any latency hit for every round trip to the firewall and back. So I think all those things are really important, plus the fact that you're going to see, from an organizational aspect, we talked about the DevOps and NetOps teams, the network operations teams now can work with the security teams to establish the security policies and the networking policies so that the DevOps teams don't have to worry about that. So essentially, they just create the guardrails and let the DevOps team run, 'cause that's what they want. They want that agility and speed. >> Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted. The cores are wasted doing storage offload or networking or security offload. And I've said many times, everybody needs a Nitro, like the Amazon Nitro. You can only buy Amazon Nitro if you go into AWS, right. But everybody needs a Nitro. So is that how we should think about this? >> Yeah, that's a great analogy to think about this. And I think I would take it a step further because it's almost the opposite end of the spectrum, because Pluribus and NVIDIA are doing this in a very open way. And so Pluribus has always been a proponent of open networking. And so what they're trying to do is extend that now to these distributed services, leveraging NVIDIA, who's also open as well, being able to bring that to bear so that organizations can not only take advantage of these distributed services, but also that unified networking fabric, that unified cloud fabric, across that environment from the server across the switches.
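The 25 to 30% figure implies a simple capacity calculation. A hedged back-of-the-envelope sketch (the fleet size, core count, and the assumption of complete offload are invented inputs, not numbers from the interview):

```python
import math

def servers_needed(app_cores_required: int, cores_per_server: int,
                   overhead_fraction: float) -> int:
    """Servers required when a fraction of each server's cores is consumed
    by networking/storage/security work instead of applications."""
    usable_cores = cores_per_server * (1.0 - overhead_fraction)
    return math.ceil(app_cores_required / usable_cores)

# Hypothetical fleet: 10,000 application cores on 64-core servers.
baseline = servers_needed(10_000, 64, 0.30)  # 30% of cycles lost on the host
with_dpu = servers_needed(10_000, 64, 0.0)   # those services moved to the DPU
```

With those assumed inputs the difference is 224 versus 157 servers, 67 machines, before counting power or licensing, which is the cost effect being described.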
The other key piece of what Pluribus is doing, because they've been doing this for a while now and they've been doing it with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments, bare metal. You could go any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus.
So getting operational efficiency from a unified cloud networking solution, that it goes from the server across the servers to multiple different environments, right, in different cloud environments, is certainly going to help organizations drive that operational efficiency, it's going to help them save money for visibility, for security, and even open networking. So a great opportunity for organizations, especially large enterprises, cloud providers, who are trying to build that hyperscale-like environment. You mentioned the Nitro card. This is a great way to do it with an open solution. >> Love it. Bob, thanks so much for coming in and sharing your insights. I appreciate it. >> You're welcome, thanks. >> All right, in a moment, I'll be back to give you some closing thoughts on unified cloud networking and the key takeaways from today. You're watching "theCUBE", your leader in enterprise tech coverage. (upbeat music)

Published Date : Mar 16 2022



Alessandro Barbieri and Pete Lumbis


 

>> Mhm. Okay, we're back. I'm John Furrier with theCUBE. We're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and NVIDIA, and we'll examine some of the use cases with Alessandro Barbieri, VP of Product Management at Pluribus Networks, and Pete Lumbis, Director of Technical Marketing at NVIDIA, both joining remotely. Guys, thanks for coming on. Appreciate it. >> Thank you. >> So, a deep dive. Let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working together on. What is it? >> Yeah. First, let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus, uh, has been shipping, uh, in volume, in multiple mission-critical networks, its Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standards-based, open network operating system for the data center. Um, and the novelty about this operating system is that it integrates a distributed control plane for automation, effectively an SDN overlay. This automation is completely open and interoperable, and extensible to other types of clouds; nothing is closed. And this is actually what we're now porting to the NVIDIA DPU. >> Awesome. So how does it integrate into NVIDIA hardware? And specifically, how is Pluribus integrating its software with NVIDIA hardware? >> Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which allow us to integrate, um, our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, um, uh, we can also independently manage this network.
This network node, this switch-on-a-NIC, is effectively managed completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you truly have the experience, effectively, of a top-of-rack for virtual machines or a top-of-rack for Kubernetes pods, where, uh, um, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also as part of this integration, we, uh, put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the DOCA, uh, the NVIDIA DOCA API, to program the accelerators, and this accomplishes two things. Number one, you, uh, have much greater performance, much better performance, than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads, to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So, great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely. There is zero code from Pluribus running on the x86, and this is why we think this enables a very clean demarcation between compute and network. >> So, Pete, I gotta get you in here. We heard that the DPUs enable a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everybody's talking DevSecOps right now; you've got NetOps, NetSecOps, this separation. Why is this clean separation important?
>> Yeah, I think it's, uh, you know, it's a pragmatic solution, in my opinion. Um, you know, we wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think a lot of the DevOps stuff, in that, uh, mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. And so again, it comes down to pragmatism, and I think, you know, one of my favorite phrases is, look, good fences make good neighbors. And that's what this is. >> Yeah, it's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you know, you're talking about, you know, that part of the stack under the covers, under the hood, if you will. This is a super important distinction. And this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly. And I think that's where, from the policy, the security, the zero trust aspect of this, right: if you get it wrong on that network side, all of a sudden you can totally open up those capabilities, and so security is part of that. But the other part is thinking about this at scale, right? So we're taking one top-of-rack switch and adding, you know, up to 48 servers per rack, and so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. >>
>> Alessandro, this is really the why we're talking about here, and this is scale, and again, getting it right. If you don't get it right, you're going to be really kind of stuck, you know. So this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA gets into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think here, with this solution, we're tackling two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. What are we really unifying? If you really unify something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well-understood problem, I would say. Um, there are multiple vendors with similar technologies, very well standardized, very well understood, um, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose.
Um, for example, the networking APIs between an ESXi environment, or Hyper-V, or a Xen environment, are completely disjointed. You have multiple orchestration layers, and then when you throw in also Kubernetes in this type of architecture, uh, you're introducing yet another level of networking. And when Kubernetes runs on top of the VMs, which is a prevalent approach, you actually just stack multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload, uh, whether this fabric spans onto a switch, which can be connected to a bare-metal workload, or extends all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's probably number one. >> You know, it's interesting, I hear you talking. I hear one network, different operating models. It reminds me of the old serverless days. You know, there's still servers, but they called it serverless. Is there going to be a term networkless? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on. Is that right? I mean, I'm just joking, serverless, networkless. But the idea is it should be one thing. >> Yeah, effectively, what we're trying to do is recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building, uh, physical networks and cloud fabrics with IP protocols on the internet.
you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, that is typically the way people today segment and secure the traffic in the cloud. >> All kidding aside about networkless, serverless is kind of a fun play on words there. The network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security, with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why the DPU-based approach is better than the alternatives? >> Yeah, I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like a server, so I put a server inside your server." We provide Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane in it. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation, so you have this element that the network team can now control and manage.
But we're taking all of the functions we used to do at the top-of-rack switch, and we distribute them now. And, you know, as time has gone on, we've struggled to put more and more into that network edge. And the reality is, the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think, outside of today's solutions around virtual firewalls, the other option is centralised appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that a VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically, financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >> So what's in it for the customer, real quick? I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices and whatnot, all that good stuff going on there, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. What does the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. So that brings simplicity, smaller blast radius for failure, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier.
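Pete's affordability question about centralised appliances is at bottom a bandwidth-aggregation argument: a per-host DPU scales inspection capacity linearly with the fleet, while a central appliance is a fixed chokepoint that all east-west traffic must hairpin through. A back-of-the-envelope sketch with assumed figures (neither number is a vendor spec):

```python
def inspection_capacity(hosts: int, dpu_gbps: int, appliance_gbps: int):
    """Compare aggregate inspection bandwidth: a DPU per host versus one
    centralized appliance that all east-west traffic must pass through."""
    distributed = hosts * dpu_gbps   # scales linearly as hosts are added
    centralized = appliance_gbps     # fixed, shared chokepoint
    return distributed, centralized

# Assumed: 1,000 hosts, 100 Gbps of inspection per DPU, one 800 Gbps appliance.
dist, cent = inspection_capacity(hosts=1000, dpu_gbps=100, appliance_gbps=800)
print(dist, cent)
```

Under these assumptions the distributed model offers 100,000 Gbps of aggregate inspection against the appliance's fixed 800 Gbps, which is why "good enough" VLAN or VXLAN segmentation tends to win by default when the only alternative is an appliance sized for all traffic.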
And again, you know, we always want to separate each one of those layers. So, just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together; I can now do this at a different layer, and so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. You know, to me, the possibilities are endless. >> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever is happening in the network. Alessandro, this is a huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution? >> Yeah, I think the response from customers has been the most encouraging and exciting part for us, to continue the work and develop this product, and we have actually learned a lot in the process. We talked to two or three cloud providers; we talked to SPs, sort of telco-type networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavour. There is a service provider, a cloud provider in Asia, who is managing a cloud where they are offering services based on multiple hypervisors. Their native services are based on Xen, but they also ramp in cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu.
And they have the problem of now orchestrating, or integrating with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats up into the server CPU. The promise that they saw in this technology, which they actually call game-changing, is to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. And overall, they're hoping to get out of it a tremendous OpEx benefit and overall operational simplification for the cloud infrastructure. That's one important use case. Another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few telco customers on the EFT programme, where the main goal is actually to harmonise network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is, frankly, also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >> Those are great use cases, and I see a lot more potential with unified cloud networking. Great stuff. Shout out to you guys at NVIDIA; we've been following your success for a long time, and you're continuing to innovate as cloud scales. And Pluribus here with unified networking,
kind of bringing it to the next level. Great stuff. Great to have you guys on, and again, software keeps driving the innovation. Networking is just part of it, and it's the key solution. So I've got to ask both of you, to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem; they're trying to think about multiple clouds, they're trying to think about unification around the network, and about giving more security and more flexibility to their teams. How can people learn more? >> So Alessandro and I have a talk at the upcoming NVIDIA GTC conference, the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc. You can also watch the recorded sessions if you end up watching this on YouTube a little bit after the fact, and we're going to dive a little bit more into the specifics and the details of what we're providing as a solution. >> And Alessandro, how can people learn more? >> Yeah, so people can go to the Pluribus website, pluribusnetworks.com/eft, and they can fill out the form, and we will contact them to learn more and actually to sign up for the early field trial programme, which starts soon. >> Okay, well, we'll leave it there. Thank you both for joining, appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with the Cube. Thanks for watching.

Published Date : Mar 4 2022



Changing the Game for Cloud Networking


 

>> Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure. Now let's get the perspective from an independent analyst, and to do so, we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >> Oh, thanks for having me. It's great to be here. >> So this idea of a unified cloud networking approach, how serious is it? What's driving it? >> Yeah, there's certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed, right? The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations, and as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all that's driving up complexity. In fact, when we asked about network complexity in one of our surveys last year, more than half, 54%, came out and said, hey, our network environment is now either more or significantly more complex than it used to be. And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So it's a little bit counter to that goal, and, you know, really counter to their overarching digital transformation initiatives.
From what we've seen, you know, nine out of 10 organizations today are either beginning, in process, or have a mature digital transformation initiative, but their top goals, when you look at them, probably shouldn't be a surprise. The number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity, so now I need these tools that are going to help me drive operational efficiency and drive better experiences. >> I love how you bring in the data; ESG does a great job with that. A question is, is it about just unifying existing networks, or is there sort of a need to rethink, kind of do a do-over on how networks are built? >> Yeah, that's a really good point, because certainly unifying networks helps, right? Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices, containers, are driving a lot more east-west traffic. In the old days it used to be easier: it was north-south, coming out of the server, one application per server, things like that. Right now you've got hundreds, if not thousands, of microservices communicating with each other, and users communicating to them. So there's a lot more traffic, and a lot of it's taking place within the servers themselves. The other issue you're starting to see as well is from the security perspective: when we were all consolidated, we had those perimeter-based legacy, you know, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right?
When everything's spread out, that no longer happens. So we're absolutely seeing organizations trying to make a shift. And I think, much like the shift that we're seeing with all the remote workers and the SASE framework to enable a secure framework there, it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the, you know, the zero trust stuff you hear, right? Never trust, always verify. Making sure that everything is really secure. Micro-segmentation is another big area, so ensuring that these applications, when they're connected to each other, are fully segmented out. And that's again because, if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done, so that by doing that, it really makes it a lot harder for them to see everything that's in there. >> I mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy. You know, you build a moat to protect the queen and the castle; the queen's left the castle, it's just distributed. So how should we think about this Pluribus and Nvidia solution? There's a spectrum; help us understand that. You've got appliances, you've got, you know, pure software solutions, you've got what Pluribus is doing with Nvidia. Help us understand that. >> Yeah, absolutely. I think as organizations recognize the need to distribute the services closer to the applications, they're trying different models. So from a legacy approach, you know, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers.
The hard part with that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly, as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay, so that's a great approach, right? It brings it really close to the applications. But there's a couple of things you start running into there. One is that you start seeing the DevOps teams taking on that networking and security responsibility, which they don't want to do, right? And the operations team loses a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it. So, you know, when we think about all those types of things, one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being utilized, right, and you have hundreds or thousands of servers, those costs are gonna add up. So what Nvidia and Pluribus have done by working together is to be able to take some of those services and deploy them onto a smart NIC, right? To be able to deploy the DPU-based smart NIC into the servers themselves.
And then Pluribus has come in and said, we're going to unify and create that unified fabric across the networking space, bringing those networking services all the way down to the server. So the benefits of having that are pretty clear, in that you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside of the server to a different rack somewhere else in the data center, so your performance is going to be optimized as well. You're not going to incur any latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus the fact that, from an organizational aspect (we talked about the dev ops and net ops teams), the network operations teams can now work with the security teams to establish the security policies and the networking policies, so that the dev ops teams don't have to worry about that. So essentially they just create the guardrails and let the dev ops team run, 'cause that's what they want: that agility and speed. >> Yeah. The point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload, or networking or security offload. And, you know, I've said many times, everybody needs a nitro like Amazon's got, but you can only buy Amazon's Nitro if you go into AWS, right? Everybody needs a nitro. So is that how we should think? >> Yeah, that's a great analogy to think about this. And I think I would take it a step further, because it's almost the opposite end of the spectrum, because Pluribus and Nvidia are doing this in a very open way. And so Pluribus has always been a proponent of open networking, and so what they're trying to do is extend that now to these distributed services.
So leveraging Nvidia, which is also open as well, they're able to bring that to bear, so that organizations can not only take advantage of these distributed services, but also of that unified networking fabric, that unified cloud fabric, across that environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments, you know, bare metal. You could go any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >> So what is it, I mean, for the customer? I don't have to rip and replace my whole infrastructure, right? >> Yeah. Well, think what it does, again, from that operational efficiency standpoint: when you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration, because you're not switching between different management systems to accomplish that. You've got the same unified networking fabric that you've been working with to enable you to run your legacy as well as transfer over to those modern applications. >> Got it. So your people are comfortable with the skillsets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >> So yeah, I think, obviously, with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk for these organizations to be able to get not only security but also visibility into those environments. And so organizations have to find solutions.
As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server across to multiple different environments, right, in different cloud environments, is certainly going to help organizations drive that operational efficiency. It's going to help them save money, with visibility, with security, and even open networking. So a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >> Love it, Bob. Thanks so much for coming in and sharing your insights. Appreciate it. >> You're welcome. Thanks. >> All right. In a moment, I'll be back to give you some closing thoughts on unified cloud networking and the key takeaways from today. You're watching the Cube, your leader in enterprise tech coverage.
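A footnote on the CPU-cycle argument in this segment: the conversation cites an estimate that 25 to 30% of data-center CPU cycles are spent on storage, networking, and security offload. A rough sketch of what reclaiming those cycles is worth across a fleet (the fleet size, core count, and server price here are assumptions for illustration, not figures from the interview):

```python
def reclaimed_capacity(servers: int, cores_per_server: int,
                       offload_fraction: float, cost_per_server: float):
    """Estimate what offloading infrastructure tasks to a DPU frees up.

    offload_fraction is the share of CPU cycles spent on networking,
    security, and storage offload (the conversation cites 25-30%).
    """
    freed_cores = servers * cores_per_server * offload_fraction
    # Capacity equivalent: how many servers' worth of compute comes back.
    server_equivalents = freed_cores / cores_per_server
    capex_equivalent = server_equivalents * cost_per_server
    return freed_cores, server_equivalents, capex_equivalent

# Assumed: 1,000 servers, 64 cores each, 25% offload burden, $15k per server.
cores, servers_back, dollars = reclaimed_capacity(1000, 64, 0.25, 15_000)
print(cores, servers_back, dollars)
```

Under these assumptions the offload burden is the equivalent of 250 whole servers, which is the "buy more servers because your CPUs are being utilized" cost Bob describes, expressed as capital you would not have to spend.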

Published Date : Mar 3 2022



Tom Burns, Dell EMC | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas, it's the Cube, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back to SiliconANGLE Media's coverage of Dell Technologies World 2018. I'm Stu Miniman here with my cohost Keith Townsend, happy to welcome back to the program Tom Burns, who's the SVP of Networking and Solutions at Dell EMC. Tom, great to see ya. >> Great to see you guys as well. Good to see you again. >> All right, so I feel like one of those CNBC guys. It's like, Tom, I remember back when Force10 was acquired by Dell, and all the various pieces that have gone on, and converged infrastructure, but of course with the merger, you've gotten some new pieces to your toy chest. >> Tom: That's correct. >> So maybe give us the update first as to what's under your purview. >> Right, right. So I continue to support and manage the entire global networking business on behalf of Dell EMC, and then recently I picked up what we call our converged infrastructure business, the VxBlock and Vscale business. And I continue also to manage what we call Enterprise Infrastructure, which is basically any time our customers want to extend the life of their infrastructure around memory, storage, optics, and so forth. We support them with Dell EMC certified parts, and then we add to that some third-party componentry around rack power and cooling, software, Cumulus, Big Switch, things like that. Riverbed, Silver Peak, others. And so with that particular portfolio we also cover what we call the Dell EMC Ready Solutions, both for the service provider, but then also for traditional enterprises as well. >> Yeah, well luckily there's no change in any of those environments. >> Tom: No, no. >> Networking's been static for decades. I mean, they threw in a product line that, last I checked, was somewhere in the three-to-four-billion-dollar range, with the VxBlock under what you're talking about there.
>> Yeah it's a, so, yeah-- >> Maybe you could talk, what does this mean? 'Cause you're the networking guy. >> Right. >> Keith and I are networking guys by background; obviously networking's a piece of this, but give us a little bit of how the sausage is made inside to-- >> Tom: Sure. >> Get to this stuff. >> Well, I think when you talk about all these solutions, Cloud, Hybrid Cloud, Public Cloud, when you think about software-defined X, the network is still pretty darn important, right? I often say that if the network's not working, it's going to be a pretty cloudy day. It's not going to connect. And so the fabric continues to remain one of the most critical parts of the solution. So the thought around the VxBlock, and moving that in towards the networking team, is the importance of the fabric and the capability to scale out and scale up with our customers' workloads and applications. So that's primarily the reason. And then we can also look at how we can work very closely with our storage division, 'cause that's the key IP component coming from Dell EMC on the block side, and see how we can continue to help our customers solve their problems when it comes to this not-your-do-it-yourself but do-it-for-me environment.
And then there's a set of customers that want to do it themselves, and that's where we see this opportunity around disaggregation. We see it primarily in hyperscale and Cloud, but we're seeing it more and more in large enterprise, medium enterprise, and particular verticals, where customers are in essence looking for some level of agility, or the capability to interchange their solutions by a particular vendor, or solutions that are coming from the same vendor but might be a different IP, as an example. And I'm really proud of the fact that Dell EMC really kicked off this disaggregation of hardware and software in networking some four and a half years ago. Now you see some of the, let's say, larger industry players starting to follow suit, and they're starting to disaggregate their software as well. >> Yeah, I would have said the commonality between those two seemingly opposed trends is scale. >> Right. >> It's how do customers really scale these environments? >> Exactly, exactly. It depends a lot on the customer environment and what kind of skill sets they have: are they willing to go through some of that do-it-yourself type of process? Obviously Dell EMC Services is there to help them in those particular cases. But we kind of have this buying conundrum of build versus buy. I think my old friend Chad Sakac used to say there's different types of customers: those that want a VxRail or to build it themselves, or they want a VxBlock. We see the same thing happening in networking. There's those customers that want disaggregated hardware and software, and in some cases even disaggregated software, putting just the protocols and features on the switch that they actually use in the data center, rather than buying a full proprietary stack. And we continue to build the full stack for a select number of customers as well, because that's important to that particular sector. >> So again, Tom, two very different ends of the spectrum.
I was at ONS a couple of months ago and talked to the team. Dell is a huge sponsor of the Open Source community, and I don't think many people know that. Can you talk about the Open Source relationship, or the relationship that Dell Networking has with the Open Source community? >> Absolutely. We first made our venture into Open Source actually with Microsoft and their SONiC work. They're creating their own network operating software, and we made a joint contribution around the Switch Abstraction Interface, or SAI. So that was put into the Open Compute Project probably around three and a half, maybe four years ago, and that's right after we announced this disaggregation. We then built basically an entire layer of what we call our OS10 base, or what's known in the Linux Foundation as OPX. And we contributed that to the Linux Foundation, where basically that gives the customer the capability, through the software, to take care of all the hardware: it creates this switch abstraction interface to gather the intelligence from the ASIC and the silicon and bring it up to a control plane, which allows APIs to be connected for all your north-bound applications, or whatever general or disaggregated analysis you want to do. So we've been very active in Linux. We've been very active in OCP as well. We're seeing more and more embracing of this opportunity. You've probably seen recently AT&T announced a rather large endeavor to replace tens of thousands of routers with basically white box switches and Open Source software. We really think that this trend is moving, and I'm pretty proud that Dell EMC was a part of getting that all started. >> So that was an awful lot of provider talk. You covered both the provider space and the enterprise space. Talk to us about where the two kind of meet. You know the provider space: they're creating software, they're embracing OpenStack, they're creating plug-ins for disaggregated networking.
And then there's the enterprise. There's opportunity there. Where do you see the enterprise leveraging disaggregation versus the service provider? >> Well, I think it's this move towards software-defined. You heard it in Michael's keynote today, and you'll hear more tomorrow from Jeff Clarke: the whole world is moving to software-defined. It's no longer if, it's when. And I think the opportunity for enterprises that are in that transformation stage, moving from traditional data centers to software-defined, is that they can look at disaggregation as a way to get that agility and capability, in a manner in which they can continue to manage the old world but move forward into the new world of disaggregated, software-defined networking with the same infrastructure. You know, it's not well known that at Dell EMC we've made our switching capable of running five different operating softwares, depending upon workloads, use cases, and the customer environment. So, a traditional enterprise that wants traditional protocols and traditional features, we give them that capability through our own OS. We can also do that with software coming from some of our OS partners, giving them just the protocols and features that they need for the data center or even out at the edge. And it gives them that flexibility to change. So I think it really comes down to when they're going to move from traditional networking to the next generation of networking. And I'm very happy, I think Dell Technologies is leading the way. >> So I'm wondering if you could expand a little bit on that. When I think about Dell and this show, I mean, it is a huge ecosystem. We're sitting right near the Solutions Expo, which will be opening in a little bit, but on the networking side you've got everything from all the SD-WAN pieces to all the network operating systems that can sit on top.
Maybe give us the update on the overview, the ecosystem, where Dell wins. >> Yeah, yeah. I mean, if you think about 30-something years ago when Michael started the company and Dell started, what was it about? It was really about transforming personal computing, right? It was about taking something that was kind of a traditional proprietary architecture and commoditizing it, making sure it's scalable and supportable. Think of the changes that have occurred between the mainframe and x86. This is what we think is happening in networking. And at Dell Technologies in the networking area, whether it's Dell EMC or VMware, we're really geared towards this SDX type of market: virtualization, Layer 2 and Layer 3 disaggregated switching in the data center, and now SD-WAN with the acquisition of VeloCloud by VMware. We're really helping customers transform the way networking is being managed, operated, and supported, to give them much more flexibility and agility in a software-defined market. That being said, we continue to support a multitude of other partners. We have Cumulus, Big Switch, IP Infusion, and Pluribus as network operating software alternatives. We have our own, and then we have them as partners. On the SD-WAN side, while we lead with VeloCloud, we have Silver Peak and we also have Versa Technology, which is getting a lot of uptick in the area, both in the service provider and in the enterprise space. It's a huge area of opportunity for enterprises to really lower their cost of connectivity in their branch offices. So, again, we at Dell, we want to have an opinion. We have some leading technologies that we own, but we also partner with some very good, best-of-breed solutions. But being that we're open, and we're disaggregated, and we have an incredible scaling and services organization, we have this capability to bring it together for our customers and support them as they go through their IT transformation.
>> So, Dell EMC is learning a lot of lessons as you guys start to embrace software-defined. A couple of Dell EMC Worlds ago, big announcement Chad talked about: ScaleIO, abstracting it, and basically giving away ScaleIO as a basic solution for free. Then you guys pulled back, and you said, you know what, that's not quite what customers want. They want a packaged solution. So we're talking, on one end, total disaggregation, and on another end, in a different area of IT, customers seem to want packaged solutions. >> Tom: Yeah. >> Can you talk to the importance of software-defined and packaged solutions? >> Right, it's kind of this theory of appliances, right? Or how is that software going to be packaged? And we give that flexibility either way. If you think of VxRail, or even our vSAN ready node, it gives the customer the capability to know that we put that software and hardware together, we tested it, we certified it, and most importantly we can support it with kind of one throat to choke, one single call. And so I think the question for customers is again, am I building it myself or do I want to buy a stack? If I'm somewhere in the middle, maybe I'm doing a hybrid, or perhaps a Rail type of solution where it's just compute and storage for the most part, and maybe I'm looking for something different from a networking or connectivity standpoint. But Dell EMC, having the entire portfolio, can help them at any point of the journey or at any part of the solution. So I think that you're absolutely right: the customer buying is varied. You've got those that want everything from a single point, and you've got others that are saying I want decision points. I think a lot of the opportunity around the cost savings, mostly from an Opex standpoint, is for those that are moving towards disaggregated. It doesn't lock 'em into a single solution. It doesn't get 'em into that long life cycle of when you're going to do changes and upgrades and so forth.
This gives them a lot more flexibility and capability. >> Tom, sometimes we have the tendency to get down in the weeds on these products, especially in the networking space. One of my complaints was that the whole SDN wave didn't seem to connect necessarily to some of the big businesses' challenges. We heard in the keynote this morning a lot of talk about digital transformation. Bring us up to speed as to how networking plays into that overall story, what you're hearing from customers, and if you have any examples we'd love to hear them. >> Yeah, no, so, I think networking plays a critical part of the IT transformation. If you think of the first move in virtualization around compute, and then software-defined storage, the networking component was kind of the laggard. It was kind of holding things back. And in fact today, I think some analysts say that even when certain software-defined storage implementations occur, interruptions or issues happen in the network, because the network hasn't been built and architected for that type of environment. So the companies end up going back and re-looking at how that's done. And companies overall, I think, are frustrated with this. They're frustrated with the fact that the network is holding them back from enabling new services, new capabilities, new workloads, and moving towards a software-defined environment. And so I think this area again, of disaggregation, of software-defined, of offering choice around software, is doing well, and it's really starting to see an uptick. And the customer experience is as follows. One is, open networking based upon standard commodity hardware is simply less expensive than proprietary hardware, so they're going to have a little bit of savings from the CapEx standpoint. But it goes further, because they've moved towards this disaggregated model where perhaps they're using one of our third-party software partners that happens to be based in Linux, or even our own OS10, which is now based in Linux.
Look at that: the tools around configuration and automation are the same as for compute, and the same as for storage. And so therefore I'm saving on configuration and automation and so forth. So we have examples such as Verizon, who literally not only saw about 30% cost savings on their CapEx, they're saving anywhere between 40 and 50% on their Opex. Why? They can roll out applications much faster. They can make changes to their network much faster. I mean, that's the benefit of virtualization and NSX as well, right? Instead of these decisions of sending a network engineer to a closet to do CLI, down in the dirt as you would say, and reconfigure the switch, a lot of that now has been abstracted to a software layer, giving the company much more capability to make changes across the fabric, or to segregate it using NSX micro-segmentation to make the changes for those users or that particular environment that needs them. So, just an incredible amount of flexibility. I think with SDN, let's say six, seven years ago, everyone thought it was going to be about CapEx. You know, cheaper hardware, cheaper ASICs, et cetera. It's all about Opex. It's around flexibility, agility, common tool sets, better configuration, faster automation. >> So we all have this nirvana idea that we can take our traditional stacks, whether it's pre-packaged, pre-engineered CI configurations, HCI, SDN, or disaggregated networking, and add to that a software layer, this magical automation. Can you unpack that for us a little bit? What are you seeing practically, whether it's from the service provider perspective or the enterprise? What are those crucial relationships that Dell EMC is forming with the software industry to bring forth that automation? >> Well, obviously we have a very strong relationship with VMware. >> Keith: Right.
>> And so you have vRealize and vROps and so forth, and in fact in the new VxBlock 1000 you're going to see a lot of our gearing, a lot of our development, towards the vRealize suite, and that helps those customers that are in a VMware environment. We also have a very strong relationship with Red Hat and OpenStack, where we've seen very successful implementations in the service provider space, for those that want to go a little bit more disaggregated, a little bit more open, even from the storage side with SAP and so forth. But then obviously we're doing a lot of work with Ansible, Chef, and Puppet, for those that are looking for more of a common open source set of tools across server, compute, networking, storage, and so forth. So I think the real benefit is looking at it from that 25,000-foot view of how you want to automate. Do you want to go towards containers, do you want to go traditional? What are the tool sets that you've been using in your compute environment, and can those be brought down to the entire stack? >> All right, well, Tom Burns, really appreciate catching up with you. I know Keith will be spending a little time at Interop this week too. I know I'm excited that we have a lot more networking here at this end of the strip also this week. >> Appreciate it. Listen to Pat's talk this afternoon. I think we're going to be hearing even more about Dell Technologies networking. >> All right. Tom Burns, SVP of Networking and Solutions at Dell EMC. I'm Stu Miniman and this is Keith Townsend. Thanks for watching The Cube. (upbeat music)
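The Switch Abstraction Interface discussed in the interview is worth a quick illustration: its value is that the network OS programs one common interface regardless of the ASIC underneath, so software and silicon can be sourced independently. The Python sketch below is purely illustrative and assumes invented class and method names; the real SAI is a C API maintained under the Open Compute Project.

```python
from abc import ABC, abstractmethod

class SwitchASIC(ABC):
    """Hypothetical hardware-abstraction layer, loosely inspired by SAI.
    The actual SAI is a vendor-implemented C API; names here are made up."""

    @abstractmethod
    def create_vlan(self, vlan_id: int) -> str:
        """Program a VLAN into the forwarding silicon; return a handle."""

class VendorAChip(SwitchASIC):
    def create_vlan(self, vlan_id: int) -> str:
        # Vendor-specific register programming would happen here.
        return f"vendorA-vlan-{vlan_id}"

class VendorBChip(SwitchASIC):
    def create_vlan(self, vlan_id: int) -> str:
        # A different ASIC, same abstract interface.
        return f"vendorB-vlan-{vlan_id}"

def provision_vlan(asic: SwitchASIC, vlan_id: int) -> str:
    # The NOS control plane calls the same interface regardless of silicon,
    # which is what lets hardware and software be disaggregated.
    return asic.create_vlan(vlan_id)
```

The same control-plane code works unchanged when the switch is built on either chip, which is the decoupling the SAI contribution was meant to enable.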
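Tom's Opex point, that a Linux-based NOS lets the network be driven by the same declarative, idempotent automation used for compute and storage, can be sketched roughly as follows. This is an illustrative Python sketch under assumed data shapes, not any vendor's API; tools like Ansible, Chef, and Puppet implement the same plan-and-apply pattern at much larger scale.

```python
def plan_changes(desired: dict, running: dict) -> dict:
    """Return only the settings that differ from the running config,
    so the push to each device is minimal and idempotent."""
    return {k: v for k, v in desired.items() if running.get(k) != v}

def apply_to_fabric(switches: dict, desired: dict) -> dict:
    # One desired state, many devices: the same declarative pattern used
    # for servers, instead of a CLI session in a closet per switch.
    return {name: plan_changes(desired, running)
            for name, running in switches.items()}
```

Applying one desired state across a fabric then yields a per-switch change plan, e.g. a switch already at the right MTU only receives the VLAN change, which is where the "faster changes, lower Opex" claim comes from.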

Published Date : Apr 30 2018
