Changing the Game for Cloud Networking | Pluribus Networks
>>Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it, add their unique value, and bring solutions to market. And that's precisely what's happening throughout the technology industry because of cloud. One of the best examples is Amazon's Nitro, AWS's custom-built hypervisor that delivers on the promise of using resources more efficiently and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. >>They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud, for many reasons. That's one of the tailwinds for tier-two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers and don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers want to be more like hyperscalers, right? They want to be more agile, and they do that by distributing networking and security functions and pushing them closer to the applications. >>Now, at the same time, they're unifying their view of the network so it can be less fragmented, managed more efficiently, with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today in this special Cube presentation, John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and Nvidia that specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike Capuano of Pluribus and Ami Badani of Nvidia. Then we'll bring on Alessandro Barbieri of Pluribus and Pete Lummus from Nvidia to take a deeper dive into the technology. And then we're going to bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this. Over to John. >>Okay, let's kick things off. We're here with Mike Capuano, CMO of Pluribus Networks, and Ami Badani, VP of networking marketing and developer ecosystem at Nvidia. Great to have you. Welcome, folks. >>Thank you. Thanks. >>So let's get into the problem situation with cloud unified networking. What problems are out there?
What challenges do cloud operators have, Mike? Let's get into it. >>Yeah, the challenges we're looking at are for non-hyperscalers: enterprises, governments, tier-two service providers, cloud service providers. The first mandate for them is to become as agile as a hyperscaler, so they need to be able to deploy services and security policies quickly. And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Ultimately, they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyberattacks. It's not slowing down, it's only getting worse, and solving this security problem across clouds is absolutely critical. The way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the Pluribus vision? How does this tie together? >>Yeah. Basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has a different network, and that needs to be unified. If we want these folks to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command, and not have to go to each one. The second, like I mentioned, is distributed security, distributed security without compromise, extended out to the host. That's absolutely critical: micro-segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. >>With security, you really can't protect what you can't see, so you need visibility everywhere. The problem is that visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, and tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN-enabled. This is related to my comment about abstraction: abstract the complexity of all of these discrete networks, whatever's down there in the physical layer. I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So the fourth tenet is SDN automation. >>Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at next-gen cloud operations. How do we get there? How do customers get this vision realized? >>That's a great question, and I appreciate the tee-up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like Nvidia long term. It's about deploying a common operating model: SDN-enabled, SDN-automated, hardware-accelerated, across all clouds.
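To make Mike's "one command, every location" tenet concrete, here is a minimal, purely illustrative sketch of a controller fanning one security policy out to heterogeneous enforcement points. The class names, fields, and locations below are invented for this example; they are not Pluribus's or Nvidia's actual APIs.

```python
# Illustrative only: a toy "single command, many locations" fan-out.
# None of these class or method names come from Pluribus or NVIDIA products;
# they are invented to show one policy pushed to every enforcement point.
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentationPolicy:
    name: str
    app: str              # application this micro-segment belongs to
    allow_ports: tuple    # ports permitted into the segment
    default_action: str   # what happens to everything else

class EnforcementPoint:
    """A place where policy can be enforced: a switch, a DPU, a cloud VPC."""
    def __init__(self, kind: str, location: str):
        self.kind, self.location = kind, location
        self.installed = []

    def install(self, policy: SegmentationPolicy) -> None:
        # In a real fabric this would be an API or agent call; here we just record it.
        self.installed.append(policy)
        print(f"[{self.kind:>10} @ {self.location}] installed policy '{policy.name}'")

def apply_everywhere(policy: SegmentationPolicy, fabric: list) -> None:
    """The operator issues one intent; the controller fans it out to every location."""
    for point in fabric:
        point.install(policy)

if __name__ == "__main__":
    fabric = [
        EnforcementPoint("switch", "dc1-leaf-01"),
        EnforcementPoint("dpu", "dc1-server-17"),
        EnforcementPoint("dpu", "edge-site-3-server-02"),
        EnforcementPoint("cloud-vpc", "public-cloud-region-a"),
    ]
    web_policy = SegmentationPolicy(
        name="web-tier-isolation", app="storefront",
        allow_ports=(443,), default_action="deny",
    )
    apply_everywhere(web_policy, fabric)
```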
And whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. That's ultimately where we want to get, and that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our Adaptive Cloud Fabric. What's nice about this is that we're not starting from scratch. We have an award-winning Adaptive Cloud Fabric product that is deployed globally, and in particular we're very proud of the fact that it's deployed in over a hundred tier-one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now, to realize this next-generation unified cloud fabric, is extending from the switch to this Nvidia Bluefield-2 DPU. We know there's a... >>Hold that up real quick. That's a good prop. That's the Nvidia Bluefield. >>It's the Nvidia Bluefield-2 DPU, data processing unit. And what we're doing, fundamentally, is extending our SDN-automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPU cycles from the application. A DPU is a perfect way to implement this, and we knew that Nvidia was the leader with this Bluefield-2. So that's the first step in realizing this vision. >>Nvidia has always been powering some great workloads with the GPU. Now you've got the DPU and networking, and Nvidia is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >>Yeah. We've been working with Pluribus for quite some time, and I think the last several months were really when it came to fruition, between what Pluribus is trying to build and what Nvidia has. We have this concept of a Bluefield data processing unit, which conceptually does three things: offload, accelerate, and isolate. Offload your infrastructure workloads from your CPU to your data processing unit. Accelerate: there's a bunch of acceleration engines, so you can run infrastructure workloads much faster than you would otherwise. And then isolation: you have this nice security isolation between the data processing unit and your other CPU environment, so you can run completely isolated workloads directly on the data processing unit. We introduced this a couple of years ago, and we've been talking to the Pluribus team for quite some months now. And I think the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric, fits really nicely with the DPU: running that on the DPU and extending it from your physical switch all the way to your host environment, specifically on the data processing unit. If you think about what's happening as you add data processing units to your environment, we believe every server, over time, is going to have data processing units, so now you'll have to manage that complexity from the physical network layer to the host layer.
And so what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. >>So that's really how the partnership truly started. It started with extending the network fabric, and now we're also working with them on security. If you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation, and now you can take that, extend it to the data processing unit, and really have isolated, micro-segmented workloads, whether it's bare metal, cloud-native environments, virtualized environments, public cloud, private cloud, or hybrid cloud. So it really is a magical partnership between the two companies, with their unified cloud fabric running on the DPU. >>You know, what I love about this conversation is that it reminds me of when markets change, the product gets pulled out of the market, and you guys step up and create these new solutions. I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart, and what's in it for the customer? >>Yeah. So I mentioned three things in terms of the value that Bluefield brings: offloading, accelerating, and isolating. Those are the key core tenets of Bluefield. In terms of the differentiation, we're really a robust platform for innovation. We introduced Bluefield-2 last year, and we're introducing Bluefield-3, which is our next generation of Bluefield. It will have five times the Arm compute capacity, 400-gig line-rate acceleration, and four times better crypto acceleration, so it will be remarkably better than the previous generation. And we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is, if you look at Nvidia, what we're really known for is artificial intelligence, our AI software, as well as our GPU. >>So you look at the combination of artificial intelligence plus data processing: this really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. With Nvidia, we have these converged accelerators where we've combined the GPU, which does all your AI processing, with your data processing on the DPU, so we have a really nice convergence in that area. And I would say the third area is really around our developer environment. One of our key motivations at Nvidia is to have our partner ecosystem embrace our technology and build solutions around it. So with the DPU we've created an SDK, an open SDK called DOCA, for our partners to really build and develop solutions using Bluefield and all of these accelerated libraries that we expose through DOCA.
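Ami's DOCA point, that partners write to one SDK surface and stay compatible as DPU generations change, is at heart an abstraction-layer pattern. The sketch below shows that pattern only; the class and method names are invented for illustration and are not the real DOCA APIs.

```python
# Illustrative abstraction-layer sketch. These names are invented for the example;
# they are NOT the real DOCA API. The point is the pattern described above: an
# application codes against one stable interface, and hardware generations slot in underneath.
from abc import ABC, abstractmethod

class FlowAccelerator(ABC):
    """Stable interface an application writes against, once."""
    @abstractmethod
    def offload_flow(self, src: str, dst: str, action: str) -> str: ...

class Gen2Backend(FlowAccelerator):
    def offload_flow(self, src, dst, action):
        # Imagine this driving a second-generation DPU's flow tables.
        return f"gen2: {action} {src}->{dst} programmed in hardware"

class Gen3Backend(FlowAccelerator):
    def offload_flow(self, src, dst, action):
        # A newer card with more capacity; the caller's code is unchanged.
        return f"gen3: {action} {src}->{dst} programmed in hardware (faster path)"

def pick_backend(detected_generation: int) -> FlowAccelerator:
    return Gen3Backend() if detected_generation >= 3 else Gen2Backend()

if __name__ == "__main__":
    # The application code below never changes when the DPU generation does.
    for generation in (2, 3):
        accel = pick_backend(detected_generation=generation)
        print(accel.offload_flow("10.0.1.5", "10.0.2.9", "allow"))
```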
And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. >>You know, what's exciting when I hear you talk is you realize there's no one general-purpose network anymore. Everyone has their own super environment, a Supercloud, or these new capabilities; they can really craft their own custom environment at scale with easy tools. And again, this is the new architecture, Mike, that you were talking about. How do customers run this effectively and cost-effectively, and how do people migrate? >>Yeah, I think that is the key question, right? So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises, tier-two service providers, tier-one service providers, and governments are not Amazon, right? So they need to migrate there, and they need this architecture to be cost-effective, and that's super key. The reality is that DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away, some servers will have DPUs in a year or two, and then there are devices that may never have DPUs: IoT gateways, legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU. >>And by leveraging the Nvidia Bluefield DPU, what we really like about it is that it's open, and that drives cost efficiencies. Then, with our architectural approach, you effectively get a unified solution across switch and DPU, workload-independent, no matter what hypervisor it is, with integrated visibility and integrated security. That can create tremendous cost efficiencies and really extract a lot of the expense from the network, from a capital perspective as well as from an operational perspective, because now I have an SDN-automated solution where I'm literally issuing a command to deploy a network service or a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time. >>All right, so let me rewind that, because that's super important. I've got the unified cloud architecture, I'm the customer, and it's implemented. What's the value? Take me through the value to me. I have a unified environment: what's the value? >>Yeah. So there are a few pieces of value. The first piece of value is that I'm creating this clean demarcation. I'm taking networking to the host, and like I mentioned, we're not running it on the CPU. In implementations that run networking on the CPU, there's some conflict between the DevOps team, who owns the server, and the NetOps team, who owns the network, because they're installing software on the CPU, stealing cycles from what should be revenue-generating CPUs. So now, by terminating the networking on the DPU, we create this really clean demarcation. The DevOps folks are happy because they don't necessarily have the skills to manage networking, and they don't necessarily want to spend the time managing it. And their network counterparts, the NetOps team, are also happy because they want to control the networking.
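One way to picture the demarcation Mike is describing: the network team publishes a small set of guardrails that the fabric enforces at the DPU, and an application deployment either fits inside them or it doesn't, without developers ever touching network configuration. Everything below, the policy fields, values, and checks, is a made-up illustration rather than any vendor's configuration model.

```python
# Toy illustration of the DevOps/NetOps demarcation. All policy fields and
# values are invented; this is not any vendor's configuration model.

NETOPS_GUARDRAILS = {
    "allowed_vlans": {100, 200, 300},          # the network team decides what exists
    "allowed_external_ports": {443},           # only TLS is exposed outside the segment
    "require_encryption": True,
}

def validate_deployment(request: dict) -> list:
    """DevOps submits an app deployment; the fabric checks it against the guardrails."""
    problems = []
    if request["vlan"] not in NETOPS_GUARDRAILS["allowed_vlans"]:
        problems.append(f"VLAN {request['vlan']} is not provisioned by NetOps")
    bad_ports = set(request["external_ports"]) - NETOPS_GUARDRAILS["allowed_external_ports"]
    if bad_ports:
        problems.append(f"ports {sorted(bad_ports)} not allowed outside the segment")
    if NETOPS_GUARDRAILS["require_encryption"] and not request["encrypted"]:
        problems.append("encryption in transit is required")
    return problems

if __name__ == "__main__":
    app = {"name": "billing-api", "vlan": 200, "external_ports": [443, 8080], "encrypted": True}
    issues = validate_deployment(app)
    print("deploy" if not issues else f"rejected: {issues}")
```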
>>And now we've got this clean demarcation where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential. I mentioned earlier pushing out micro-segmentation and distributed firewalls, basically at the application level, where I create these small segments on a per-application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Because the worst thing is a bad actor who penetrates a perimeter firewall and can go wherever they want and wreak havoc, right? That's why this is so essential. And the next benefit, obviously, is this unified networking operating model: having an operating model across switch and server, underlay and overlay, workload-agnostic, making the life of the NetOps teams much easier so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example. >>Awesome. And I think also, from my standpoint, the perimeter firewall is still out there, it exists, but the perimeter is being breached all the time, so you have to have this new security model. And I think the other thing you mentioned, the separation between DevOps and NetOps, is cool, because infrastructure as code is about making the developers agile and building security in from day one. So this policy aspect is huge: new control points. I think you guys have a new architecture that enables security to be handled more flexibly. >>Right. >>That seems to be the killer feature here. >>Right. Yeah, if you look at the data processing unit, I think one of the great things about this new architecture is that it's really the foundation for zero trust. Like you talked about, the perimeter is getting breached, so now each and every compute node has to be protected. And I think that's what you see with the partnership between Pluribus and Nvidia: the DPU is really the foundation of zero trust, and Pluribus is building on that vision by enabling micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving: architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Mike, start with you: what does the go-to-market with Nvidia look like? >>Sure. I mean, we're super excited about the partnership; obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. Obviously we appreciate that Nvidia is open; that's in our DNA, we're about open networking. They've got other ISVs who are going to run on Bluefield-2, and we'll probably run on other DPUs in the future, but right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it. >>And you've got the hot product. >>Yeah. So Bluefield-2, as I mentioned, was GA last year, and we now also have the converged accelerator.
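Mike's containment point above, that a bad actor who gets past the perimeter should not be able to move laterally, is what per-application segments enforced at every host give you. Here is a toy model of that check; the segment names and rules are invented for illustration and are not Pluribus's policy language.

```python
# Toy micro-segmentation check: every host (or its DPU) evaluates east-west
# traffic against per-application segments, so a compromised workload is boxed in.
# Segment names and rules are invented for illustration.

SEGMENTS = {
    "web": {"members": {"web-1", "web-2"}, "may_reach": {"app"}},
    "app": {"members": {"app-1", "app-2"}, "may_reach": {"db"}},
    "db":  {"members": {"db-1"},           "may_reach": set()},
}

def segment_of(workload: str) -> str:
    for name, seg in SEGMENTS.items():
        if workload in seg["members"]:
            return name
    return "unknown"

def allowed(src: str, dst: str) -> bool:
    s, d = segment_of(src), segment_of(dst)
    return s == d or d in SEGMENTS.get(s, {"may_reach": set()})["may_reach"]

if __name__ == "__main__":
    # A compromised web server can still talk to the app tier it legitimately uses...
    print(allowed("web-1", "app-1"))   # True
    # ...but it cannot jump straight to the database, so lateral movement is blocked at the host.
    print(allowed("web-1", "db-1"))    # False
```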
So I talked about combining artificial intelligence with the Bluefield DPU; the converged accelerator puts all of that together on one card. The nice thing there is that if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the Bluefield itself. That's what the converged accelerator brings to the table, and it's available now. Then we have Bluefield-3, which will be available late this year, and I talked about how much better that next generation of Bluefield is compared to Bluefield-2, so we'll see Bluefield-3 shipping later this year. And then there's our software stack, which I talked about, called DOCA. We're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 in about two months. That's really our open ecosystem framework that allows you to program the Bluefields. We have all of our acceleration libraries and security libraries packed into this SDK called DOCA, and it really gives our partners the simplicity to develop on top of Bluefield. So as we add new generations of Bluefield, next year we'll have another version and so on, DOCA is really that unified layer that allows Bluefield to be both forwards compatible and backwards compatible. Partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefield. So that's the nice thing around DOCA. And then in terms of our go-to-market model, we're working with every major OEM, so later this year you'll see major server manufacturers releasing Bluefield-enabled servers. So, more to come. >>Awesome. Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >>Yeah, and one thing I'll add is that we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us in our early field trial starting late April or early May. We are accepting registrations: you can go to www.pluribusnetworks.com/eft if you're interested in signing up and providing feedback on the product. >>Awesome. Innovation in networking. Thanks so much for sharing the news, really appreciate it. Thanks so much. Okay, in a moment we'll be back to look deeper at the product, the integration, security, zero trust, and the use cases. You're watching theCUBE, the leader in enterprise tech coverage. >>Cloud networking is complex and fragmented, slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >>Pluribus unified cloud networking provides a unified, simplified, and agile network fabric across all clouds. It brings the simplicity of a public cloud operating model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business velocity and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking: the Pluribus unified cloud fabric.
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks, and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately, the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The unified cloud fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, and pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >>To learn more, visit www.pluribusnetworks.com. >>Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and Nvidia. We'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lummus, director of technical marketing at Nvidia, joining remotely. Guys, thanks for coming on. Appreciate it. >>Yeah. >>So, deep dive. Let's get into the what and how, Alessandro. We heard earlier about the Pluribus and Nvidia partnership and the solution you're working on together. What is it? >>Yeah. First let's talk about the what. What are we really integrating with the Nvidia Bluefield DPU technology? Pluribus has been shipping its Netvisor ONE network operating system in volume in multiple mission-critical networks. It runs today on merchant silicon switches, and effectively it's a standard, open network operating system for the data center. The novelty of this system is that it integrates a distributed control plane, effectively providing an SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it is not a closed system. And this is actually what we're now porting to the Nvidia DPU. >>Awesome. So how does it integrate into Nvidia hardware? Specifically, how is Pluribus integrating its software with the Nvidia hardware? >>Yeah, we leverage some of the interesting properties of the Bluefield DPU hardware, which allow us to integrate our software, our network operating system, in a manner that is completely isolated and independent from the guest operating system. The first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also manage this network node, effectively a switch on a NIC, completely independently from the host.
You don't have to go through a network operating system running on x86 to control this network node. So you effectively get the experience of a top-of-rack switch for virtual machines, or a top-of-rack switch for Kubernetes pods, where, if you'll allow me the analogy, instead of connecting a server NIC directly to a switch port, you're now connecting a VM's virtual interface to a virtual interface on the switch on a NIC. >>And also, as part of this integration, we put a lot of effort and emphasis into accelerating the entire data plane for networking and security. We're taking advantage of the Nvidia DOCA API to program the accelerators, and this accomplishes two things. Number one, you get much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, which can be devoted either to additional workloads to run your cloud applications, or you can actually shrink the power and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So, great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code running on the x86, and this is what we think enables a very clean demarcation between compute and network. >>So Pete, I've got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps right now, and you've got NetOps and NetSecOps; why is this clean separation important? >>Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all rainbows and unicorns, but it's a little messier than that. With a lot of the DevOps mentality and philosophy, there's a natural fit: you have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and network operators have always had a very different approach to things than compute operators. I think we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and that distance isn't going to be closed. So again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is. >>Yeah, that's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think that's where, one, from the policy and security side, the zero trust aspect of this comes in: if you get it wrong on the network side, all of a sudden you can totally open up those capabilities. And so security is part of that.
But the other part is thinking about this at scale, right? We're taking one top-of-rack switch and adding up to 48 servers per rack, so the ability to automate, orchestrate, and manage at scale becomes absolutely critical. >>Alessandro, this is really the why we're talking about here, and this is scale. And again, it's about getting it right: if you don't get it right, you're going to be in real trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps and NetOps all coming together with a clean separation. Help us understand how this joint solution with Nvidia fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. If we're unifying something, that something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, your switches and routers. You build your IP Clos fabrics, your leaf-and-spine topologies. This is actually a well-understood problem; there are multiple vendors with similar technologies, very well standardized, well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer and deploy segmentation and security closer to the workloads. >>And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other and that are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs in an ESXi environment, a Hyper-V environment, or a Xen environment are completely disjointed. You have multiple orchestration layers. And then, when you throw Kubernetes into this type of architecture, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually stacking up multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively; they operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload: this fabric spans from a switch, which can be connected to a bare-metal workload, all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network.
That's probably the number one thing. >>You know, it's interesting as I hear you talking: one network, different operating models. It reminds me of the old serverless days; there are still servers, but they call it serverless. Is there going to be a term, networkless? Because at the end of the day, it should be one network, not multiple operating models. This is the problem you guys are working on, is that right? I'm just joking about serverless and networkless, but the idea is that it should be one thing. >>Yeah, effectively what we're trying to do is recompose this fragmentation in network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols, you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield DPU technology, and we can actually integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people segment and secure traffic in the cloud today. >>Awesome. Pete, all kidding aside about networkless and serverless, kind of a fun play on words there, the network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU-based approach is better than the alternatives? >>Yeah, I think what's beautiful, and what's new in this model that the DPU brings, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like servers, so I put a server inside your server." We provide Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control-plane element; it's like taking your top-of-rack switch and shoving it inside your compute node. So you have not only the separation in the data plane, but complete control-plane separation. You have this element that the network team can now control and manage, and we're taking all of the functions we used to do at the top-of-rack switch and pushing them down there. >>And as time has gone on, we've struggled to put more and more into that network edge. The reality is that the network edge is the compute layer, not the top-of-rack switch layer, and so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances.
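Pete's scaling concern is easy to put rough numbers on. Everything below, the rack count, NIC speed, utilization, and east-west share, is an invented illustration rather than a measurement of any product or data center.

```python
# Back-of-the-envelope: how much east-west traffic would a centralized appliance
# have to inspect, versus enforcing at each host? All inputs are illustrative.

racks            = 10
servers_per_rack = 48          # the per-rack figure mentioned above
nic_gbps         = 2 * 25      # assume dual 25GbE per server
utilization      = 0.30        # assume ~30% average link utilization
east_west_share  = 0.80        # assume most traffic stays inside the data center

servers = racks * servers_per_rack
per_host_gbps = nic_gbps * utilization * east_west_share
total_east_west_gbps = servers * per_host_gbps

print(f"servers: {servers}")
print(f"aggregate east-west traffic a central appliance must see: ~{total_east_west_gbps:,.0f} Gb/s")
print(f"traffic each host-level enforcement point sees: ~{per_host_gbps:.0f} Gb/s")
```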
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that the VLAN is good enough, or we hope that the VXLAN tunnel is good enough, when we could actually apply more advanced techniques, because we can't physically or financially afford that appliance to see all of the traffic. Now that we have a distributed model with this accelerator, we can do it. >>So what's in it for the customer? Real quick, because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it being talked about up the stack as well, when you start orchestrating microservices, containers, and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale and more edge-deployment flexibility relative to security policies and application enablement. Is that what the customer gets out of this architecture? What's the enablement? >>It comes down to taking, again, the capabilities that were in that top-of-rack switch and pushing them down. That brings simplicity, smaller blast radii for failures, and smaller failure domains; maintenance on the networks and the systems becomes easier; your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each one of those layers. So just as, in a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer, and you can run a DPU with any networking in the core. So you get this extreme flexibility. You can start small, you can scale large. To me, the possibilities are endless. >>Yes, it's a great security control plane. Really, flexibility is key, and so is being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is a huge upside, right? You've already identified some successes with customers in your early field trials. What are they doing, and why are they attracted to the solution? >>Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have learned a lot in the process. We talked to tier-two and tier-three cloud providers, to service providers with software-centric, telco-type networks, as well as to large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a cloud provider in Asia who is managing a cloud where they offer services based on multiple hypervisors. Their native services are based on Xen, but they also onboard workloads into the cloud based on ESXi and KVM, depending on what the customer picks from the menu. >>And they have the problem of orchestrating, through their orchestrator, and integrating with XenCenter, with vSphere, and with OpenStack to coordinate these multiple environments. In the process, to provide security, they deploy virtual appliances everywhere, which adds a lot of cost and complication and eats into the server CPU.
What they saw in this technology, which they actually call game changing, is the ability to remove all this complexity, have a single network, and distribute the micro-segmentation service directly into the fabric. Overall, they're hoping to get a tremendous OpEx benefit and an overall operational simplification of the cloud infrastructure. That's one potent use case. Another customer, a large global enterprise, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. >>So again, micro-segmentation is a huge driver; security looks like a recurring theme in talking to most of these customers. And in the telco space, we're working with a few customers on the early field trial program, where the main goal is to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex, and frankly it's also slow and inefficient, and then they have a physical network to manage on top of it. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples. >>That was a great use case, and there's a lot more potential. I can see that with unified cloud networking. Great stuff. Pete, shout out to you guys at Nvidia; we've been following your success for a long time as you continue to innovate as cloud scales, and Pluribus here with the unified networking is kind of bringing it to the next level. Great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem, trying to think about multiple clouds, about unification around the network, and about giving more security and more flexibility to their teams. How can people learn more? >>Yeah, so Alessandro and I have a talk at the upcoming Nvidia GTC conference, the week of March 21st through the 24th. You can go to nvidia.com and register for GTC for free, and you can also watch the recorded sessions on YouTube a little bit after the fact. We're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>Alessandro, how can people learn more? >>Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form to learn more or to sign up for the early field trial program, which starts at the end of April. >>Okay, well, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching. >>Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure.
Now let's get the perspective from an independent analyst, and to do so we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >>Oh, thanks for having me. It's great to be here. >>Yeah. So this idea of a unified cloud networking approach, how serious is it? What's driving it? >>Yeah, there are certainly a lot of drivers behind it, but probably the first and foremost is the fact that application environments are becoming a lot more distributed. The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. Applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations, and as a result, what you're seeing is a lot of complexity. Organizations are having to deal with this highly disparate environment; they have to secure it, they have to ensure connectivity to it, and all of that is driving up complexity. In fact, when we asked about network complexity in one of our surveys last year, more than half, 54%, came out and said, hey, our network environment is now either more or significantly more complex than it used to be. >>And as a result, what you're seeing is that it's really impacting agility. Everyone is moving to these modern application environments and distributing them across locations so they can improve agility, yet it's creating more complexity, so it runs a bit counter to their overarching digital transformation initiatives. From what we've seen, nine out of ten organizations today are either beginning, in process, or have a mature digital transformation initiative, and their top goals probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity, so now I need the tools that are going to help me drive operational efficiency and a better experience. >>I love how you bring in the data; ESG does a great job with that. The question is, is it about just unifying existing networks, or is there a need to rethink, to kind of do over, how networks are built? >>Yeah, that's a really good point, because certainly unifying networks helps, and driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and because of the impact that's having, it's really about bringing in new frameworks and new network architectures to accommodate those new application architectures. By that, what I'm talking about is the fact that these modern application architectures, microservices and containers, are driving a lot more east-west traffic. In the old days it used to be easier: north-south traffic coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other and users communicating to them, so there's a lot more traffic, and a lot of it is taking place within the servers themselves.
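To get a feel for why east-west traffic comes to dominate as applications are decomposed, it helps to count the possible service-to-service conversations. The service counts below are illustrative only, not ESG survey data.

```python
# Toy count of potential east-west communication pairs as an app is decomposed.
# Service counts are illustrative, not survey data.

def pairs(n: int) -> int:
    return n * (n - 1) // 2   # unordered service-to-service pairs

for label, services in [("monolith + database", 2), ("classic 3-tier app", 3),
                        ("modest microservices", 40), ("large microservices", 300)]:
    print(f"{label:>22}: {services:>4} services -> {pairs(services):>6} possible east-west pairs")
```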
The other issue you're starting to see, from a security perspective, is that when we were all consolidated, we had those perimeter-based, legacy, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle. >>When everything's spread out, that no longer holds. So we're absolutely seeing organizations trying to make a shift, and much like the shift we're seeing with all the remote workers and the SASE framework to enable a secure framework there, it's almost the same thing: we're seeing this distributed services framework come up to support the applications better within the data centers and within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. That's really driving a lot of the zero trust stuff you hear: never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area, ensuring that these applications, when they're connected to each other, are fully segmented out. And that's, again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done, and by doing that you make it a lot harder for them to see everything that's in there. >>You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy: you build a moat to protect the queen and the castle, but the queen has left the castle; it's all distributed. So how should we think about this Pluribus and Nvidia solution? There's a spectrum: you've got appliances, you've got pure software solutions, and you've got what Pluribus is doing with Nvidia. Help us understand that. >>Yeah, absolutely. As organizations recognize the need to distribute their services closer to the applications, they're trying different models. From a legacy approach, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part there is that if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. With the need for agility and performance, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections and more and more appliances, so it can get very costly, as well as impacting performance. The other way organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. That's a great approach in that it brings it really close to the applications, but there are a couple of things you start running into. One is that the DevOps teams start taking on that networking and security responsibility, which they >>don't want to do. >>They don't want to do it, right. And the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform in an optimized state, having additional software on there isn't going to do it.
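The CPU tax Bob is describing is the same one quoted earlier in the program, Wikibon's roughly 30% of cores and Alessandro's 20 to 25% of server capacity. A rough sketch with an invented fleet shows why it shows up on the bottom line; every input below is an assumption for illustration, not a sourced figure.

```python
# Back-of-the-envelope on CPU spent on networking/storage/security offload.
# Fleet size, core counts, cost, and the offload fraction are illustrative assumptions.

servers          = 500
cores_per_server = 64
offload_fraction = 0.25      # within the 20-30% range discussed in the program
cost_per_server  = 12_000    # illustrative fully loaded cost in dollars

cores_consumed = servers * cores_per_server * offload_fraction
servers_worth  = cores_consumed / cores_per_server
dollars        = servers_worth * cost_per_server

print(f"cores doing infrastructure offload: {cores_consumed:,.0f}")
print(f"equivalent whole servers: {servers_worth:,.0f}")
print(f"capital effectively spent on offload: ${dollars:,.0f}")
```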
So when we think about all those types of things, certainly one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being consumed, and you have hundreds or thousands of servers, those costs are going to add up. So what Nvidia and Pluribus have done by working together is take some of those services and deploy them onto a SmartNIC, deploying the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to create that unified fabric across the networking space and extend those networking services all the way down to the server. The benefits of having that are pretty clear: you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money; you're not having to go outside of the server to a different rack somewhere else in the data center, so your performance is optimized as well, and you're not going to incur a latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus, from an organizational aspect, we talked about the DevOps and NetOps teams: the network operations teams can now work with the security teams to establish the security policies and the networking policies, so the DevOps teams don't have to worry about that. Essentially, they just create the guardrails and let the DevOps team run, because that's what they want: agility and speed. >>Yeah, your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload or networking or security offload. And I've said many times that everybody needs a Nitro, like Amazon's got, but you can only get Amazon Nitro if you go into AWS. Everybody needs a Nitro. So is that how we should think about this? >>Yeah, that's a great analogy, and I would take it a step further, because it's almost the opposite end of the spectrum: Pluribus and Nvidia are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services. Working with Nvidia, which is also open, they're able to bring that to bear so that organizations can take advantage not only of these distributed services but also of that unified networking fabric, that unified cloud fabric, across the environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments: bare metal, any type of virtualization, containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >>So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >>Yeah.
Well, think about what it does, again, from that operational efficiency standpoint. When you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration, because you're not switching between different management systems to accomplish it. You've got the same unified networking fabric that you've been working with, enabling you to run your legacy environment as well as transfer over to those modern applications. >>Okay, so your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >>So yeah, I think that with all the modern applications coming out and these distributed application environments, it's really posing a lot of risk for organizations trying to get not only security but also visibility into those environments. So organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency, and getting operational efficiency from a unified cloud networking solution that goes from the server, across the switches, to multiple different environments, including different cloud environments, is certainly going to help organizations drive that efficiency. It's going to help them save money, and it delivers visibility, security, and even open networking. So it's a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >>Bob, thanks so much for coming in and sharing your insights. Appreciate it. >>You're welcome. Thanks. >>Thanks for watching the program today. Remember, all of these videos are available on demand at theCUBE.net. You can check out all the news from today at siliconangle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.
SUMMARY :
Pluribus Networks and Nvidia have partnered to deliver unified cloud networking by running distributed networking, security, and visibility services on Nvidia's Bluefield DPU inside the server, all managed as a single unified cloud fabric that extends from the switch down to the host and across multiple data center and cloud environments. The program walks through the architecture with executives from both companies: why terminating networking on the DPU creates a new security model and a clean demarcation between compute and network, how offloading services can free up roughly 20 percent of server CPU cores, and how the approach supports bare metal, virtualized, and containerized workloads without ripping and replacing existing infrastructure. Early field trial customers report simpler operations from managing one network instead of several, and viewers are pointed to www.pluribusnetworks.com to learn more or to sign up for the early field trial program, with launch activities during the week of March 21st. The program closes with an independent analyst perspective from the Enterprise Strategy Group on how unified cloud networking drives operational efficiency, visibility, and security for large enterprises and cloud providers building hyperscaler-like environments.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Donnie | PERSON | 0.99+ |
Bob Liberte | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Alessandra Burberry | PERSON | 0.99+ |
Sandra | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Pete Bloomberg | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Alexandra | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Pete Lummus | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Bob LA Liberte | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
John | PERSON | 0.99+ |
ESG | ORGANIZATION | 0.99+ |
Bob | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
25 | QUANTITY | 0.99+ |
Alessandra Bobby | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Bluefield | ORGANIZATION | 0.99+ |
NetApps | ORGANIZATION | 0.99+ |
demand@thekey.net | OTHER | 0.99+ |
20% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
a year | QUANTITY | 0.99+ |
March 21st | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
www.pluribusnetworks.com/e | OTHER | 0.99+ |
Tyco | ORGANIZATION | 0.99+ |
late April | DATE | 0.99+ |
Doka | TITLE | 0.99+ |
400 gig | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
second version | QUANTITY | 0.99+ |
two services | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
third area | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
second aspect | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Each | QUANTITY | 0.99+ |
www.pluribusnetworks.com | OTHER | 0.99+ |
Pete | PERSON | 0.99+ |
last year | DATE | 0.99+ |
one application | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Vasanth Kumar, MongoDB Principal Solutions Architect | Io-Tahoe Episode 7
>> Okay. We're here with Vasanth Kumar, who's the Principal Solutions Architect for MongoDB. Vasanth, welcome to theCUBE. >> Thanks, Dave. >> Hey, listen, I feel like you were born to be an architect in technology. I mean, you've worked for big SIs, you've worked with many customers, you have experience in financial services and banking. Tell us, the audience, a little bit more about yourself and what you're up to these days. >> Yeah. Hi, thanks for inviting me for this discussion. I'm based out of Bangalore, India, with around 18 years of experience in the IT industry, building enterprise products for different domains and verticals: finance, where I built enterprise banking applications, IoT platforms, digital experience solutions. I've been with MongoDB nearly two years now, working in the partner team as a principal solutions architect, especially working with ISVs to build best practices for handling data and embedding the right database as part of their product. I've also worked with technology partners to integrate compatible technologies with MongoDB, and with private cloud providers to provide database as a service. >> Got it. So, you know, I have to say, Vasanth, I think Mongo kind of nailed it. They were early on with the trends of managing unstructured data and making it really simple. There was always a developer appeal, which has lasted, and they did it with an architecture that scales out. Back in the early days when Mongo was founded, I remember those days, digital transformation wasn't a thing, it wasn't a buzzword, but it just so happens that Mongo's approach dovetails very nicely with a digital business. So I wonder if you could talk about that: the fit, how MongoDB thinks about accelerating digital transformation, and why you're different from a traditional RDBMS. >> Sure, exactly, yeah. You have the right understanding; let me elaborate on it. We all know that customer expectations change day by day, because business agility and functionality change, along with how people want to experience applications and apps. And obviously this drives the need for agility in the information that moves between multiple systems and layers. To achieve this, the way of architecting or developing the product has taken a completely different shift: moving from monoliths to microservices or event-based architectures and so on. And obviously the database has to fit these environments, to adapt to these changes and to the scale of load, among other things. We also see that the common protocol for information exchange is JSON, and a database that adopts it natively is a perfect fit. That's where MongoDB fits perfectly for building or transforming modern applications, because it's a general-purpose database that accepts JSON as a payload and stores it in BSON format. Suppose you want to develop a new application or transform an existing one; typically customers look at the effort required, the cost involved, and how quickly they can do it, without disturbing the functionality. Since MongoDB is a multi-model database that takes JSON natively, you can build an application easily; you don't need a lot of transformation.
In the case of an RDBMS, by contrast, you get the JSON payload, you transform it into a tabular structure or a different format, then you probably build an ORM layer, map it, and save it. There's a lot of work involved and a lot of components that need to be written in between. But in the case of MongoDB, you get the information from multiple sources and you can put it in the database as is, or transform it based on the access patterns, and then store it quickly. >> Dave: Got it. >> And I'll tell you, Dave: today you might have context data with a selected set of information, and tomorrow that particular customer has more information to add. How do you capture that? In the case of an RDBMS, you need to change the schema, and once you change the schema, your application breaks. But here it just adapts: you pass the extra information, it's open for extension, and it's absorbed easily. You don't need to redeploy or change the schema or do anything like that.
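To make the flexible-schema point above concrete, here is a minimal sketch using PyMongo, the MongoDB Python driver. The connection string, collection, and field names are illustrative assumptions; the point is simply that a later document can carry extra fields without a schema migration or a redeployment.

```python
from pymongo import MongoClient

# Hypothetical connection string; an Atlas SRV URI would work the same way.
client = MongoClient("mongodb://localhost:27017")
customers = client["demo"]["customers"]

# Day one: store the JSON payload as-is, with no table design or ORM layer.
customers.insert_one({"name": "Acme Corp", "tier": "gold"})

# Later: the same collection accepts documents that carry additional context.
# No ALTER TABLE, no schema change, no application redeployment.
customers.insert_one({
    "name": "Globex",
    "tier": "silver",
    "preferences": {"channel": "email", "frequency": "weekly"},  # new nested field
    "tags": ["apac", "manufacturing"],                           # new array field
})

# Existing queries keep working; the new fields are simply there when present.
for doc in customers.find({"tier": "gold"}):
    print(doc)
```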
>> Right. That's the genius of Mongo. And then of course, in the early days, people would say, oh, Mongo won't scale. And then of course came the cloud, and I follow Atlas very closely; I look at the numbers every quarter. I mean, overall cloud adoption is increasing like crazy. Our Wikibon analyst team has the big four cloud vendors, just in IaaS, growing beyond 115 billion dollars this year; that's 35% growth on top of 80 to 90 billion last year. So talk more about how MongoDB fits with the cloud and how it helps with the whole migration story, 'cause you're killing it in that space. >> Yeah, sure. Just to add one more point on the previous question: for the past four to five years, continuously, we have been number one as the most wanted database. >> Dave: Right. >> Okay, so that's how the popularity has grown; that's how the adoption has happened. Coming back to your question... >> Yeah, let's talk about the cloud and database as a service. You guys have actually packaged that very nicely, I have to say. >> Yeah. So we have spent a lot of effort and time developing Atlas, our managed database as a service, which lets the customer concentrate on their application rather than maintaining and managing the database or figuring out how to scale the infrastructure. All of that work is taken care of; you don't need to be a database expert when you're using Atlas. We provide the managed database on the three major cloud providers, AWS, GCP, and Azure, and it's truly multicloud: you can have a primary in AWS and replicated nodes in GCP or Azure. So you don't have cloud lock-in. If you decide that the right thing for your business is to move to GCP, you don't need to worry; you can easily migrate to GCP. No vendor lock-in, no cloud lock-in. >> So Vasanth, maybe you could talk a little bit more about Atlas and some of the differentiated features, things you can do with Atlas that maybe people don't know about. >> Yeah, sure, Dave. Atlas is not just a managed database as a service; it's a complete data platform, and it provides many features. For example, you build an application, and maybe three years down the line the data you captured three years back has become old data. How do you handle it? There's no need for you to manually purge it or do anything: we have an online archival capability where you configure a rule, say that data older than two years should be archived, and it's taken care of automatically. So you keep the hot data in the Atlas cluster, and the cold data is moved off to an archive. We also have a data lake where you can run federated queries. For example, you've archived the data, but what if people still want to access it? With the data lake, on a single connection, you can run federated queries across both the active and the archived data. That's the beauty of it: you archive the data, but you can still query it. We also have Charts, where you can build visualizations on top of the data you've captured. You can build graphs and embed those graphs as part of your application, or share them with customers, CXOs, and other teams. >> Dave: Got it. >> It's a complete data platform. >> Okay. Well, speaking of data platform, let's talk about Io-Tahoe's data RPA platform and coupling that with MongoDB. Maybe you could help us understand how you're helping with process automation, which is a very hot topic, and this whole notion of modern application development. >> Sure. See, the process automation here is more with respect to the data: how you manage this data, what you derive from it, and the business process you build on top of it. I see two parts to it. One is the source of data: how do you identify and discover the data, how do you enrich the context or transform it and give it a business context, and then you build business rules or act on it, and then you store the data or derive insights, enrich it, and store it in the database. The first part is handled completely by Io-Tahoe, where you can tag the data across multiple data sources. For example, if we take a customer 360 view, you can grab the data from multiple data sources using Io-Tahoe, discover this data, tag it, label it, and build a view of the complete customer context, then use a Realm webhook, and the data is ingested back into Mongo. That's all done in more of a serverless fashion; you can build this particular customer 360 view, for example. And just to expand on the Realm piece I mentioned with the Realm webhook: Realm is a backend API layer that you can create on top of the data in a Mongo cluster, which is available in Atlas. Once you run it, the APIs are ready. You build it as data as a service, with fully secured APIs available, and these APIs can be integrated into a mobile app or a web application to build a modern application. What's left is just to build the UI artifacts and integrate these APIs.
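Picking up the federated-query capability described above: from the application's side, a federated endpoint that spans the live cluster and the online archive behaves like any other MongoDB connection, so the driver code does not change. A hedged sketch, with a hypothetical connection string and made-up database, collection, and field names:

```python
from pymongo import MongoClient

# Hypothetical Atlas Data Federation URI; the real one comes from the Atlas UI
# once a federated database instance (live cluster + online archive) is configured.
FEDERATED_URI = "mongodb://federated-instance-0.example.mongodb.net/?tls=true"

client = MongoClient(FEDERATED_URI)
orders = client["sales"]["orders"]  # virtual collection spanning hot and archived data

# One connection, one query: recent documents come from the live cluster,
# older ones from the archive, merged transparently by the federation layer.
pipeline = [
    {"$match": {"region": "EMEA"}},
    {"$group": {"_id": {"$year": "$created_at"}, "total": {"$sum": "$amount"}}},
    {"$sort": {"_id": 1}},
]
for row in orders.aggregate(pipeline):
    print(row)
```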
>> Yeah, I mean, we live in this API economy. Companies throw that out as sort of a buzz phrase, but Mongo lives it; that's why developers really like Mongo. So what's your take on DevOps? Maybe you could talk a little bit about your perspective there, how you help devs and data engineers build faster pipelines. >> Yeah, sure. Okay, this is my favorite topic. It has been a buzzword for a while, with everyone moving from traditional deployment to DevOps. We support deployment automation in multiple ways and also provide diagnostics under the hood. We have two options with MongoDB: one is the enterprise option, which is more for the on-premises version, and Atlas, which is the cloud managed database service. With Enterprise Advanced, we have Ops Manager and a Kubernetes operator. Ops Manager manages all sorts of deployment automation and upgrades, and provides diagnostics both with respect to the hardware and with respect to MongoDB itself: it gives you profiling and slow-running queries, so you get context on what's happening with the data. Using the enterprise operator, you can integrate with an existing Kubernetes cluster, either in a different namespace or an existing namespace, and orchestrate the deployment. And in the case of Atlas, we have an Atlas Kubernetes operator, which helps you integrate from your own Kubernetes environment, so you don't need to leave Kubernetes. We have also worked with the cloud providers; for example, we have CloudFormation templates where, in one click, you can roll out an Atlas cluster with the complete platform. And we are continuously evolving on the DevOps side, whether that's rolling out a Helm chart or an operator with a standard approach for different types of deployments. >> You know, there are some really important themes here. Obviously, anytime you talk about Mongo, simplicity comes in, and automation, that big push that Io-Tahoe has been making. What you said about data context was interesting, because a lot of data systems and organizations lack context, and context is very important, so auto-classification and things like that matter. And the other thing you said about federated queries I think fits very well into the trend toward decentralized data architecture. So very important there. And of course, hybridisity, as I call it: on-prem, cloud, abstracting that complexity away and allowing people to really focus on their digital transformations. I tell ya, Vasanth, it's great stuff. It's always a pleasure chatting with Io-Tahoe partners and really getting into the tech with folks like yourself. So thanks so much for coming on theCUBE. >> Thanks. Thanks, Dave. Thanks for the nice discussion. >> Okay. Stay right there. We've got one more quick session that you don't want to miss.
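On the profiling and slow-running-queries diagnostics mentioned above, here is a minimal sketch of what that can look like from the driver side against a self-managed MongoDB deployment; the database name and threshold are illustrative, and managed Atlas tiers may restrict the raw profile command in favor of the built-in monitoring tools.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical self-managed instance
db = client["demo"]

# Enable the database profiler for operations slower than 100 ms
# (level 1 profiles slow operations only; level 2 profiles everything).
db.command("profile", 1, slowms=100)

# ... application traffic runs here ...

# Inspect the most recent slow operations captured in the system.profile collection.
for op in db["system.profile"].find().sort("ts", -1).limit(5):
    print(op.get("op"), op.get("ns"), f"{op.get('millis')} ms")
```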
SUMMARY :
Vasanth Kumar, Principal Solutions Architect at MongoDB, joins Dave Vellante to explain why a general-purpose document database that stores JSON natively fits modern, microservices-based applications: payloads can be stored as-is without ORM layers or tabular transformation, and schemas can evolve as customers add new information without breaking applications. He describes MongoDB Atlas as a managed, multicloud data platform running on AWS, GCP, and Azure, with online archival for aging data, a data lake for federated queries across live and archived data over a single connection, Charts for visualization, and Realm for exposing data as secure APIs. He also explains how Io-Tahoe's data discovery and tagging can assemble a customer 360 view across sources and ingest it into MongoDB, and how Ops Manager, Kubernetes operators, and one-click cloud templates support DevOps-style deployment automation and diagnostics.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Vasanth Kumar | PERSON | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
two parts | QUANTITY | 0.99+ |
Vasanth | PERSON | 0.99+ |
35% | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
last year | DATE | 0.99+ |
115 billion | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
Bangalore, India | LOCATION | 0.99+ |
three years | QUANTITY | 0.99+ |
JSON | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Io-Tahoe | ORGANIZATION | 0.99+ |
80-90 billion | QUANTITY | 0.99+ |
MongoDB | ORGANIZATION | 0.99+ |
ARKit | TITLE | 0.98+ |
two options | QUANTITY | 0.98+ |
one click | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Atlas | TITLE | 0.98+ |
this year | DATE | 0.98+ |
both | QUANTITY | 0.97+ |
older than two years | QUANTITY | 0.97+ |
around 18 years | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
nearly two years | QUANTITY | 0.96+ |
Azure | TITLE | 0.96+ |
Wiki Bon | ORGANIZATION | 0.96+ |
MongoDB | TITLE | 0.95+ |
three years back | DATE | 0.94+ |
Vasanth | ORGANIZATION | 0.93+ |
Io-Tahoe | TITLE | 0.92+ |
DevOps | TITLE | 0.91+ |
Atlas | ORGANIZATION | 0.91+ |
Kubernetes | TITLE | 0.91+ |
360 view | QUANTITY | 0.89+ |
one | QUANTITY | 0.89+ |
single connection | QUANTITY | 0.88+ |
five years | QUANTITY | 0.85+ |
one more quick session | QUANTITY | 0.83+ |
GCP | ORGANIZATION | 0.83+ |
four cloud vendors | QUANTITY | 0.82+ |
GCP | TITLE | 0.79+ |
three major cloud providers | QUANTITY | 0.76+ |
one more point | QUANTITY | 0.73+ |
Io | TITLE | 0.72+ |
Azure | ORGANIZATION | 0.72+ |
-Tahoe | ORGANIZATION | 0.68+ |
four | QUANTITY | 0.67+ |
Mongo DB | TITLE | 0.65+ |
APA | TITLE | 0.58+ |
ISBs | ORGANIZATION | 0.54+ |
Keynote Analysis | PTC LiveWorx 2018
>> From Boston, Massachusetts, it's The Cube! Covering LiveWorx 18. Brought to you by PTC. >> Welcome to Boston, everybody. You're watching The Cube, the leader in live tech coverage, and we're here with a special presentation covering the LiveWorx show, sponsored by PTC of Needham, soon to be of Boston. My name is Dave Vellante. I'm here with my co-host Stu Miniman. And Stu, this is quite a show; there are 6,000 people here. Jim Heppelmann was up giving the keynote this morning. PTC is a company that kind of hit the doldrums in the early 2000s, a company whose core business was CAD software for manufacturers as manufacturing moved offshore, and it went through a pretty dramatic transformation that we're going to be talking about today. Well, fast forward 10, 12, 15 years on, and this company is smokin': the stock's up 50 percent this year, they've got a billion dollars plus in revenue, they're growing at 10 to 15 percent a year, they've shifted their software business from a perpetual software license to a recurring revenue model, and they're booming. And we're here at the original site of The Cube, as you remember well, in 2010: the Boston Convention Center down at the Seaport. And Stu, what are your initial impressions of LiveWorx? >> Yeah, it's great to be here, Dave. Good to be here with you, and they dub this the largest digital transformation conference in the world. (laughing) So, I mean, Dave, you and I have been to much bigger conferences, and we've been to a lot of conferences that talk about digital transformation. But IoT, AI, augmented reality, blockchain, robotics: all of these things really are about software, it's about digital transformation, and it's a really interesting space, as you mentioned, given the legacy of PTC. I have been around long enough that I remember when we used to call them Parametric Technology; they kind of rebranded themselves as PTC. Windchill brings back some memories for me: when I worked for a high-tech manufacturing company, that's the lifecycle management tool we used back in the early 2000s. So I had a little bit of background with them. And, as you said, they're based in Needham and they're moving to the Seaport. Hot area, especially since, as we've said, Dave, Boston has the opportunity to be the hub of IoT, and it's companies like PTC that are going to help bring those partnerships and lots of companies to an event like this. >> Well, PTC has always been an acquisitive company, as you were pointing out to me off camera. They bought Prime Computer and Computervision, a number of acquisitions they made back in the late 90s which essentially didn't pan out the way they had hoped. But now, fast forward to the modern era: Jim Heppelmann came in, I think around 2010, and they acquired ThingWorx, a company called ColdLight, and Kepware is another company that they purchased. They took these really sort of independent software components, put them together, and created a platform. Everybody talks about platform; we'll be talking about that a lot today with a number of customers and partners of PTC, and we even have some folks from PTC on. But basically, talking about digital transformation earlier, Stu, IoT is a huge tailwind for a company like PTC, but they had to really deliberately pivot to take advantage of this market. And if you think about it, yes, it's about connecting and instrumenting devices and machines, it's about reaching them, creating whatever wireless connections. But it's also about the data. We talk about that all the time: constructing data flows that go from edge to core, and even into the cloud, whether that cloud's on-prem or in a public data center. So you're seeing the transformation of this company. Obviously, I talked about some of the financials; we'll go into some of that. But it's an evolving ecosystem: we heard Accenture's here, Infosys is here, Deloitte is here. As I like to say, the SIs like to eat at the trough; if the SIs are here, that means there's money here, right? >> Yeah, Dave, and actually a number that jumped out at me when Microsoft was up on stage: it wasn't that Microsoft is investing five billion dollars in IoT, the number that caught my ear was the 20 to 25 partners that it takes to deploy a single IoT solution. So, for anybody that's been in tech for a long time, when you see these complicated stack solutions, the SIs need to be here. It takes a long time to work through them, and integration is a big challenge: how do I get all of these pieces together? It's not something that I just buy off the shelf; it's not shrink-wrap software. These are complicated solutions, very fragmented in how we put them together, and very specific to the industry we're building for, so there's really fascinating stuff going on. But we are still very early in the life cycle of IoT. Huge, huge, huge opportunities, and big players like Microsoft, like Google, like Amazon are going to be here making sure that they simplify that environment over time. Huge. You know, Dave, the original forecast, I think we did it at Wikibon, was a 1.2 trillion dollar opportunity, and most of that was actually for the industrial Internet, which is not the consumer stuff we think about all the time when we talk about home sensors, but the industrial side. >> Well, I think there are a couple of key points that you're making here. First of all, the market is absolutely enormous; it's almost impossible to size. I mean, you're talking about a trillion dollars in spending on hardware, software, services, virtually everything. But to your point, Stu, it's highly, highly fragmented: virtually every industry, and a lot of different segmented technologies. It's also important to point out that this is the mashing together of operations technology, OT, with information technology, IT, and IT leaders are actually leaning in and embracing this notion of edge computing and IoT. Now, I wouldn't even say that IT and OT are Hatfields and McCoys; they're not, but they are parts of the organization that don't talk to each other. So there are cultural differences: they use different languages, they think differently. One is largely engineers who make machines work; the other is IT folks, and we obviously know what they do: they keep information technology systems running and they deploy a lot of new IT projects. So these are really different worlds that have to start coming together. Jim Heppelmann today, I thought, did a really good job in his keynote. He talked about innovation. Usually you start with, okay, we're here at point A, we want to get to point B, and we're going to take a straight line with a bunch of linear steps and milestones to get there. He pointed out that innovation today is really a non-linear process, and he talked about the combinatorial effects of really three things: machines, or the physical; computers; and humans. Machines are strong and can do heavy lifting. Computers are fast and can do repetitive tasks very accurately. And humans are creative. He talked about innovation in this new world coming from combining those three aspects, finding new ways to attack problems and solve nature's challenges, and bringing nature into that problem solving. He gave a lot of examples of how mimicking mother nature is now possible with AI and other technologies. Pretty cool. >> Yeah, absolutely, Dave. I'm sure we'll be talking a lot today about the fourth Industrial Revolution. There's a lot of discussion as to what jobs robots are going to take; I look around the show floor here and there's a lot of cool robotics going on. But as Erik Brynjolfsson and Andrew McAfee, the folks from MIT that we've interviewed a couple of times, talked about in "The Second Machine Age," it's really the marrying of people and machines that is going to be powerful, and absolutely Jim Heppelmann talked about that a lot: it's humans, it's physical, and it's digital, putting those together. The other thing he talked about is that we hear a lot about voice lately with all of these assistants, but you're really limited as to how much input you can take in, and how fast, from an auditory standpoint. I mean, I know that I listen to podcasts at 1.5 to 2x to try to get more information in faster, but it is sight that brings in 80 percent of the information we take in, and therefore it's VR and AR that are the huge opportunities. I know, when I've been talking to some of the large manufacturers, what they used to have in written documentation and then took digital, they're now putting inside the headset, so you can configure systems with the HoloLens or some of the AR and VR headsets and play with that. So we're really early, but I'm excited to see how far this technology has come. >> Yeah, we're seeing a lot of practical applications of VR and AR. We go to a lot of these shows and they'll have the demos, and you go, okay, what will I do with this? Well, you're really seeing here at LiveWorx some of the things you actually can do. One good example I thought was BAE Systems up in Nashua, actually giving the folks that are doing the manufacturing a little tutorial on how to do it. We're going to see some surgical examples today: remote surgery. There are literally thousands of examples. In the time we have remaining, I want to just do the rundown on PTC, 'cause it really is quite an amazing transformation story. You're talking about a company with 1.1 billion dollars in revenue; their aspiration is to be a two billion dollar company by 2021. They're growing at ten percent a year, their software business has grown at 12 to 15 percent a year, and 15 percent is that annual recurring revenue. So this is an example of a company that has successfully shifted from that perpetual model to that recurring model. They've got 200 million dollars this year in free cash flow. Their stock, as I said, is up 50 percent this year. They've got 350 million dollars in cash, but they just got a billion dollar investment from Rockwell Automation that took about 8.4 percent of the company, giving them an implied valuation of almost 11 billion dollars, which got a little uplift from the stock market there. They're selling a lot of seven-figure deals. Really, the core is manufacturing: product lifecycle management and CAD. That's the stuff we know PTC well for, and I talked about some of those acquisitions that they made. They sell products like Creo, which is their 3D CAD software; I think they're on rev five or six by now. So they've taken their legacy software and updated it for the digital world. And the ecosystem's grown. This is a complicated marketplace: if you look at the Gartner Magic Quadrant, there is no leader, even though PTC is positioned the highest; they're all sort of in the lower right, with PTC up highest. GE, interestingly, is not in there because it doesn't have an on-prem solution. I don't know why GE doesn't have an on-prem solution, and I don't know why they're not in there. >> Is there another version of the Magic Quadrant that includes the Amazons and GEs of the world? >> I don't know. So that's kind of interesting; we'll try to unpack that as we go on here. PTC announced today a relationship with a company called Ansys, which does simulation software. Normally, simulation comes after the design; they're bringing those two worlds together, the CAD design piece and the simulation piece, closer to real time. So there's a lot of stuff going on. As you said, it's data, analytics, edge computing; it's cloud, it's on-prem, it's blockchain for security. We haven't talked about security: a much bigger threat matrix, so blockchain comes into play. >> Yeah, Dave, I saw a great joke: do you realize that the S in IoT stands for security? Did you know that? (laughing) Oh wait, there's no S in IoT. Well, that's the point. >> All right, good. So Stu and I will be here all day today. This is actually a three-day conference; The Cube will only be here for day one. Keep it right there, everybody, and we'll be right back. You're watching The Cube, live from LiveWorx in Boston. (upbeat music)
SUMMARY :
Dave Vellante and Stu Miniman open theCUBE's coverage of PTC LiveWorx 2018 in Boston, billed as the largest digital transformation conference, with roughly 6,000 attendees and a keynote from Jim Heppelmann on combining machines, computers, and humans for non-linear innovation. They trace PTC's turnaround from its CAD and PLM roots through acquisitions such as ThingWorx, ColdLight, and Kepware, its shift from perpetual licenses to recurring revenue, growth past a billion dollars in revenue with a two-billion-dollar target by 2021, and Rockwell Automation's billion-dollar investment. They also discuss IoT as a tailwind, the convergence of OT and IT, a highly fragmented ecosystem in which a single IoT deployment can require 20 to 25 partners, practical AR and VR use cases on the show floor, and the newly announced partnership with Ansys to bring simulation closer to real-time CAD design.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Heppelmann | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Eric Manou | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Aaron McAfee | PERSON | 0.99+ |
Rockwell Automation | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
80 percent | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
12 years | QUANTITY | 0.99+ |
350 million dollars | QUANTITY | 0.99+ |
Cold Light | ORGANIZATION | 0.99+ |
Ansys | ORGANIZATION | 0.99+ |
1.1 billion dollars | QUANTITY | 0.99+ |
15 years | QUANTITY | 0.99+ |
15 percent | QUANTITY | 0.99+ |
Needham | LOCATION | 0.99+ |
Infosys | ORGANIZATION | 0.99+ |
12 | QUANTITY | 0.99+ |
2010 | DATE | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Hatfield | ORGANIZATION | 0.99+ |
200 million dollars | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
LiveWorx | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
6,000 people | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
five billion dollars | QUANTITY | 0.99+ |
Kept Ware | ORGANIZATION | 0.99+ |
PTC | ORGANIZATION | 0.99+ |
two billion dollar | QUANTITY | 0.99+ |
The Cube | TITLE | 0.99+ |
1.2 trillion dollar | QUANTITY | 0.99+ |
GE | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Seaport | LOCATION | 0.99+ |
BEA Systems | ORGANIZATION | 0.99+ |
early 2000s | DATE | 0.99+ |
about 8.4 percent | QUANTITY | 0.99+ |
GEs | ORGANIZATION | 0.99+ |
three day | QUANTITY | 0.99+ |
1.5 | QUANTITY | 0.99+ |
25 partners | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
three aspects | QUANTITY | 0.98+ |
late 90s | DATE | 0.98+ |
Nashua | LOCATION | 0.98+ |
50 percent | QUANTITY | 0.98+ |
two worlds | QUANTITY | 0.97+ |
Parametric Technologies | ORGANIZATION | 0.97+ |
almost 11 billion dollars | QUANTITY | 0.97+ |
2 X | QUANTITY | 0.97+ |
Boston Massachusetts | LOCATION | 0.97+ |
Gartner | ORGANIZATION | 0.97+ |
First | QUANTITY | 0.96+ |
Boston Convention Center | LOCATION | 0.96+ |
Understanding Container Architecture - Wikibon Whiteboard
Hello, my name is Brian Gracely, analyst with Wikibon, and on today's Wikibon Whiteboard we're going to begin to understand container architectures. Containers are really the big technology being talked about these days, especially for infrastructure teams. There's a component of it that's both application and infrastructure, but in this whiteboard we're really going to understand the basics of how it applies to the infrastructure, and we're going to try and put it in the context of things that most infrastructure teams understand today, which is virtualization. So let's go ahead and begin. What we've done, and again this is for context, is take a standard environment that people are used to seeing for virtualization, and in this case we're going to use VMware as the example, because it obviously has the broadest market share and a lot of people understand what they do. So let's talk about the basics of what happens here. People understand what happens at the host level: I've got servers, and within each server I've got a hypervisor, in VMware's case ESX or ESXi. Within that hypervisor I'm going to create virtual machines, so every single virtual machine has a copy of the full operating system, and then within that virtual machine I've got an application, or multiple applications. Everybody understands that pretty well. Now, how I manage those hypervisors and virtual machines is through a centralized control plane, and that's called vCenter. vCenter may be a single instance or a clustered instance, but think of it as the thing that's going to manage the scheduling and management of the resources, and it's really only focused on virtual machines. Okay, now above that, if we're deploying applications, I can either deploy them by hand or I may begin to deploy them through application templates. So I may deploy the same type of application over and over again: a web server, a SQL database, something else. To do that consistently I'm going to use some sort of templating function, and a lot of that can come in the management framework from something like vRealize, VMware vRealize. And then on top of that I'm going to have my applications, whatever those might be: SQL databases, SAP, Oracle, Microsoft applications, whatever those things might be. So the key things I want you to understand are: at the host level it's hypervisor, virtual machine, full operating system, and application; and at the control plane it's this sort of structured format of a vCenter cluster. vCenter is going to make sure that virtual machines get deployed onto those hosts, and it's going to keep track of where they are and make sure that they stay alive using things like VMware HA, VMware vMotion, and VMware Fault Tolerance. Okay, so now that we have that basic context in place, let's take a look at how the container ecosystem is beginning to evolve. And in this example we're going to use Docker because, similar to VMware right now, Docker is the most frequently used container technology. There are other ones in the marketplace, but we're going to use Docker just as an example; the rest of what we talk about will be applicable whether it's Docker, CoreOS rkt, or a number of the other container technologies out there. So let's begin down at the host level, just like we did over here. In the simplest form I'm going to have a host, I'm going to have a server; we're not going to have a hypervisor, we're just going to have the operating system.
Today, in most container environments, that operating system is going to be Linux. Now, there's a lot going on in the marketplace where this will eventually be Linux and Windows (Microsoft is working quite a bit on this), but for right now let's just say that operating system is Linux. Okay. I'm going to have my container runtime, which in this case is Docker, and you can think about that as sort of being like a hypervisor, but a lightweight one. That container runtime is going to create my containers themselves, and what's unique about this, and different from the virtualized environment, is that those containers all share the same operating system. So again, all of your containers within a single host have to run the same operating system, either all Linux or, eventually, all Windows. They're going to use only the bits they need from that operating system, so the net of it is a lighter-weight footprint: I should be able to boot them quicker. And the reason people get very, very fixated on being able to boot a container fast is that in this container environment the types of applications I'm building tend to be more of what they call ephemeral: pieces of them are going to go away and come back, I'm going to want to spin them up quickly, and if I have a scalable application, spin them up or spin them down. So what you're looking for is an operating environment that will come up very, very quickly. Just to put that in context: spinning up a virtual machine may take three or four minutes because of the operating system coming up; spinning up a container is usually on the order of a second or a couple of seconds. So there's a big order-of-magnitude difference between them. Now, the second piece that's really important, and this is where a lot of people get confused about what's going on in the container ecosystem, is what happens at that control plane. The first thing to understand is that when we talked about virtualized applications, we tend to talk about very stateful applications; sometimes they're called platform-two, sometimes they're called legacy applications, but they're more or less stateful. The expectation is that once you deploy them, other than maybe vMotioning them around for availability, you're not scaling them up and down, and you don't expect them to fail frequently. So the scalability needed at the control plane is fairly well defined: maybe it's a thousand hosts or ten thousand hosts. When we start dealing with containers, the types of applications we deploy tend to be more of what they call 12-factor applications; sometimes you hear them called modular applications or cloud-native applications. The idea is that they're much more modular and tend to be more stateless (the job of maintaining state tends to get pushed somewhere else), but they're designed for scale: for mobile applications, for real-time data applications. So the control plane, unlike the virtualization case, which tends to be somewhat stateful and more confined in terms of scale, has to be designed as a distributed control plane that scales much, much larger. And as part of that, we're seeing technologies come out that break up the functions that lived inside a vCenter control plane into distinct technologies that, number one, tend to scale more because they're written in a distributed manner, and number two, give you a certain amount of mix-and-match depending on what your application is going to do.
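To make the startup-speed comparison above tangible, here is a small sketch using the Docker SDK for Python; it assumes Docker is installed and the daemon is reachable, and the image name is just an example. Note that the first run also pulls the image, so time a second run for a fair comparison.

```python
import time

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connects to the local Docker daemon

start = time.perf_counter()
# Run a throwaway container that shares the host's Linux kernel; there is no guest OS to boot.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
elapsed = time.perf_counter() - start

print(output.decode().strip())
print(f"container ran and exited in {elapsed:.2f}s (a full VM boot is typically minutes)")
```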
So let's talk through the basic things that are in here. The first layer that you'll often hear about is clustering: how do I cluster together sets of container hosts? One example of this is Docker's Swarm technology; another is something like etcd from CoreOS. It's a technology to figure out where my clusters of hosts are going to be. The next layer is what's called service discovery. If I'm deploying hundreds and hundreds or thousands of containers, I want to be able to figure out what services are available: queuing services, database services, notification services, the things that are out there. I need to do that dynamically and automatically. The next piece is scheduling those containers: just like vCenter puts a virtual machine on the right host and makes sure it's load-balanced properly, there's a scheduling function to make sure that containers get deployed to the right host. And then the next piece is what they call application scheduling. In these environments I'm not just scheduling containers, I'm scheduling applications, and they could be a mix of batch applications, Hadoop applications, long-running applications, and short-running applications, so I need a more advanced, intelligent scheduler to make sure that the containers and applications get deployed in the right place as efficiently as possible. And then on top of that I have my actual applications. So the takeaway from this is: at the host level there's some difference between how heavy a virtualized environment is versus a container environment, and you want that to match your application requirements; and at the control plane, you have a more structured model for doing the functions you need to manage the environment in the virtualization world, and a more distributed model in the container world. So with that, I'm going to go ahead and wrap up. We're going to get into some more depth in other videos, and we hope you enjoy them. Once again, this has been a Wikibon Whiteboard video. You can find more information about all of our research and all the information about these technologies at wikibon.com. And again, if you want to follow me, my name is Brian Gracely, I'm @bgracely on Twitter, or you can follow @wikibon on Twitter as well. Thank you and have a great day.
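As a conceptual illustration of the scheduling role described above (not any particular scheduler's actual algorithm), here is a toy Python sketch that greedily places containers onto the host with the most free CPU; the hosts, requests, and numbers are made up.

```python
# Toy placement logic illustrating the "scheduler" role in a container control plane.
# Real schedulers (Swarm, Kubernetes, Mesos, etc.) weigh many more constraints:
# memory, affinity, health, failure domains, and so on.

hosts = {"host-a": 8.0, "host-b": 16.0, "host-c": 4.0}   # free CPU cores per host (made up)
containers = [("web", 1.0), ("api", 2.0), ("db", 4.0), ("batch-job", 6.0)]

placements = {}
for name, cpu_request in containers:
    # Greedy choice: the host with the most free CPU that can still fit the request.
    candidates = {h: free for h, free in hosts.items() if free >= cpu_request}
    if not candidates:
        placements[name] = None          # nothing fits; a real scheduler would queue or reject
        continue
    chosen = max(candidates, key=candidates.get)
    hosts[chosen] -= cpu_request         # reserve the capacity on the chosen host
    placements[name] = chosen

for name, host in placements.items():
    print(f"{name:10s} -> {host or 'unschedulable'}")
```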
**Summary and sentiment analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
10,000 hosts | QUANTITY | 0.99+ |
ESXi | TITLE | 0.99+ |
Brian Grace Lee | PERSON | 0.99+ |
second piece | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ESX | TITLE | 0.99+ |
Linux | TITLE | 0.99+ |
brian grace lee | PERSON | 0.99+ |
first layer | QUANTITY | 0.99+ |
Windows | TITLE | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
first thing | QUANTITY | 0.97+ |
VMware | TITLE | 0.97+ |
ORGANIZATION | 0.97+ | |
today | DATE | 0.96+ |
each server | QUANTITY | 0.95+ |
hundreds and | QUANTITY | 0.95+ |
12 factor | QUANTITY | 0.95+ |
a thousand hosts | QUANTITY | 0.95+ |
single instance | QUANTITY | 0.93+ |
wiki bond com | ORGANIZATION | 0.93+ |
three four minutes | QUANTITY | 0.93+ |
a couple of seconds | QUANTITY | 0.92+ |
Understanding Container Architecture | TITLE | 0.91+ |
each one | QUANTITY | 0.91+ |
single host | QUANTITY | 0.91+ |
thousands of devices | QUANTITY | 0.91+ |
a lot of people | QUANTITY | 0.89+ |
a lot of people | QUANTITY | 0.86+ |
a second | QUANTITY | 0.83+ |
wiki bond | TITLE | 0.8+ |
vCenter | TITLE | 0.8+ |
two | QUANTITY | 0.78+ |
both application | QUANTITY | 0.76+ |
wiki Bond | TITLE | 0.76+ |
hundreds | QUANTITY | 0.75+ |
docker | TITLE | 0.73+ |
Etsy D | TITLE | 0.72+ |
number one | QUANTITY | 0.7+ |
V | TITLE | 0.67+ |
grace lee | PERSON | 0.67+ |
every single virtual | QUANTITY | 0.65+ |
each one | QUANTITY | 0.64+ |
wiki bon | ORGANIZATION | 0.61+ |
V Center | TITLE | 0.57+ |
platform two | QUANTITY | 0.52+ |
Wikibon | ORGANIZATION | 0.51+ |
Center | COMMERCIAL_ITEM | 0.48+ |