Paul Perez, Dell Technologies and Kit Colbert, VMware | Dell Technologies World 2020
>> Narrator: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World Digital Experience. Brought to you by Dell Technologies. >> Hey, welcome back, everybody. Jeff here with theCUBE, coming to you from our Palo Alto studios with continuing coverage of Dell Technologies World 2020, the Digital Experience. We've been covering this for over 10 years. It's virtual this year, but there's still a lot of great content, a lot of great announcements, and a lot of technology being released and talked about. So we're excited. We're going to dig a little deep with our next two guests. First of all, we have Paul Perez. He is the SVP and CTO of the Infrastructure Solutions Group for Dell Technologies. Paul, great to see you. Where are you coming in from today? >> Austin, Texas. >> Austin, Texas. Awesome. And joining him, returning to theCUBE for the umpteenth time, Kit Colbert. He is the Vice President and CTO of VMware Cloud for VMware. Kit, great to see you as well. Where are you joining us from? >> Yeah, thanks for having me again. I'm here in San Francisco. >> Awesome. So let's jump into it and talk about Project Monterey. You know, it's funny, I was at Intel back in the day, and our internal code names used to get out and become like the product names. It's funny how these little internal project names get a life of their own, and this is a big one. And, you know, we had Pat Gelsinger on a few weeks back from VMware talking about how significant this is in the evolution of VMware cloud development. And, you know, it's kind of past Kubernetes, past Tanzu, past Project Pacific, and now we're into Project Monterey. So first off, let's start with Kit. Give us kind of the basic overview: what is Project Monterey? >> Yep. Yeah, well, you're absolutely right. What we did last year, we announced Project Pacific, which was really a fundamental rethinking of VMware Cloud Foundation with Kubernetes built in, right?
Kubernetes is a core part of the architecture, and the idea there was really to better support modern applications, to enable developers and IT operations to come together and work collaboratively toward modernizing a company's application fleet. And as you look at companies starting to be successful running these modern applications, what you find is that the hardware architecture itself needed to evolve, needed to update, to support all the new requirements brought on by these modern apps. And so when you look at Project Monterey, it's exactly that: a rethinking of the underlying hardware architecture of VMware Cloud Foundation. And so if you think about it, Project Pacific is really kind of the top half, if you will, the Kubernetes consumption experience, great for applications. Project Monterey comes along as the second step in that journey, really being the bottom half, fundamentally rethinking the hardware architecture and leveraging SmartNIC technology to do that. >> It's pretty interesting, Paul. You know, there's a great shift in this whole move from, you know, infrastructure driving applications to applications driving infrastructure. And then we're seeing, you know, obviously the big move with big data. And again, as Pat talked about in his interview, with NVIDIA being at the right time, at the right place with the right technology, and this, you know, kind of groundswell of GPUs, and now DPUs, you know, helping to move those workloads beyond just kind of where the CPU used to do all the work. This is even, you know, kind of taking it to another level. You guys are the hardware guys and the solutions guys. As you look at this kind of continuing evolution, both of workloads as well as their infrastructure, how does this fit in? >> Yeah, well, how all this fits in is: modern applications and modern workloads require a modern infrastructure, right? And Kit was talking about the infrastructure overlay.
That's something VMware is awesome at. I was coming at this from the emerging data-centric workloads and some of the implications of that, including the silicon diversity now being used for computing, and the need for disaggregation: being able to combine resources together, as opposed to trying to shoehorn something into a mechanical chassis. And if you do disaggregate, you have to be able to compose on demand. And when we started comparing notes, we realized that we were on a convergent trajectory, and we started to team up and partner. >> So it's interesting, because part of the composable philosophy, if you will, is to, you know, break the components of compute, storage, and networking down into as small pieces as possible, and then you can assemble the right amount when you need it to attack a particular problem. But now you're talking about a whole different level of bringing the right hardware to bear for the solution. When you talk about SmartNICs, and you talk about GPUs and DPUs, data processing units, and even FPGAs, you're now starting to offload a lot of work from the core CPU to some of these more appropriate devices. That said, how do people make sure that the right application ends up on the right infrastructure? That is, if it's appropriate, using more of a Monterey-based solution versus more of a traditional one, depending on the workload. How is that all going to get sorted out and routed within the actual cloud infrastructure itself? That one's probably back to you, Kit. >> Yeah, sure. So I think it's important to understand kind of what a SmartNIC is and how it works in order to answer that question. Because what we're really doing, to kind of jump right to it, I guess, is giving an API into the infrastructure, and this is how we're able to do all the things that you just mentioned. But what is a SmartNIC?
Well, a SmartNIC is essentially a NIC with a general purpose CPU on it, really a whole CPU complex, in fact, kind of a whole system right there on that NIC. And so what that enables is a bunch of great things. So first of all, to your point, we can do a lot of offload. We can actually run ESXi on that NIC. We can take a lot of the functionality that we were doing before on the main server CPU, things like network virtualization, storage virtualization, security functionality, and move all of that off onto the NIC. And it makes a lot of sense, because really what we're doing when we're doing all those things is looking at different sorts of IO data paths. You know, as the network traffic comes through, looking at doing automatic load balancing, firewalling for security, delivering storage, perhaps remotely. And so the NIC is actually a perfect place to put all of these functionalities, right? You not only move them off the core server CPU, but you get a lot better performance, because you're now right there on the data path. So I think that's the first really key point: you get that offload. But then once you have all of that functionality there, you can start doing some really amazing things. And there's this ability to expose additional virtual devices onto the PCIe bus; this is another great capability of a SmartNIC. So when you plug it in physically into the motherboard, it's a NIC, right? You can see that. And when it starts up, it looks like a NIC to the motherboard, to the system. But then via software, you can have it expose additional devices. It could look like a storage controller, or it could look like an FPGA, really any sort of device. And you can do that not only for the local machine where it's plugged in, but potentially for remote machines as well, with the right sorts of interconnects. So what this creates is a whole new sort of cluster architecture.
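The offload-and-expose pattern Kit describes can be sketched abstractly in a few lines of Python. This is a toy model only: the class names, device kinds, and `offload`/`expose` methods are invented for illustration and are not VMware or Dell APIs.

```python
# Toy model of a SmartNIC: a NIC with its own CPU complex that
# (a) hosts services offloaded from the host x86 CPU, and
# (b) presents software-defined virtual devices on the host's PCIe bus.
# All names here are hypothetical illustrations, not real APIs.

from dataclasses import dataclass, field


@dataclass
class VirtualDevice:
    kind: str      # e.g. "nvme-controller", "fpga", "nic"
    backing: str   # where the function is actually served from


@dataclass
class SmartNic:
    offloaded_services: list = field(default_factory=list)
    exposed_devices: list = field(default_factory=list)

    def offload(self, service: str) -> None:
        # Move a data-path service (networking, storage, firewalling)
        # off the host x86 CPU onto the NIC's own cores.
        self.offloaded_services.append(service)

    def expose(self, kind: str, backing: str) -> VirtualDevice:
        # Present a new virtual function to the host over PCIe; the
        # backing may be local to the NIC or a remote host's hardware.
        dev = VirtualDevice(kind, backing)
        self.exposed_devices.append(dev)
        return dev


nic = SmartNic()
nic.offload("network-virtualization")
nic.offload("storage-virtualization")
nic.expose("nvme-controller", backing="remote:host-b")
nic.expose("fpga", backing="local")
print([d.kind for d in nic.exposed_devices])
```

The key idea the sketch captures is that what the host sees on its PCIe bus is decided in software, which is what makes the cluster architecture Kit mentions possible.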
And that's why we're really so excited about it. Because you get all these great benefits in terms of offload, performance improvement, and security improvement, but then you also get this great ability to do very dynamic disaggregation and composability. >> So Kit, how much of it is the routing of the workload to the right place, right? Say it's super data-intensive and wants a lot of GPU, versus actually better executing the operation once it gets to the place where it's going to run? >> Yeah, it's a bit of a combination, actually. So the powerful thing about it is that in a traditional world, wherever you run an application, you know, on whatever server you run it, that app can really only use the local devices there. Yes, there is some newer stuff like NVMe over Fabrics, where you can remote certain types of storage capabilities, but there's no real general purpose solution to that yet, so generally speaking, that application is limited to the local hardware devices. Well, the great part about what we're doing with Monterey and with the SmartNIC technology is that we can now dynamically remote, or expose, devices from other hosts. And so wherever that application runs matters a little bit less now, in the sense that we can give it the right sorts of hardware it needs in order to operate. You know, let's say you have a few machines with an FPGA. Normally, an app that needed that FPGA had to run locally, but now it can actually run remotely, and you can better balance out things like compute requirements versus, you know, specialized accelerator requirements. And so I think what we're looking at, especially in the context of VMware Cloud Foundation, is bringing that all together. We can, through the scheduling, figure out the best host for it to run on based on all these considerations.
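The placement logic Kit is describing can be sketched as a toy scheduling pass: prefer a host that has every needed device locally, and otherwise plan to remote the missing devices from peer hosts. This is an illustrative sketch only, not the VMware Cloud Foundation scheduler; the function and field names are invented.

```python
# Toy placement pass: local device fit first, remoted devices second.
# Hypothetical sketch of the idea, not a real scheduler.

def place(app, hosts):
    """app: {"needs": set of device kinds}; hosts: list of host dicts."""
    # First choice: a host that has every required device locally.
    for host in hosts:
        if app["needs"] <= host["devices"]:
            return host["name"], []
    # Fallback: pick a host anyway, and plan to remote each missing
    # device over the fabric from a peer that has it.
    target = hosts[0]
    remoted = []
    for need in sorted(app["needs"] - target["devices"]):
        donor = next(h["name"] for h in hosts if need in h["devices"])
        remoted.append((need, donor))
    return target["name"], remoted


hosts = [
    {"name": "host-a", "devices": {"nic", "gpu"}},
    {"name": "host-b", "devices": {"nic", "fpga"}},
]
# Fits locally on host-a:
print(place({"needs": {"nic", "gpu"}}, hosts))
# No single host has both; the FPGA gets remoted from host-b:
print(place({"needs": {"gpu", "fpga"}}, hosts))
```

The second call is the interesting one: the app lands on a host without an FPGA, and the SmartNIC-exposed remote device fills the gap, which is exactly the scenario discussed next.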
And if it's missing, let's say, a physical device that it needs, well, we can remote that and sort of fill in that missing gap there. >> Right, right. That's great. Paul, I want to go back to you. You just talked about, you know, kind of coming at this problem from a data-centric point of view, and you're running infrastructure, and you're the poor guy that's got to catch all of that, with the giant exponential curves up and to the right on the data flow and the data quantity. How is that impacting the way you think about infrastructure, designing infrastructure, changing infrastructure, and kind of future-proofing infrastructure, when, you know, just around the corner are 5G and IoT and, oh, you ain't seen nothing yet in terms of the data flow? >> Yeah. So I come at this from two angles. One that we talked about briefly is the evolution of the workloads themselves. The other angle, which is just as important, is the operating model that customers want to evolve to. And in that context, we thought a lot about how cloud is an operating model, not necessarily a destination, right? So the way we laid out what Kit was talking about is that in data center computing, you have a control plane and a data plane. The data plane runs on the optimized silicon: GPUs, FPGAs, offload engines. And the control plane can run on general purpose compute. And when I'm thinking about SmartNICs, today's SmartNICs have ARM cores, so you can implement some data plane and some control plane on them, and they can also be the gateway. Because, you know, you've talked about composability; what has been done up until now is only the first sprint, right? We're carving software-defined infrastructure out of predefined hardware blocks. What we're talking about is making, you know, GPUs resident on a fabric, persistent memory resident on a fabric, NVMe over fabric, and being able to compose computing topologies on demand to realize an application's intent.
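Paul's idea of drawing resources from fabric-attached pools on demand, rather than from one fixed chassis, can be sketched as a toy composer. The pool names and API shape below are invented for illustration; this is not a Dell or VMware interface.

```python
# Toy "compose on demand" allocator: an application states its intent
# as resource counts, and a topology is claimed from shared fabric
# pools. Pool names and units are hypothetical.

from collections import Counter


def compose(intent, pools):
    """intent, pools: Counter of resource kind -> count."""
    if any(pools[kind] < count for kind, count in intent.items()):
        raise RuntimeError("fabric cannot satisfy this intent")
    for kind, count in intent.items():
        pools[kind] -= count   # claim from the shared fabric
    return dict(intent)        # the composed "topology"


pools = Counter({"gpu": 4, "pmem-gb": 512, "nvme-of-target": 8})
topology = compose(Counter({"gpu": 2, "nvme-of-target": 1}), pools)
print(topology)
print(pools["gpu"])  # GPUs remaining in the fabric pool
```

The point of the sketch is the inversion Paul describes: instead of fitting the application to predefined hardware blocks, the topology is assembled from the fabric to match the application's declared intent.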
And we call that intent-based computing. >> Right. Well, just to follow up on that, as this idea of cloud as an operating model, you know, not necessarily a place or a thing, has taken hold, how has that had to shift your infrastructure approach? Because you've got to support, you know, old-school, good old data centers. You've got, you know, some stuff running on public clouds. And then now you've got hybrid clouds and multi clouds, right? And we know, you know, you're out in the field, that people have workloads running all over the place. But they've got to control it, and they've got compliance issues, and they've got a whole bunch of other stuff. So from your point of view, as you see the desire for more flexibility, the desire for more infrastructure-centric support for the workloads, and the increasing amount of those that are more data-centric as we move to hopefully more data-driven decisions, how has it changed your strategy? And what does it mean to partner and have a real nice formal relationship with the folks over at VMware? >> Well, I think that regardless of how big a company is, it's always prudent, as I say when I approach my job, right, architecture is about balance and efficiency, and it's about reducing contention. And we like to leverage industry R&D, especially in cases where one plus one equals three, right? In the case of Project Monterey, for example, one of the collaboration areas is in improving the security model and being able to provide more air-gapped isolation, especially when you consider that enterprise IT wants to behave as a service provider to their companies. And therefore this is important.
And because of that, I think that there's a lot we can do between VMware and Dell, blending hardware and, for example, assets like NSX in a different way that will give customers higher scalability and performance and more control. You know, beyond VMware and Dell EMC, I think that we're partnering with, obviously, the SmartNIC vendors, because their SmartNICs are the gateway to those data planes, and also with companies that are innovating in data center computing, for example, NVIDIA. >> Right. Right. >> And I think that what we're seeing is, while, you know, NVIDIA has done an awesome job of targeting their capabilities at AI/ML types of workloads, what we've realized is that applications today depend on platform services, right? And up until recently, those platform services have been databases, messaging, APIs, Active Directory. Moving forward, I think that within five years, most applications will depend on some form of AI/ML service. So I can see an opportunity to go mainstream with this. >> Right. Right. Well, it's great that you bring up NVIDIA, and I'm just going to quote one of Pat's lines from his interview. He talked about Jensen from NVIDIA actually telling Pat, "Hey Pat, I think you're thinking too small. Let's do the entire AI landscape together and make AI an enterprise-class workload," you know, with AI and ML being first-class citizens in Tanzu. So I love the fact that, you know, Pat's been around a long time, an industry veteran, but still kind of accepted the challenge from Jensen to really elevate AI and machine learning, via GPUs, to first-class citizen status. And the other piece obviously coming up is edge. So, you know, it's a nice shot of adrenaline, and Kit, I wonder if you can share your thoughts on that, you know, kind of saying, hey, let's take it up a notch, a significant notch, by leveraging a whole other class of compute power within these solutions. >> Yeah.
So, I mean, I'll go real quick. I mean, it's funny, because not many people ever really challenge Pat by saying he doesn't think big enough, because usually he's blowing us away with what he wants to do next. But I think it's good, though. It's good to keep us on our toes and push us a bit, right? All of us within the industry. And so, a couple of things. You have to go back to your previous point around cloud as a model. I think that's exactly what we're doing: trying to bring cloud as a model even on-prem. And it's a lot of these kinds of core hardware architecture capabilities that enable that, the biggest one in my mind being enabling an API into the hardware, so the applications can get what they need. And going back to Paul's point, this notion of these AI and ML services, you know, they have to be rooted in the hardware, right? We know that in order for them to be performant, for them to run, to support what our customers want to do, we need to have that deeply integrated into the hardware, all the way up. But that also becomes a software problem. Once we've got the hardware solved, once we get that architecture locked in, how can we, as easily and seamlessly as possible, deliver all those great software capabilities? And so, you know, you look at what we've done with the NVIDIA partnership, things around the NVIDIA GPU Cloud, and really bringing that to bear. And so then you start having this really great full-stack integration, all the way from the hardware, a very powerful hardware architecture that, you know, again, is driven by API, to the infrastructure software on top of that, and then all these great AI tools, toolchains, and capabilities with things like the NVIDIA NGC. So that's really, I think, where the vision's going. And we've got a lot of the basic parts there, but obviously a lot more work to do going forward.
>> I would say that, you know, initially we had a dream. We wanted this journey to happen very fast, and initially we're abating infrastructure services so there's no contention with applications, with customers' full workload applications, and also enabling how productive it is to get at the data, and over time having sufficient control over a wide area. There's an opportunity to do something like that. And if you think about the progression from bare metal to VMs (conversation fading), environments are way more dynamic and more ephemeral, right? And they expect hardware that can be just as dynamic and composable to suit their needs. And I think that's where we're headed. >> Right. So let me throw a monkey wrench in, in terms of security, right? So now this thing is much more flexible, it's much more software-defined. How is that changing the way you think about security, basic security, throughout the stack? Go to you first, Kit. >> Yeah. Yeah. So it actually enables a lot of really powerful things. So first of all, from an architecture and implementation standpoint, you have to understand that we're really running two copies of ESXi on each physical server. Now we've got the one running on the x86 side, just like normal, and now we've got one running on the SmartNIC as well. And so, as I mentioned before, we can move a lot of that networking, security, et cetera functionality off to the SmartNIC. And so what this is going toward is what we call a zero trust security architecture, this notion of having really defense in depth at many different layers and many different areas. While the hypervisor and the virtualization layer obviously provide a really strong level of security, even when we were doing it completely on the x86 side, now that we're running on a SmartNIC, that's additional defense in depth, because the x86 ESXi doesn't have direct access to the ESXi running on the SmartNIC.
So the ESXi running on the SmartNIC can be in this kind of more well-defended position. Moreover, now that we're running the security functionality directly on the data path in the SmartNIC, we can do a lot more with it. We can run a lot deeper analysis, we talked about AI and ML, and we can bring a lot of those capabilities to bear here to actually improve the security profile. And so finally, I'd say there's this notion of kind of distributed security as well: you don't want to have just these individual choke points on the physical network. Instead, you actually distribute the security policies and enforcement to everywhere a server is running, everywhere a SmartNIC is, and that's what we can do here. And so it really takes a lot of what we've been doing with things like NSX, but now connects it much more deeply into hardware, allowing for better performance and security. >> A common attack method is to intercept the boot of the physical server. And, you know, I'm actually very proud of our team, because the US National Security Agency recently published a white paper on best practices for secure boot, and they take our implementation of secure boot as the reference standard. Moving forward, imagine an environment where even if you gain control of the server, that doesn't allow you to change the BIOS or update it. So we're moving the root of trust into that air-gapped domain that Kit talked about. And that gives us way more capability for zero trust operations, right? >> Right, right. Paul, I've got to ask you. I had Sam Burd on the other day, your peer who runs the PC group. >> I'm telling you, he is not a peer. He's a little bit higher up. >> Higher than you. Okay. Well, I just promoted you, so that's okay. But it's really interesting.
Because we were talking about, it was literally like 10 years ago, the death-of-the-PC article that came out when Apple introduced the tablet, and, you know, he talked about what phenomenal devices PCs continue to be and how they keep evolving. And then it's just funny how that now dovetails with this whole edge conversation. People don't necessarily think of a PC as a piece of the edge, but it is a great piece of the edge. So from an infrastructure point of view, you know, to have that kind of presence within the PCs, and potentially that intelligence, and again, this kind of whole other layer of interaction with the users, and an opportunity to define how they work with applications and prioritize applications. I just wonder if you can share how nice it is to have that kind of in your back pocket, to know that you've got a whole other, you know, kind of layer of visibility and connection with the users beyond just simply the infrastructure. >> So within the company we've developed a framework that we call core to edge to multicloud, right? Core data centers, the enterprise edge and IoT, and then off-premises. It is a multicloud world. And within that framework, we consider our Client Solutions Group products to be part of the edge. And we see a lot of benefit. I'll give an example of a healthcare company that wants to deploy real-time analytics, regardless of whether it's on a laptop or maybe in a back-end data center, right, whether it's at a hospital clinic or a patient's home. It gives us a broader innovation surface. And, you know, a lot of people may not appreciate that the most important function within our client group, I consider to be the experience design team. So being able to design user flows and customer experience across all those venues of use is valuable. >> That's great. That's great. So we're running out of time. I want to give you each the last word. You've both been in this business for a long time.
This is brand new stuff, right? Containers aren't new, but Kubernetes is still relatively new and exciting, Project Pacific was relatively new, and now Project Monterey. But you guys are, you know, multi-decade veterans in this thing. As you look forward, what does this moment represent compared to some of the other shifts that we've seen in IT? You know, generally, but, you know, kind of in the consumption of compute, and, you know, this application-centric world that just continues to grow. I mean, software is eating everything, we know it, you guys live it every day. Where are we now? And, you know, what do you see, maybe, I don't want to go too far out, over the next couple of years within the Monterey framework? And then if you have something else generally, you can add that as well. Paul, why don't we start with you? >> Well, I think, on a personal level, modesty aside, I have a long string of very successful endeavors in my career. When I came back a couple of years ago, one of the things that I told Jeff, our vice chairman, is this is a big canvas, and I intend to paint my masterpiece. And I think, you know, Monterey and what we're doing in support of Monterey is part of that. I think that you will see our initial approach focus on the core data center. We know how to express it there, and we know also how to express it even in a multicloud world. So I'm very excited, and I know that I'm going to be busy for the next few years. (giggling) >> And Kit, over to you. >> Yeah. So, you know, it's funny, you talk to people about SmartNICs, and especially those folks that have been around for a while, and what you hear is like, hey, you know, people were talking about SmartNICs 10 years ago, 20 years ago, that sort of thing, and then they kind of died off. So what's different now? And I think the big difference now is a few things. You know, first of all, the core technology of SmartNICs has dramatically improved.
We now have a powerful software infrastructure layer that can take advantage of it. And, you know, finally, applications have a really strong need for it, again, with all the things we've talked about, the need for offload. So I think there are some real, fundamental shifts that have happened over the past decade, let's say, that have driven the need for this. And so this is something that I believe strongly is here to last. You know, both ourselves at VMware as well as Dell are making a huge bet on this. But not only that, and not only is it good for customers, it's actually good for all the operators as well. So whether this is part of VCF that we deliver to customers for them to operate themselves, just like they always have, or whether it's part of our own cloud solutions, things like VMware Cloud on Dell EMC, this is going to be a core part of how we deliver our cloud services and infrastructure going forward. So we really do believe this is kind of a foundational transition that's taking place. And, as we talked about, there is a ton of additional innovation that's going to come out of it. So I'm really, really excited for the next few years, because I think we're just at the start of a very long and very exciting journey. >> Awesome. Well, thank you both for spending some time with us and sharing the story, and congratulations. I'm sure a whole bunch of work from a whole bunch of people went into getting where you are now. And, as you said, Paul, the work has barely just begun. So thanks again. All right. He's Paul, he's Kit, I'm Jeff. You're watching theCUBE's continuing coverage of Dell Technologies World 2020, the Digital Experience. Thanks for watching. We'll see you next time. (Upbeat music)