Mark Nickerson & Paul Turner | VMware Explore 2022
(soft joyful music) >> Welcome back everyone to the live CUBE coverage here in San Francisco for VMware Explore '22. I'm John Furrier with my host Dave Vellante. Three days of wall to wall live coverage. Two sets here at the CUBE, here on the ground floor in Moscone, and we got VMware and HPE back on the CUBE. Paul Turner, VP of products at vSphere and cloud infrastructure at VMware. Great to see you. And Mark Nickerson, Director of Go to Market for Compute Solutions at Hewlett-Packard Enterprise. Great to see you guys. Thanks for coming on. >> Yeah. >> Thank you for having us. >> So we, we are seeing a lot of traction with GreenLake, congratulations over there at HPE. Customers are changing their business model to consumption, starting to see that accelerate. You guys have the deep partnership, we've had you guys on earlier yesterday. Talked about the technology partnership. Now, on the business side, where's the action at with HPE and you guys with the customer? Because, now as they go cloud native, third phase of the inflection point, >> Yep. >> Multi-cloud, hybrid-cloud, steady state. Where's the action at? >> So I think the action comes in a couple of places. Um, one, we see increased scrutiny around, kind of not only the cost model and the reasons for moving to GreenLake that we've all talked about there, but it's really the operational efficiencies as well. And, this is an area where the long term partnership with VMware has really been a huge benefit. We've actually done a lot of joint engineering over the years, continuing to do that co-development as we bring products like Project Monterey, or next generations of VCF solutions, to live in a GreenLake environment. That's an area where customers not only see the benefits of GreenLake from a business standpoint, um, on a consumption model, but also around the efficiency operationally as well. 
>> Paul, I want to, I want to bring up something that we always talk about on the CUBE, which is experience in the enterprise. Usually it's around, you know, technology strategy, making the right product-market fit, but HPE and VMware, I mean, have exceptional depth and experience in the enterprise. You guys have a huge customer base, doesn't churn much, steady state there, you got vSphere, killer product, with a new release coming out, HPE, unprecedented, great sales force. Everyone knows that you guys have great experience serving customers. And, it seems like now the fog is clearing, we're seeing clear line of sight into value proposition, you know, what it's worth, how do you make money with it, how do partners make money? So, it seems like the puzzle's coming together right now with consumption, self-service, developer focus. It just seems to be clicking. What's your take on all this because... >> Oh, absolutely. >> you got that engine there at VMware. >> Yeah. I think what customers are looking for, customers want that cloud kind of experience, but they want it on their terms. So, the work that we're actually doing with the GreenLake offerings that we've done, we've released, of course, our subscription offerings that go along with that. But, so, customers can now get cloud on their terms. They can get systems services. They know that they've got the confidence that we have integrated those services really well. We look at something like vSphere 8, we just released it, right? Well, immediately, day zero, we come out, we've got trusted integrated servers from HPE. Mark and his team have done a phenomenal job. We make sure that it's not just the vSphere releases but vSAN, and we get vSAN ready nodes available. So, the customers get that trusted side of things. And, you know, just think about it. We've... 200,000 joint customers. >> Yeah, that's a lot. >> We've a hundred thousand kind of enabled partners out there. 
We've an enormous kind of install base of customers. But also, those customers want us to modernize. And, you know, the fact that we can do that with GreenLake, and then of course with our new features, and our new releases. >> Yeah. And it's nice that the product-market fit's going well on both sides. But can you guys share, both of you share, the cadence of the relationship? I mean, we're talking about vSphere, every two years, a major release. Now since vSphere 6, you guys are doing three-month releases, which is amazing. So you guys got your act together there, doing great. But, you guys, so many joint customers, what's the cadence? As stuff comes out, how do you guys put that together? How tightly integrated? Can you share a quick... insight into that dynamic? >> Yeah, sure. So, I mean, Mark can add to this too, but the teams actually work very closely, where every release that we do is jointly qualified. So that's a really, really important thing. But what's more interesting is this... the innovation side of things. Right? If you just think about it, 'cause it's no use to just qualify. That's not that interesting. But, like I said, we've released with vSphere 8, you know... the new enhanced storage architecture. All right? The new, next generation of vSphere. We've got that immediately qualified, ready on HPE equipment. We built out new AI servers, actually with NVIDIA and with HPE. And, we're able to actually push the extremes of... AI and intelligence... on systems. So that's the kind of work. And then, of course, our Project Monterey work. Project Monterey, the Distributed Services Engine. That's something we're really excited about, because we're not just building a new server anymore, we're actually going to change the way servers are built. Monterey gives us a new platform to build from that we're actually jointly working on. >> So double-click on that, and then explain how HPE is taking advantage of it. 
I mean, obviously you have more diversity of XPUs, you've got isolation, you've got now better security, and confidential computing, all that stuff. Explain that in some detail, and how does HPE take advantage of it? >> Yeah, definitely. So, if you think about vSphere 8, with vSphere 8 I can now virtualize anything. I can virtualize your CPUs, your GPUs, and now what we call DPUs, or data processing units. A data processing unit, think of it as we're running, actually, effectively another version of ESX, sitting down on this processor. But, that gives us an ability to run applications, and some of the virtualization services, actually down on that DPU. It's separated away from where you run your application. So, all your applications get to consume all your CPU. It's all available to you. Your DPU is used for that virtualization and virtualization services. And that's what we've done. We've been working with HPE and Pensando. Maybe you can talk about some of the new systems that we've built around this too. >> Yeah. So, I mean, that's one of the... you talked about the cadence and that... back to the cadence question real briefly. Paul hit on it. Yeah, there's a certain element of, "Let's make sure that we're certified, we're qualified, we're there day zero." But, that cadence goes a lot beyond it. And, I think Project Monterey is a great example of where that cadence expands into really understanding the solutioning that goes into what the customer's expecting from us. So, to Paul's point, yeah, we could have just qualified the ESX version to go run on a DPU and put that in the market and said, "Okay, great. Customers, we know that it works." 
We've actually worked very tightly with VMware to really understand the use case, what the customer needs out of that operating environment, and then provide, in the first instantiation, three very discrete product solutions aimed at different use cases, whether that's a more robust use case for customers who are looking at data-intensive, analytics-intensive environments; other customers might be looking at VDI or even edge applications. And so, we've worked really closely with VMware to engineer solutions specific to those use cases, not just to a qualification of an operating environment, not just a qualification of a certain software stack, but really into an understanding of the use case, the customer solution, and how we take that to market with a very distinct point of view alongside our partners. >> And you can configure the processors based on that workload. Is that right? And match the workload characteristics with the infrastructure, is that what I'm getting? >> You do, and actually, well, you've got the same flexibility that we've actually built in. It's why you love virtualization, why people love it, right? You've got the ability to kind of harness hardware towards your application needs in a very dynamic way. Right? So if you even think about what we built in vSphere 8 from an AI point of view, we're able to scale. We built the ability to actually take network device cards, and GPU cards, and you're able to build those into a kind of composed device. And, you're able to provision those as you're provisioning out VMs. And, the cool thing about that is you want to be able to get extreme IO performance when you're doing deep learning applications, and you can now do that, and you can do it very dynamically, as part of the provisioning. So, that's the kind of stuff. You've got to really think, like, what's the use case? What's the applications? How do we build it? 
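The "composed device" provisioning described here, pairing a network card with a GPU and handing them to a VM as one unit, can be sketched abstractly. The class and function names below are illustrative stand-ins, not actual vSphere or PowerCLI APIs.

```python
from dataclasses import dataclass

@dataclass
class Device:
    kind: str   # e.g. "gpu" or "nic"
    model: str

@dataclass
class DeviceGroup:
    """A composed device: a NIC and GPU provisioned together for IO-heavy AI workloads."""
    name: str
    devices: list

def provision_vm(vm_name, group):
    """Attach every device in the composed group to the new VM in one step."""
    return {"vm": vm_name, "devices": [f"{d.kind}:{d.model}" for d in group.devices]}

# Compose the NIC + GPU pair once, then provision VMs against it dynamically.
ai_group = DeviceGroup("ai-train", [Device("gpu", "A100"), Device("nic", "ConnectX-6")])
vm = provision_vm("trainer-01", ai_group)
print(vm["devices"])  # ['gpu:A100', 'nic:ConnectX-6']
```

The point of the composition is that the provisioning step stays dynamic: the same group can back many VMs, so the IO path and the accelerator always land together.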
And, for the DPU side of things, yes, we've looked at how we take some of our security services, some of our networking services, and we push those services down onto the SmartNIC. It frees up processors. I think the most interesting thing, that you probably saw in the keynote, was we did benchmarks with Redis databases. We were seeing 20-plus. I'm not sure of the exact number, I think it was 27%, I'd have to get the exact number, but a 27% latency improvement. To me... I came from the database background, latency's everything. Latency's king. It's not just... >> Well, it's... it's the number one conversation. >> I mean, we talk about multi-cloud, and as you start getting into hybrid. >> Right. >> Latency, data movement, efficiency, I mean, this is all in the workload mindset that the workhorses that you guys have been working on at HPE with the compute, vSphere, this is the heart center of the discussion. I mean, it is under the hood, and we're talking about the engine here, right? >> Sure. >> And people care about this stuff, Mark. This is like... Kubernetes only helps this better with containers. I mean, it's all kind of coming together. Where's that developer piece? 'Cause remember, infrastructure as code, that's what everybody wants. That's the reality. >> Right. Well, I think if you take a look at where the genesis of the desire to have this capability came from, it came directly out of the fact that you take a look at the big cloud providers, and sure, the ability to have a part of that operating environment separated out of the CPU, free up as much processing as you possibly can, but it was all in this very locked-down, proprietary, can't touch it, can't develop on it. The big cloud guys owned it. VMware has come along and said, "Okay, we're going to democratize that. We're going to make this available for the masses. We're opening this up so that developers can optimize workloads, can optimize applications to run in this kind of environment." 
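The latency improvement quoted earlier in this exchange is simple arithmetic over before-and-after measurements. The millisecond figures below are hypothetical stand-ins, not the actual benchmark numbers from the keynote.

```python
def latency_improvement(baseline_ms, offloaded_ms):
    """Percent latency reduction when virtualization services move down to the DPU."""
    return round(100 * (baseline_ms - offloaded_ms) / baseline_ms, 1)

# Hypothetical p99 latencies for a Redis benchmark run, host-only vs. DPU-offloaded.
print(latency_improvement(1.00, 0.73))  # 27.0
```

For a latency-sensitive workload like Redis, a reduction of that size compounds across every request in a call chain, which is why the database crowd cares so much.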
And so, really it's about bringing that cloud experience, that demand that customers have for that simplicity, that flexibility, that efficiency, and then marrying it with the agility and security of having your on-premises or hybrid cloud environment. And VMware is kind of helping with that... >> That's resonating with the customer, I got to imagine. >> Yeah. >> What's the feedback you're hearing? When you talk to customers about that, they're like, "Wait a minute, we'd have to like... How long is that going to take? 'Cause that sounds like a one off." >> Yeah. I'll tell you what... >> Everything is a one off now. You could do a one off. It scales. >> What I hear is give me more. We love where we're going in the first instantiation of what we can do with the Distributed Services Engine. We love what we're seeing. How do we do more? How do we drive more workloads in here? How do we get more efficiency? How can we take more of the overhead out of the CPU, free up more cores? And so, it's a tremendously positive response. And then, it's a response that's resonating with, "Love it. Give me more." >> Oh, if you're democratizing, I love that word because it means democratization, but someone's being democratized. Who's... What's... Something when... that means good things are happening, which means someone's not going to be winning out. Who's that? What... >> Well, it's not necessarily that someone's not winning out. (laughs) When you really look at it, it comes down to... democratizing means making it widely available. It's available to all. And these things... >> No silos. No gatekeepers. Kind of that kind of thing. >> It's a little operationally difficult to use. You've got... Think about the DPU market. It was a divergent market with different vendors going into that market with different kinds of operating systems, and that doesn't work. Right? You've got to actually go and virtualize those DPUs. 
So then, we can actually bring application innovation onto those DPUs. We can actually start using them in smart ways. We did the same thing with GPUs. We made them incredibly easy to use. We virtualized those GPUs, we're able to, you know, you can provision them in a very simple way. And, we did the same thing with Kubernetes. You mentioned container-based applications and modern apps, and in the one platform now, you can just set a cluster and you can just say, "Hey, I want that as a modern-apps-enabled cluster." And boom. It's done. And, all of the configuration, setup, Kubernetes, it's done for you. >> But the same thing with GreenLake too, the democratization aspect of how that changed the business model unleashes... >> Right. >> ...efficiency and just simplicity. >> Oh yeah, absolutely. >> But the other thing was the 20% savings on the Redis benchmark, with no change required at the application level, correct? >> No change at the application level. In vCenter, you have to set a little flag. >> Okay. You got to tick a box. >> You got to tick a little box... >> So I can live with that. But the point I'm making is that traditionally, we've had... We've wasted an increasing amount of cycles to do offloads, and now you're doing them much more efficiently, right? >> Yes. >> Instead of using the traditional x86 way of doing stuff, you're now doing purpose-built, applying that to be much more efficient. >> Totally agree. And I think it's going to become even more important. Look at where we are... our run times for our applications. We've got to move to a world where we're building completely confidential applications at all times. And that means that they are secured, encrypted, all traffic is encrypted, whether it's storage traffic, whether it's IO traffic. We've got to make sure we've got a complete root of trust for the applications. And so, to do all of that is actually... compute intensive. It just is. 
And so, I think as we move forward and people build much more complete, confidential, compute-secured environments, you're going to be encrypting all traffic all the time. You're going to be doing micro-zoning and firewalling down at the VM level so that you've got the protection. You can take a VM, you can move it up to the cloud, it will inherit all of its policies, they'll move with it. All of that will take compute capacity. >> Yup. >> The great thing is that the DPUs give us this ability to offload and to use some of that spare compute capacity. >> And isolate, so the applications can't just tunnel in and get access to that. >> You guys got so much going on. You could have your own CUBE show, just on the updates, what's going on between the two companies, and then the innovation. We got one minute left. Just quickly, what's the goal in the partnership? What's next? You guys going to be in the field together, doing joint customer work? Are there bigger plans? Are there events out there? What are some of your plans together in the marketplace? >> That's you. >> Yup. So, I think, Paul kind of alluded to it. Talk about the fact that you've got a hundred thousand partners in common. The Venn diagram of looking at the HPE channel and the VMware channel, clearly there's an opportunity there to continue to drive a joint go-to-market message, through both of our sales organizations, and through our shared channel. We have a 25,000-strong solution architect force that we can leverage. So as we get these exciting things to talk about, I mean, you talk about Project Monterey, the Distributed Services Engine. That's big news. There's big news around vSphere 8. And so, having those great things to go talk about with that strong sales team, with that strong channel organization, I think you're going to see a lot stronger partnership between VMware and HPE as we continue to do this joint development and joint selling. >> Lots to get enthused about, pretty much there. 
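The micro-zoning idea raised just above, per-VM firewall rules evaluated default-deny, with the policies traveling along when the VM moves to the cloud, can be sketched in a few lines. The class, rule, and host names here are hypothetical, not any actual NSX or vSphere API.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    host: str
    # Default-deny micro-zoning: only explicitly allowed (src, dst) flows pass.
    allowed_flows: set = field(default_factory=set)

    def permitted(self, src, dst):
        """Zero-trust check: deny unless the flow is explicitly allowed."""
        return (src, dst) in self.allowed_flows

def migrate(vm, new_host):
    """Move a VM; its policies are part of its state, so they move with it."""
    vm.host = new_host
    return vm

web = VM("web-01", "on-prem-01", allowed_flows={("lb", "web-01"), ("web-01", "app-01")})
migrate(web, "cloud-az1")
print(web.host, web.permitted("lb", "web-01"), web.permitted("db-01", "web-01"))
# cloud-az1 True False
```

Because the allow-list is carried with the VM rather than pinned to a perimeter device, a breached neighbor ("db-01" above) still can't reach it after migration, which is the blast-radius point made in the conversation.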
>> Oh yeah! >> Yeah, I would just add that we're actually at a very interesting point as well, where Intel's just coming out with next-rev systems, and we're building the next gen of these systems. I think this is a great time for customers to look at that aging infrastructure that they have in place. Now is a time they can look at upgrading it, but when they're moving it, they can move it also to a cloud subscription-based model. You can modernize not just what you have in terms of the capabilities, and densify and get much better efficiency, but you can also modernize the way you buy from us and actually move to... >> Real positive change transformation. Checks the boxes there. And puts them in position for... >> You got it. >> ...cloud native development. >> Absolutely. >> Guys, thanks for coming on the CUBE. Really appreciate you coming out of that busy schedule and coming on and giving us the update. But again, we could do a whole show on all the moving parts and innovation going on with you guys. So thanks for coming on. Appreciate it. Thank you. I'm John Furrier with Dave Vellante. We're back with more live coverage, day two, two sets, three days of wall to wall coverage. This is the CUBE at VMware Explore. We'll be right back.
What's New in Cloud Networking
(upbeat music) >> Okay, we've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure. Now, let's get the perspective from an independent analyst. And to do so, we welcome in ESG senior analyst, Bob Laliberte. Bob, good to see you. Thanks for coming into our East Coast studios. >> Oh, thanks for having me. It's great to be here. >> So this idea of unified cloud networking approach, how serious is it? What's driving it? >> There's certainly a lot of drivers behind it, but probably the first and foremost is the fact that application environments are becoming a lot more distributed, right? So the IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations. And as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it. They have to ensure connectivity to it. And all that's driving up complexity. In fact, when we asked, in one of our last surveys last year about network complexity, more than half, 54% came out and said, "Hey, our network environment is now either more or significantly more complex than it used to be." And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So a little bit counter to the fact and really counter to their overarching digital transformation initiatives. 
From what we've seen, 9 out of 10 organizations today are either beginning, in process, or have a mature digital transformation process or initiative, but their top goals, when you look at them, and it probably shouldn't be a surprise, the number one goal is driving operational efficiency. So it makes sense. I've distributed my environment to create agility but I've created a lot of complexity. So now, I need these tools that are going to help me drive operational efficiency, drive better experiences. >> Got it. I mean, I love how you bring in the data. ESG does a great job with that. The question is, is it about just unifying existing networks or is there sort of a need to rethink, kind of do over how networks are built? >> That's a really good point. Because certainly, unifying networks helps, right. Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices, containers, are driving a lot more east-west traffic. So in the old days, it used to be easier. North-south coming out of the server, one application per server, things like that. Right now, you've got hundreds, if not thousands, of microservices communicating with each other, users communicating to 'em. So there's a lot more traffic, and a lot of it's taking place within the servers themselves. The other issue that you're starting to see as well, from that security perspective, when we were all consolidated, we had those perimeter-based, legacy, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right. When everything's spread out, that no longer happens. 
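The east-west explosion just described grows roughly quadratically: with one app per server, the interesting traffic was a handful of north-south paths, but a mesh of services can talk pairwise in either direction. A quick illustration, with hypothetical service counts:

```python
def potential_flows(n_services):
    """Each service pair can talk in either direction: n * (n - 1) directed flows."""
    return n_services * (n_services - 1)

# A small legacy estate vs. a microservices mesh (hypothetical counts).
print(potential_flows(10))    # 90
print(potential_flows(1000))  # 999000
```

That growth is why a perimeter choke point stops being a workable place to observe or secure the traffic: most of those flows never cross the perimeter at all.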
So we're absolutely seeing organizations trying to make a shift. And I think much like, if you think about the shift that we're seeing with all the remote workers in the SASE framework to enable a secure framework there, it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is really secure. Microsegmentation's another big area. So ensuring that these applications, when they're connected to each other, they're fully segmented out. And again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. So that by doing that, it really makes it a lot harder for them to see everything that's in there. >> You mentioned zero trust. It used to be a buzzword and now it's become a mandate. And I love the moat analogy. You build a moat to protect the queen in the castle. The queen's left the castle. It's just distributed. So how should we think about this Pluribus and NVIDIA solution? There's a spectrum. Help us understand that. You got appliances. You got pure software solutions. You got what Pluribus is doing with NVIDIA. Help us understand that. >> Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got decentralized firewalls that they're deploying within their data centers. 
The hard part for that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly, as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers, okay? So it's a great approach, right? It brings it really close to the applications. But the things you start running into there, there's a couple of things. One is that you start seeing that the DevOps teams start taking on that networking and security responsibility. >> Which they don't want to do. >> They don't want to do, right. And the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it. So when we think about all those types of things, right, and certainly, the other side effect of that is the impact on the performance, but there's also a cost. So if you have to buy more servers, because your CPUs are being utilized, right, and you have hundreds or thousands of servers, right, those costs are going to add up. So what NVIDIA and Pluribus have done by working together is to be able to take some of those services and be able to deploy them onto a SmartNIC, right, be able to deploy the DPU-based SmartNIC into the servers themselves, and then Pluribus has come in and said, "We're going to create that unified fabric across the networking space into those networking services all the way down to the server." 
So the benefits of having that are pretty clear in that you're offloading that capability from the server. So your CPUs are optimized. You're saving a lot of money. You're not having to go outside of the server and go to a different rack somewhere else in the data center. So your performance is going to be optimized as well. You're not going to incur any latency hit for every round trip to the firewall and back. So I think all those things are really important, plus the fact that you're going to see, from an organizational aspect, we talked about the DevOps and NetOps teams, the network operations teams now can work with the security teams to establish the security policies and the networking policies so that the DevOps teams don't have to worry about that. So essentially, they just create the guardrails and let the DevOps team run, 'cause that's what they want. They want that agility and speed. >> Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted. The cores are wasted doing storage offload or networking or security offload. And I've said many times, everybody needs a Nitro, like the Amazon Nitro. You can only buy Amazon Nitro if you go into AWS, right. But everybody needs a Nitro. So is that how we should think about this? >> Yeah, that's a great analogy to think about this. And I think I would take it a step further because it's almost the opposite end of the spectrum, because Pluribus and NVIDIA are doing this in a very open way. And so Pluribus has always been a proponent of open networking. And so what they're trying to do is extend that now to these distributed services. So the work with NVIDIA is also open as well, being able to bring that to bear so that organizations can not only take advantage of these distributed services, but also that unified networking fabric, that unified cloud fabric, across that environment from the server across the switches. 
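The 25 to 30% wasted-cycles figure above implies a straightforward capacity calculation: if infrastructure overhead no longer eats into host cores, each server carries more application work, so the same workload needs fewer servers. The core counts and workload size below are illustrative, not measured values.

```python
import math

def servers_needed(total_app_cores, cores_per_server, overhead_fraction):
    """Cores lost to networking/storage/security overhead shrink usable capacity per host."""
    usable_per_server = cores_per_server * (1 - overhead_fraction)
    return math.ceil(total_app_cores / usable_per_server)

# Hypothetical fleet: 10,000 cores of application demand on 64-core hosts.
before = servers_needed(10_000, 64, 0.30)  # overhead runs on host CPUs
after = servers_needed(10_000, 64, 0.0)    # overhead offloaded to DPUs
print(before, after)  # 224 157
```

At fleet scale, that gap (roughly 30% fewer hosts in this toy example) is where the cost argument for DPU offload comes from, independent of the latency benefit.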
The other key piece of what Pluribus is doing, because they've been doing this for a while now and they've been doing it with the older application environments and the older server environments, they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments, bare metal. You could go any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >> So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >> Yeah, well, think what it does, again, from that operational efficiency, when you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration because you're not switching different management systems to accomplish that. You've got the same, unified networking fabric that you've been working with to enable you to run your legacy as well as transfer over to those modern applications as well. >> Got it, so your people are comfortable with the skillsets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >> So I think, obviously, with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk on these organizations to be able to get not only security, but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. 
So getting operational efficiency from a unified cloud networking solution, that it goes from the server across the servers to multiple different environments, right, in different cloud environments, is certainly going to help organizations drive that operational efficiency, it's going to help them save money for visibility, for security, and even open networking. So a great opportunity for organizations, especially large enterprises, cloud providers, who are trying to build that hyperscale-like environment. You mentioned the Nitro card. This is a great way to do it with an open solution. >> Love it. Bob, thanks so much for coming in and sharing your insights. I appreciate it. >> You're welcome, thanks. >> All right, in a moment, I'll be back to give you some closing thoughts on unified cloud networking and the key takeaways from today. You're watching "theCUBE", your leader in enterprise tech coverage. (upbeat music)
Ami Badani, NVIDIA & Mike Capuano, Pluribus Networks
(upbeat music) >> Let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of Networking Marketing and Developer Ecosystem at NVIDIA. Great to have you, welcome folks. >> Thank you. >> Thanks. >> So let's get into the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have, Mike? Let's get into it. >> The challenges that we're looking at are for non-hyperscalers, that's enterprises, governments, Tier 2 service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down. It's only getting worse and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> With that goal in mind, what's the Pluribus vision, how does this tie together? >> So basically what we see is that this demands a new architecture and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there's sort of like discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks; that needs to be unified. If we want these folks to be able to be agile they need to be able to issue a single command or instantiate a security policy across all of those locations with one command and not have to go to each one. The second is, like I mentioned, distributed security. 
Distributed security without compromise, extended out to the host, is absolutely critical. So micro segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. It's sort of like with security, if you can't see it, you can't protect it. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction. Abstract the complexity of all these discrete networks, whatever's down there in the physical layer. I don't want to see it. I want to abstract it. I want to define things in software but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet is SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations, NextGen. How do we get there? How do customers get this vision realized? 
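The "single command across all of those locations" tenet above can be sketched as a toy control-plane fan-out. Everything here is hypothetical for illustration — the `Policy` shape, the site names, and the `ControlPlane` class are invented, not Pluribus's actual API.

```python
# A minimal sketch of "one command, every location": a toy control plane
# fans a single security-policy definition out to each registered site,
# instead of the operator configuring each site by hand.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    allow_src: str
    allow_dst: str
    port: int

class ControlPlane:
    def __init__(self, sites):
        self.sites = list(sites)                    # e.g. private DC, edge, public cloud
        self.applied = {s: [] for s in self.sites}  # what each site has received

    def apply_policy(self, policy: Policy):
        """Single call; the control plane handles per-site distribution."""
        for site in self.sites:
            self.applied[site].append(policy)
        return {site: policy.name for site in self.sites}

cp = ControlPlane(["private-dc", "edge-west", "public-cloud-east"])
result = cp.apply_policy(Policy("web-to-db", "web-tier", "db-tier", 5432))
print(result)
```

The point of the sketch is the shape of the operation: one `apply_policy` call, not one session per location.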
And what's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over 100 Tier 1 mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is we're extending from the switch to this NVIDIA BlueField-2 DPU. We know there's. >> Hold that up real quick. That's a good prop. That's the BlueField NVIDIA card. >> It's the NVIDIA BlueField-2 DPU, data processing unit. What we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power. So we knew that we didn't want to do we didn't want to implement that running on the CPUs which is what some other companies do. Because it consumes revenue generating CPUs from the application. So a DPU is a perfect way to implement this. And we knew that NVIDIA was the leader with this BlueField-2. And so that is the first, that's the first step into getting, into realizing this vision. >> NVIDIA has always been powering some great workloads of GPUs, now you got DPUs. Networking and NVIDIA as here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> We've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition. And what Pluribus is trying to build and what NVIDIA has. So we have, this concept of a blue field data processing unit, which, if you think about it, conceptually does really three things, offload, accelerate, and isolate. So offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate, so there's a bunch of acceleration engines. You can run infrastructure workloads much faster than you would otherwise. 
And then isolation, So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, a couple years ago. And with Pluribus we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And so what Pluribus is really trying to do is extending the network fabric from the host from the switch to the host and really have that single pane of glass for network operators to be able to configure, provision, manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started really with extending the network fabric and now we're also working with them on security. If you sort of take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro segmentation. And so now you can take that extend it to the data processing unit and really have isolated micro segmentation workloads whether it's bare metal, cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud, hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on the DPU. 
>> You know what I love about this conversation is it reminds me of when you have these changing markets. The product gets pulled out of the market and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you how do you guys differentiate what sets this apart for customers? What's in it for the customer? >> So I mentioned three things in terms of the value of what the BlueField brings. There's offloading, accelerating and isolating. And that's sort of the key core tenets of BlueField. So that, if you sort of think about what BlueField what we've done, in terms of the differentiation. We're really a robust platform for innovation. So we introduced BlueField-2 last year. We're introducing BlueField-3 which is our next generation of blue field. It'll have 5X the ARM compute capacity. It will have 400 gig line rate acceleration, 4X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, chips to our portfolio every 18 months to two years. So that's sort of one of the key areas of differentiation. The other is that if you look at NVIDIA, what we're sort of known for is really known for our AI, our artificial intelligence and our artificial intelligence software, as well as our GPU. So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates faster, more efficient secure AI systems from, the core of your data center, all the way out to the edge. And so with NVIDIA we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. 
One of the key, one of our key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU we've created an SDK, which is an open SDK called DOCA. And it's an open SDK for our partners to really build and develop solutions using BlueField and using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >> What's exciting is when I hear you talk it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment, super cloud or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. And it's all kind of, again, this new architecture, Mike, you were talking about. How do customers run this effectively, cost effectively? And how do people migrate? >> I think that is the key question. So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed but, enterprises and Tier 2 service providers and Tier 1 service providers and governments are not Amazon. So they need to migrate there and they need this architecture to be cost effective. And that's super key. I mean, the reality is DPUs are moving fast but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs. IOT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU. And by leveraging the NVIDIA BlueField DPU what we really like about it is, it's open and that drives cost efficiencies. 
And then, with this our architectural approach effectively you get a unified solution across switch and DPU, workload independent. It doesn't matter what hypervisor it is. Integrated visibility, integrated security and that can create tremendous cost efficiencies and really extract a lot of the expense from a capital perspective out of the network as well as from an operational perspective because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service, or to deploy a security policy and is deployed everywhere automatically saving the network operations team and the security operations team time. >> So let me rewind that 'cause that's super important. Got the unified cloud architecture. I'm the customer, it's implemented. What's the value again, take me through the value to me. I have a unified environment. What's the value? >> I mean the value is effectively, there's a few pieces of value. The first piece of value is I'm creating this clean demark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU there's some conflict between the DevOps team who own the server, and the NetOps team who own the network because they're installing software on the CPU stealing cycles from what should be revenue generating CPUs. So now by terminating the networking on the DPU we create this real clean demark. So the DevOps folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetOps team because they want to control the networking. And now we've got this clean demark where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. 
This is essential, I mentioned it earlier, pushing out micro segmentation and distributed firewall basically at the application level, where I create these small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall they're contained once they get inside. 'Cause the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc. And so that's why this is so essential. And the next benefit obviously is this unified networking operating model. Having an operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on really strategy instead of spending an afternoon deploying a single VLAN for example. >> Awesome, and I think also from my standpoint, I mean perimeter security is pretty much, that out there, I guess the firewall still out there exists but pretty much they're being breached all the time, the perimeter. You have to have this new security model. And I think the other thing that you mentioned, the separation between DevOps, is cool because infrastructure as code is about making the developers be agile and build security in from day one. So this policy aspect is a huge new control plane. I think you guys have a new architecture that enables the security to be handled more flexibly. That seems to be the killer feature here. 
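The application-level micro-segmentation Mike describes — small segments, default deny, so a bad actor who gets inside is contained — can be sketched in a few lines. The segment names and rules below are invented for illustration only.

```python
# A toy model of application-level micro-segmentation: traffic is denied
# by default, and only flows explicitly allowed between two segments get
# through -- so a compromised workload cannot move laterally.

def make_firewall(allowed_flows):
    """allowed_flows: set of (src_segment, dst_segment, port) tuples."""
    def is_allowed(src, dst, port):
        return (src, dst, port) in allowed_flows   # default deny
    return is_allowed

rules = {
    ("web", "app", 8443),   # web tier may call the app tier
    ("app", "db", 5432),    # app tier may call the database
}
fw = make_firewall(rules)

print(fw("web", "app", 8443))   # True: explicitly allowed
print(fw("web", "db", 5432))    # False: web may not reach the database directly
print(fw("db", "web", 8443))    # False: lateral movement from db is contained
```

Everything not on the allow list is dropped, which is the containment property described above: the guardrails are the rules, and anything outside them fails closed.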
And I think that's sort of what you see with the partnership between Pluribus and NVIDIA is the DPU is really the foundation of zero trust and Pluribus is really building on that vision with allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >> This is super exciting. This is illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I got to ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with NVIDIA? >> We're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market so we're jointly marketing it. Obviously we appreciate that NVIDIA's open that's sort of in our DNA, we're about a open networking. They've got other ISVs who are going to run on BlueField-2. We're probably going to run on other DPUs in the future. But right now we feel like we're partnered with the number one provider of DPUs in the world and super excited about making a splash with it. >> Oh man NVIDIA got the hot product. >> So BlueField-2 as I mentioned was GA last year, we're introducing, well we now also have the converged accelerator. So I talked about artificial intelligence our artificial intelligence software with the BlueField DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads, so if you have an artificial intelligence workload and an infrastructure workload, you can work on them separately on the same platform or you can actually use you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. So that's available now. Then we have BlueField-3 which will be available late this year. 
And I talked about sort of, how much better that next generation of BlueField is in comparison to BlueField-2. So we'll see BlueField-3 shipping later on this year. And then our software stack which I talked about, which is called DOCA. We're on our second version, our DOCA 1.2 we're releasing DOCA 1.3 in about two months from now. And so that's really our open ecosystem framework. So allow you to program the BlueField. So we have all of our acceleration libraries, security libraries, that's all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. So as we add new generations of BlueField, next year we'll have another version and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once. And then it automatically works with future generations of BlueField. So that's sort of the nice thing around DOCA. And then in terms of our go to market model we're working with every major OEM. Later on this year you'll see, major server manufacturers releasing BlueField enabled servers, so more to come. >> Awesome, save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >> And one thing I'll add is we are, we have a number of customers as you'll hear in the next segment that are already signed up and will be working with us for our early field trial starting late April early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft. If you're interested in signing up for being part of our field trial and providing feedback on the product >> Awesome innovation and networking. Thanks so much for sharing the news. Really appreciate, thanks so much. In a moment we'll be back to look deeper in the product the integration, security, zero trust use cases. 
You're watching theCUBE, the leader in enterprise tech coverage. (upbeat music)
Craig Nunes & Tobias Flitsch, Nebulon | CUBEconversations
(upbeat intro music) >> More than a decade ago, the team at Wikibon coined the term Server SAN. We saw the opportunity to dramatically change the storage infrastructure layer and predicted a major change in technologies that would hit the market. Server SAN had three fundamental attributes. First of all, it was software led. So all the traditionally expensive controller functions like snapshots and clones and de-dupe and replication, compression, encryption, et cetera, they were done in software, directly challenging a two to three decade long storage controller paradigm. The second principle was it leveraged and shared storage inside of servers. And the third, it enabled any-to-any topology between servers and storage. Now, at the time we defined this coming trend in a relatively narrow sense inside of a data center location, for example, but in the past decade, two additional major trends have emerged. First the software defined data center became the dominant model, thanks to VMware and others. And while this eliminated a lot of overhead, it also exposed another problem. Specifically, data centers today allocate, we estimate, around 35% of CPU cores and cycles to managing things like storage and network and security, offloading those functions. This is wasted cores, and doing this with traditional general purpose x86 processors is expensive and it's not efficient. This is why we've been reporting so aggressively on ARM's ascendancy into the enterprise. It's not only coming, it's here, and we're going to talk about that today. The second mega trend is cloud computing. Hyperscale infrastructure has allowed technology companies to put a management and control plane into the cloud and expand beyond our narrow server SAN scope within a single data center and support the management of distributed data at massive scale. And today we're on the cusp of a new era of infrastructure. And one of the startups in this space is Nebulon. 
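The "software led" attribute above — controller functions like snapshots done in software rather than in a dedicated storage controller — can be illustrated with a toy snapshot mechanism. This is a teaching sketch, not any vendor's implementation: the `Volume` class is invented, and for simplicity the snapshot copies the whole block map rather than using true copy-on-write metadata.

```python
# A toy software snapshot: the snapshot freezes the block map at a point
# in time, and later writes to the live volume do not disturb it. Real
# systems do this with copy-on-write metadata; this sketch just copies
# the (block -> data) mapping to keep the idea visible in a few lines.

class Volume:
    def __init__(self):
        self.blocks = {}      # block number -> data
        self.snapshots = []   # list of frozen block maps

    def write(self, block, data):
        self.blocks[block] = data

    def snapshot(self):
        snap = dict(self.blocks)   # freeze current state
        self.snapshots.append(snap)
        return snap

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                 # later write; snapshot is unaffected
print(snap[0], vol.blocks[0])      # -> v1 v2
```

The point is that nothing here needs special controller hardware — which is exactly the economic argument the Server SAN definition was making.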
Hello everybody, this is Dave Vellante, and welcome to this Cube Conversation where we welcome in two great guests, Craig Nunes, Cube alum, co-founder and COO at Nebulon and Tobias Flitsch who's director of product management at Nebulon. Guys, welcome. Great to see you. >> So good to be here Dave. Feels awesome. >> Soon, face to face. Craig, I'm heading your way. >> I can't wait. >> Craig, you heard my narrative upfront and I'm wondering are those the trends that you guys saw when you, when you started the company, what are the major shifts in the world today that, that caused you and your co-founders to launch Nebulon? >> Yeah, I'll give you sort of the way we think about the world, which I think aligns super right with, with what you're talking about, you know, over the last several years, organizations have had a great deal of experience with public cloud data centers. And I think like any platform or technology that is, you know, gets its use in a variety of ways, you know, a bit of savvy is being developed by organizations on, you know, what do I put where, how do I manage things in the most efficient way possible? And there are, in terms of the types of folks we're focused on in Nebulon's business, we see now kind of three groups of people emerging, and, and we sort of simply coined three terms, the returners, the removers, and the remainers. I'll explain what I mean by each of those, the returners are folks who maybe early on, you know, hit the gas on cloud, moved, you know, everything in, a lot in, and realize that while it's awesome for some things, for other things, it was less optimal. Maybe cost became a factor or visibility into what was going on with their data was a factor, security, service levels, whatever. And they've decided to move some of those workloads back. Returners. There are what I call the removers that are taking workloads from, you know, born in the cloud. 
On-prem, you know, and this was talked a lot about in Martine's blog that, you know, talked about a lot of the growth companies that built up such a large footprint in the public cloud, that economics were kind of working against them. You can, depending on the knobs you turn, you know, you're probably spending two and a half X, two X, what you might spend if you own your own factory. And you can argue about, you know, where your leverage is in negotiating your pricing with the cloud vendors, but there's a big gap. The last one is, and I think probably the most significant in terms of who we've engaged with is the remainers. And the remainers are, you know, hybrid IT organizations. They've got assets in the cloud and on-prem, they aspire to an operational model that is consistent across everything and, you know, leveraging all the best stuff that they observed in their cloud-based assets. And it's kind of our view that frankly we take from, from this constituency that, when people talk about cloud or cloud first, they're moving to something that is really more an operating model versus a destination or data center choice. And so, we get people on the phone every day, talking about cloud first. And when you kind of dig into what they're after, it's operating model characteristics, not which data center do I put it in, and those, those decisions are separating. And so that, you know, it's really that focus for us is where, we believe we're doing something unique for that group of customers. >> Yeah. Cloud first doesn't doesn't mean cloud only. And of course followers of this program know, we talk a lot about this, this definition of cloud is changing, it's evolving, It's moving to the edge, it's moving to data centers, data centers are moving to the cloud. Cross-cloud, it's that big layer that's expanding. And so I think the definition of cloud, even particularly in customer's minds is evolving. There's no question about it. 
People, they'll look at what VMware is doing in AWS and say, okay, that's cloud, but they'll also look at things like VMware cloud foundation and say oh yeah, that's cloud too. So to me, the beauty of cloud is in the eye of the customer beholder. So I buy that. Tobias. I wonder if you could talk about how this all translates into product, because you guys start up, you got to sell stuff, you use this term smart infrastructure, what is that? How does this all turn into stuff you can sell? >> Right. Yeah. So let me back up a little bit and talk a little bit about, you know, what we at Nebulon do. So we are a cloud based software company, and we're delivering sort of a new category of smart infrastructure. And if you think about things that you would know from your everyday surroundings, smart infrastructure is really all around us. Think smart home technology like Google Nest as an example. And what this all has in common is that there's a cloud control plane that is managing some IOT end points and smart devices in various locations. And by doing that, customers gain benefits like easy remote management, right? You can manage your thermostat, your temperature from anywhere in the world basically. You don't have to worry about automated software updates anymore, and you can easily automate your home, your infrastructure, through this cloud control plane and translating this idea to the data center, right? This idea is not necessarily new, right? If you look into the networking space with Meraki networks, now Cisco or Mist Systems now Juniper, they've really pioneered efforts in cloud management. So smart network infrastructure, and the key problem that they solved there is, you know, managing these vast amount of access points and switches that are scattered across the data centers across campuses, and, you know, the data center. 
Now, if you translate that to what Nebulon does, it's really applying this smart infrastructure idea, this methodology, to application infrastructure, specifically to compute and storage infrastructure. And that's essentially what we're doing. So with smart infrastructure, basically our offering at Nebulon, the product comes with the benefits of this cloud experience, public cloud operating model, as we've talked about, some of our customers look at the cloud as an operating model rather than a destination, a physical location. And with that, we bring this model, this experience, as an operating model to on-premises application infrastructure, and really what you get with this broad offering from Nebulon, the benefits really circle around, you know, four areas. First of all, rapid time to value, right? So application owners, think people that are not specialists or experts when it comes to IT infrastructure, but more generalists, they can provision on-premise application infrastructure in less than 10 minutes, right? It can go from just bare metal physical racks to the full application stack in less than 10 minutes, so they're up and running a lot quicker. And they can immediately deliver services to their end customers. Cloud-like operations, this notion of zero touch remote management, which now, with the last couple of months, with this strange time that we're in with COVID, you know, turns out to be becoming more and more relevant, really, as in remotely administering and managing infrastructure that scales from just hundreds of nodes to thousands of nodes. It doesn't really matter, with behind the scenes software updates, with global AI analytics and insights, and basically overall combined reduce the operational overhead when it comes to on-premises infrastructure by up to 75%, right? The other thing is support for any application, whether it's containerized, virtualized, or even bare metal applications. 
And the idea here is really consistent: leveraging server-based storage that doesn't require any Nebulon-specific software on the server. So you get the full power of your application servers for your applications, again, as the servers were intended. And then the fourth benefit when it comes to smart infrastructure is, of course, doing this all at a lower cost and with better data center density. And that is really comparing it to three-tier architectures, where you have your server, your SAN fabric, and then you have external storage, but also when you compare it with hyper-converged infrastructure software, right, that is consuming resources of the application servers, think CPU, think memory and networking. So basically you get a lot more density with that approach compared to those architectures. >> Okay, I want to dig into some of that differentiation too, but what exactly do I buy from you? Do I buy a software subscription? Is that right? Can you explain that a little bit? >> Right. So basically the way we do this is really leveraging two key new innovations, right? And you see why I made the bridge to smart home technology, because the approach is similar, right? The one is, you know, the introduction of a cloud control plane that basically manages this on-premises application infrastructure, and of course that is delivered to customers as a service. The second one is, you know, a new infrastructure model that uses IoT endpoint technology, and that is embedded into standard application servers and the storage within those application servers. Let me add a couple of words to that to explain a little bit more. So really at the heart of smart infrastructure, in order to deliver this public cloud experience for any on-prem application, is this cloud-based control plane, right? So we've built this the way we'd recommend our customers build for a public cloud, and that is, you know, building your software on modern technologies that are vendor-agnostic.
So it could essentially run anywhere, whether it is, you know, any public cloud vendor, or if we want to run it in our own data centers when regulatory requirements change. It's massively scalable and responsive, no matter how large the managed infrastructure is. But really the interesting part here, Dave, is that the customer doesn't really have to worry about any of that, it's delivered as a service. So what a customer gets from this cloud control plane is a single API endpoint, just like they'd get with a public cloud. They get a web user interface from which they can manage all of their infrastructure, no matter how many devices, no matter where it is, it can be in the data center, it can be in an edge location anywhere in the world. They get template-based provisioning, much like a marketplace in a public cloud. They get analytics, predictive support services, and super easy automation capabilities. Now, the second thing that I mentioned is this server-embedded software, the server-embedded infrastructure software, and that is running on a PCIe-based offload engine. And that is really acting as this managed IoT endpoint within the application server that I mentioned earlier. And that approach really further converges modern application infrastructure. And it really replaces the software-defined storage approach that you'll find in hyper-converged infrastructure software, and that is by embedding the data services, the storage data services, into silicon within the server. Now, this offload engine, we call that a services processing unit, or SPU for short. And that is really what differentiates us from hyper-converged infrastructure. And it's quite different than a regular accelerator card that you get with some of the hyper-converged infrastructure offerings. And it's different in the sense that the SPU runs basically all of the shared and local data services, it's not just accelerating individual algorithms, individual functions.
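The single-API-endpoint, template-based provisioning model described above can be sketched in a few lines. This is a toy illustration of the concept only; every class, template, and field name below is invented and does not reflect Nebulon's actual API.

```python
# Toy sketch of a cloud control plane managing many on-prem endpoints
# through one API surface. All names here are hypothetical illustrations,
# not Nebulon's real product interface.

class CloudControlPlane:
    """One control plane for every managed site, data center or edge."""
    def __init__(self):
        self.templates = {}   # template name -> desired application stack
        self.devices = {}     # device id -> {"site": ..., "stack": ...}

    def register_device(self, device_id, site):
        # An SPU-equipped server "phones home"; no per-site console needed.
        self.devices[device_id] = {"site": site, "stack": None}

    def add_template(self, name, stack):
        # Marketplace-style template: bare metal straight to app stack.
        self.templates[name] = stack

    def provision(self, device_id, template):
        # The same call works whether the target sits in a data center
        # or an edge location anywhere in the world.
        self.devices[device_id]["stack"] = self.templates[template]
        return self.devices[device_id]

cp = CloudControlPlane()
cp.add_template("vm-farm", ["hypervisor", "shared-storage"])
cp.register_device("edge-001", site="store-42")
cp.register_device("dc-007", site="main-dc")
print(cp.provision("edge-001", "vm-farm")["stack"])
```

The point of the sketch is the shape of the interaction: templates and device state live centrally, and provisioning any site is one call against one endpoint rather than a per-cluster console session.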
And it basically provides all of these services beside the CPU, with the boot drive, with data drives. And in essence it provides you with a separate fault domain from the server, so for example, if you reboot your server, the data plane remains intact. You know, it's not impacted by that. >> Okay. So I want to stay on that for just a second, Craig, if I could. I get very clearly how you're different from, as Tobias said, the three-tier server, SAN fabric, external array. The HCI thing's interesting because in some respects, the HCI guys, take Nutanix, they talk about cloud and becoming more friendly with developers and the API piece, but what's your point of view, Craig, on how you position relative to, say, HCI? >> Yeah, absolutely. So everyone gets what three-tier architecture is and was, and HCI software, you know, emerged as an alternative to the three-tier architectures. Everyone I think today understands that data services, you know, SDS, is software hosted in the operating system of each HCI device and consumes some amount of CPU, memory, network, whatever. And it's typically constrained to a hypervisor environment, which is where most of that stuff is done. And over time, these platforms have added some monitoring capabilities, predictive analytics, typically provided by the vendor's cloud, right? And as Tobias mentioned, some HCI vendors have augmented this approach by adding an accelerator to make things like compression and dedupe go faster, right? Think SimpliVity or something like that. The difference that we're talking about here is, the infrastructure software that we deliver as a service is embedded right into server silicon. So it's not sitting in the operating system of choice. And what that means is you get the full power of the server you bought for your workloads. It's not constrained to a hypervisor-only environment, it's OS-agnostic.
And, you know, it's entirely controlled and administered by the cloud, versus with, you know, most HCI it's an on-prem console that manages a cluster or two on-prem. And, you know, think of it from an automation perspective. When you automate something, you've got to set up your playbook kind of cluster by cluster. And depending on what versions they're on, APIs are changing, behaviors are changing. So a very different approach at scale. And so again, for us, we're talking about something that gives you a much more efficient infrastructure that is then managed by the cloud and gives you this full kind of operational model you would expect for any kind of cloud-based deployment. >> You know, I got to go back, you guys obviously have some 3PAR DNA hanging around and, you know, of course you remember well the 3PAR ASIC, it was kind of famous at the time and it was unique. And I bring that up only because you've mentioned a couple of times the silicon, and a lot of people say yeah, whatever, but we have been on this, particularly with ARM. And I want to share with the audience, if you follow my breaking analysis, you know this. If you look at the historical curve of Moore's law with x86, it's the doubling of performance every two years, roughly; that comes out to about 40% a year. That's moderated down to about 30% a year now. If you look at the ARM ecosystem and take, for instance, the Apple A15 and the previous series, for example, over the last five years, the performance, when you combine the CPU, GPU, NPU, the accelerators, the DSPs, which by the way are all customizable, that's growing at 110% a year, and the SoC costs 50 bucks. So my point is that you guys are a perfect example of doing offloads with a way more efficient architecture. You're now on that curve that's growing at 100% plus per year, whereas a lot of the legacy storage is still on that 30% a year curve, and so cheaper, lower power.
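The growth-rate arithmetic quoted above is easy to check. The figures (doubling every two years, 30% vs 100%-plus annual growth) are the ones cited in the conversation, not independent measurements:

```python
# Checking the compounding math behind the quoted growth rates.
# "Doubling every two years" as an annual rate: 2**(1/2) - 1.
annual = lambda factor, years: factor ** (1 / years) - 1

x86_classic = annual(2, 2)           # historical Moore's-law pace
print(f"{x86_classic:.0%}")          # ~41%/yr, the "about 40%" quoted

# A 100%/yr curve vs a 30%/yr curve: the gap compounds quickly.
gap = (2.0 ** 5) / (1.3 ** 5)        # after five years
print(f"{gap:.1f}x")                 # ~8.6x advantage after 5 years
```

This is why the offload argument is about the curve you are riding, not the starting point: a few years of compounding at the faster rate dominates any one-time efficiency difference.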
That's why I love it, as you were bringing in the IoT and the smart infrastructure; this is the future of storage and infrastructure. >> Absolutely. And the thing I would emphasize is it's not limited to storage. Storage is a big issue, but we're talking about your application infrastructure. And you brought up something interesting on the GPU, the SmartNIC of things, and just to kind of level set with everybody there, there's the HCI world, and then there's this SmartNIC/DPU world, whatever you want to call it, where it's effectively a network card, it's got that specialized processing onboard and firmware to provide some network, security, storage services, and think of it as a PCIe card in your server. It connects to an external storage system, so think NVIDIA BlueField-2 connecting to an external NVMe storage device. And the interesting thing about that is, you know, storage processing is offloaded from the server. So as we said earlier, good, right, you get the server back for your applications, but storage moves out of the server. And it starts to look a little bit like an external storage approach versus a server-based approach. And infrastructure management is done by, you know, the server SmartNIC, with some monitoring and analytics coming from, you know, your supplier's cloud support service. So complexity creeps back in if you start to lose that, you know, heavily converged approach. Again, we are taking advantage of storage within the server and, you know, keeping this a real server-based approach, but distinguishing ourselves from the HCI approach, 'cause there's a real ROI there. And when we talk to folks who are looking at new and different ways, we talk a lot about the cloud, and I think we've done a bit of that already, but then at the end of the day, folks are trying to figure out, well, okay, but then what do I buy to enable this? And what you buy is your standard server recipe.
So think your favorite HPE, Lenovo, Supermicro, whatever your brand, and it's going to come enabled with this IoT endpoint within it, so it's really a smart server, if you will, that can then be controlled by our cloud. And so you're effectively buying, you know, from your favorite server vendor, a server option that is this endpoint, and a subscription. You don't buy any of this from us, by the way, it's all coming from them. And that's the way we deliver this. >> You know, sorry to get into the plumbing, but this is something we've been on, and a facet of it: is that silicon custom-designed, or is it pretty much off the shelf? Do you guys add any value to it? >> No, there are off-the-shelf options that can deliver tremendous horsepower in that form factor. And so we take advantage of that to, you know, do what we do in terms of, you know, creating these sort of smart servers with our endpoint. And so that's where we're at. >> Yeah. Awesome. So guys, what's your sweet spot, you know, why are customers, you know, what are you seeing customers adopting? Maybe some examples you guys can share? >> Yeah, absolutely. So I think Tobias mentioned that because of the architectural approach, there's a lot of flexibility there, you can run virtualized, containerized, bare metal applications. The question is where are folks choosing to get started? And those use cases with our existing customers revolve heavily around virtualization modernization. So they're going back into their virtualized environment, whether their existing infrastructure is array-based or HCI-based, and they're looking to streamline that, save money, automate more, the usual things. The second area is the distributed edge. You know, the edge is going through tremendous transformation with IoT devices, 5G, and trying to get processing closer to where customers are doing work. And so that distributed edge is a real opportunity because, again, it's a more cost-effective, more dense infrastructure.
The cloud effectively can manage across all of these sites through a single API. And then the third area is cloud service provider transformation. We do a fair bit of business with, you know, cloud service providers, CSPs, who are looking at trying to build top-line growth, trying to create new services, and drive a better bottom line. And so this is really, you know, as much a revenue opportunity for them as a cost-saving opportunity. And then the last one is this notion of, you know, bringing the cloud on-prem. We've done a cloud repatriation deal, and I know you've seen a little of that, but maybe not a lot of it. And, you know, I can tell you in our first deals we've already seen it, so it's out there. Those are the places where people are getting started with us today. >> It's just interesting, you're right. I don't see a ton of it, but if I'm going to repatriate, I don't want to go backwards. I don't want to repatriate to legacy. So it actually does kind of make sense that I repatriate to essentially a component of on-prem cloud that's managed in the cloud. That makes sense to me to buy. But today you're managing from the cloud, you're managing on-prem infrastructure. Maybe you could show us a little leg, share a little roadmap. I mean, where are you guys headed from a product standpoint? >> Right, so I'm not going to go too far out on the limb there, but obviously, right, one of the key benefits of a cloud-managed platform is this notion of a single API, right? We talked about the distributed edge where, you know, think of a retailer that has, you know, thousands of stores, each store having local infrastructure. And, you know, if you think about the challenges that come with, you know, just administering those systems, rolling out firmware updates, rolling out updates in general, monitoring those systems, et cetera, having a single console, a cloud console, to administer all of that infrastructure, obviously, you know, the benefits are easy to see.
If you think about that and spin it further, right, so from the use cases and the types of users that we've seen, and Craig talked about them at the very beginning, you can think about this as a hybrid world, right? Customers will have data in the public cloud. They will have data and applications in their data centers and at the edge. Obviously it is our objective to deliver the same experience that they gained from the public cloud on-prem, and eventually, you know, those two things can come closer together. Apart from that, we're constantly improving the data services. And as you mentioned, ARM is on a path that is becoming stronger and faster. So obviously we're going to leverage that and build out our data storage services and become faster. But really the key thing that I'd like to mention all the time, and this is related to roadmap, but rather about feature delivery, right? So the majority of what we do is in the cloud. Our business logic is in the cloud; the capabilities, the things that make infrastructure work, are delivered in the cloud. And, you know, it's provided as a service. So compare it with your Gmail, you know, your cloud services: one day you don't have a feature, the next day you have a feature. So we're continuously rolling out new capabilities through our cloud. >> And that's about feature acceleration as opposed to technical debt, which is what you get with legacy features, feature creep. >> Absolutely. The other thing I would say too, is a big focus for us now is to help our customers more easily consume this new concept. And we've already got, you know, SDKs for things like Python and PowerShell and some of those things, but we've got, I think, nearly ready, an Ansible SDK. We're trying to help folks better, kind of use case by use case, spin this stuff up within their organization, their infrastructure.
Because again, part of our objective, we know that IT professionals have, you know, a lot of inertia when they're, you know, moving stuff around in their own data center. And we're aiming to make this, you know, a much simpler, more agile experience to deploy and grow over time. >> We've got to go, but Craig, quick company stats. Am I correct, you've raised just under 20 million? Where are you on funding? What's your headcount today? >> I am going to plead the Fifth on all of that. >> Oh, okay. Keep it stealth. Staying a little stealthy, I love it. Really excited for you. I love what you're doing. It's really starting to come into focus. And so congratulations. You know, you've got a ways to go, but Tobias and Craig, appreciate you coming on theCUBE today. And thank you for watching this CUBE Conversation. This is Dave Vellante. We'll see you next time. (upbeat outro music)
Pradeep Sindhu | Fungible
>> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology; rather it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia, for certain applications like gaming and crypto mining, and more recently machine learning. But in the middle of the last decade we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank-you, Dave, and thank-you for having me. >> You're very welcome. So okay, my first question is don't CPUs and GPUs process data already? Why do we need a DPU? >> That is a natural question to ask. And CPUs have been around in one form or another for almost 55, maybe 60 years. And this is when general purpose computing was invented, and essentially all CPUs went to the x86 architecture by and large, and of course it's used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general purpose CPUs has been refined heavily by some of the smartest people on the planet.
And for the longest time, improvements, you refer to Moore's law, which is really the improvement of the price-performance of silicon over time, that combined with architectural improvements was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much more, you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10, 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from two, two and a half years, to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well-recognized. And we have to understand that these limits apply not just to general purpose CPUs but they also apply to GPUs. Now, general purpose CPUs do one kind of computation, they're really general and they can do lots and lots of different things. It is actually a very, very powerful engine. And then the problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of a processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than a CPU, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently in the last decade or so they have been used heavily for AI and analytics computations. So now the question is, well, why do you need another specialized engine called the DPU?
Well, I started down this journey almost eight years ago, and I recognized, I was still at Juniper Networks, which is another company that I founded, I recognized that in the data center, as the workload changes to addressing more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what is a data-centric workload. >> Well, I wonder if I could interrupt you for a second. >> Of course. >> Because I want those examples, and I want you to tie it into the cloud, 'cause that's kind of the topic that we're talking about today, and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, a little compute, a little storage, a little networking, and now we have, to your point, all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed, or I think a term you use is disaggregated, network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely happy to do that. So if you look at the architecture of our cloud data centers, the single most important invention was the scale-out of identical or near-identical servers, all connected to a standard IP ethernet network. That's the architecture. Now, the building blocks of this architecture are ethernet switches, which make up the network, IP ethernet switches, and then the servers, all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected inside to the CPU.
Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture is a compute-centric architecture, and the reason it's a compute-centric architecture is, if you open this server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call data-centric workload. And so when you connect SSDs, and hard drives, and GPUs, and everything to the CPU, as well as to the network, you can now imagine the CPU is doing two functions. It's running the applications, but it's also playing traffic cop for the IO. So every IO has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture of CPUs was never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to say 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz, the network was running at three megabits per second. And today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 gigahertz. So you've seen a 600X change in the ratio of IO to compute, just on raw clock speeds.
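The magnitude of that shift can be checked against the quoted speeds. The exact multiplier depends on which clocks you compare, so treat this as a back-of-envelope sanity check on the direction of the trend, not a verification of the specific "600X" figure:

```python
# Back-of-envelope check of the IO-versus-compute shift, using the
# figures quoted above (single core, ignoring core counts and overhead).
cpu_1987_hz = 15e6          # 15 MHz personal computer, 1987
net_1987_bps = 3e6          # 3 Mb/s network, 1987
cpu_now_hz = 2.3e9          # ~2.3 GHz single-core clock
net_now_bps = 100e9         # 100 Gb/s network

net_speedup = net_now_bps / net_1987_bps    # how much faster the network got
cpu_speedup = cpu_now_hz / cpu_1987_hz      # how much faster one core got
shift = net_speedup / cpu_speedup
print(f"network sped up {net_speedup:,.0f}x, CPU core {cpu_speedup:.0f}x")
print(f"IO-to-compute ratio shifted ~{shift:.0f}x")
```

However you slice the numbers, the network outran the single-core CPU by hundreds-fold, which is the point being made: the balance between IO and compute has inverted.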
Now, you can tell me, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. And the DPU actually solves two fundamental problems in cloud data centers. And these are fundamental, there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture the interactions between server nodes are very inefficient. That's number one, problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It's a clean sheet design and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct.
And the workloads that we have today are very data-heavy. You take AI, for example, you take analytics, for example, it's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect, I can use those insights to make money. >> Yeah, this is why I wanted to talk to you, because the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. And the first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud. And there was a security angle there as well; that's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPU with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop. I use this term very specifically because, first let me define what I mean by a data-centric computation, because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently.
And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything. Secondly, this workload is heavily multiplexed, in that there are many, many, many computations that are happening concurrently, thousands of them, okay? That's number two, so a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order. You have to do them in order because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of IO to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, what we did was our architecture consists of very, very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators. These are just some. These are functions that if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there, these accelerators have to be multi-threaded to handle it.
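The four qualifying criteria above can be written down as a simple checklist. The thresholds below are illustrative paraphrases of the spoken description ("thousands of them", "medium to high"), not Fungible's actual taxonomy:

```python
# The four tests for a "data-centric" workload, as laid out above,
# expressed as a checklist. Threshold values are illustrative only.
def is_data_centric(arrives_as_packets, concurrent_ops, stateful,
                    io_to_arithmetic):
    return (arrives_as_packets            # 1. work arrives as network packets
            and concurrent_ops >= 1000    # 2. heavily multiplexed, thousands
            and stateful                  # 3. must be processed in order
            and io_to_arithmetic >= 0.5)  # 4. medium-to-high IO vs arithmetic

# A storage stack: packets in, thousands of concurrent ops, ordered, IO-heavy.
print(is_data_centric(True, 5000, True, 2.0))   # True
# A matrix-multiply batch job: few streams, compute-bound, not packet-driven.
print(is_data_centric(False, 8, False, 0.01))   # False
```

The network, storage, virtualization, and security stacks named earlier all pass every one of these tests, which is why they run so poorly on a general purpose core and why they are the DPU's target.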
We have something like 1,000 different threads inside our DPU to address these many, many computations that are happening concurrently, and handle them efficiently. Now, the thing that is very important to understand is that even though we have hundreds of billions of transistors on a chip, the problem is that those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is we've improved the efficiency of those transistors by 30 times, okay? >> So you can use the real estate much more effectively? >> Much more effectively, because we were not trying to solve a general purpose computing problem. Because if you do that, we're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve the specific problem of data-centric computations and of improving the node to node efficiency. So let me go to point number two, because that's equally important. Because in a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why? Well, the reason is that if I tried to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There's only one solution, which is to use TCP. Well, TCP is well known, it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record, but you're right. >> Very good track record; unfortunately it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP? How would you apply it to the data center?
That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that, and you embed FCP in hardware on top of this standard IP ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool, and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded because they're stuck behind a CPU. Well, once you disaggregate those resources, and we're saying hyper disaggregate, meaning that you can disaggregate almost all the resources. >> And then you're going to reaggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize, and then pool those resources. And the large scale-out companies, AWS, Google, et cetera, they have been doing this disaggregation and pooling for some time, but because they've been using a compute-centric architecture, their disaggregation is not nearly as efficient as we can make it. And they're off by about a factor of three. When you look at enterprise companies, they are off by another factor of four, because the utilization of enterprise infrastructure is typically around 8% overall.
The utilization in the cloud for AWS, and GCP, and Microsoft is closer to 35 to 40%. So there is a factor of almost four to eight which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt you again. So these hyperscalers are smart. They have a lot of engineers, and we've seen them. Yeah, you're right, they're using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but with all the data that's in the cloud, again, our topic today, you would think the hyperscalers are all over this. >> Well, the hyperscalers recognize that the problems that we have articulated are important ones, and they're trying to solve them with the resources and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years. And so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose Arm cores, putting the network interface and a PCI interface, and integrating them all on the same chip and separating them from the CPU. So this does solve a problem. It solves the problem of the data center workload interfering with the application workload, good job, but it does not address the architectural problem of how to execute data center workloads efficiently.
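The utilization figures quoted in this exchange imply the headline gains directly. A quick back-of-the-envelope check (the percentages are the ones stated in the conversation; the arithmetic is just illustrative):

```python
# Network fabric utilization: TCP-based vs. FCP-based, as quoted above.
tcp_util, fcp_util = 0.25, 0.95
print(f"network gain: {fcp_util / tcp_util:.1f}x")   # 3.8x

# Infrastructure utilization: enterprise vs. hyperscale cloud, as quoted above.
enterprise_util = 0.08
cloud_util = (0.35, 0.40)
gains = [u / enterprise_util for u in cloud_util]
print(f"cloud vs. enterprise: {gains[0]:.1f}x to {gains[1]:.1f}x")  # 4.4x to 5.0x
```

The "factor of almost four to eight" in the conversation sits in exactly this neighborhood once the hyperscalers' own remaining inefficiency is layered on top.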
>> Yeah, so it reminds me of, I understand what you're saying, I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing high performance flash storage on a disk system that was designed for spinning disk. It gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, this analogy is close, because okay, so let's take a hyperscaler X, okay? We won't name names. They find that half their CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate out the cores that run this workload. Put them on a different processor, let's say Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put a bunch of Arm cores, they stick a PCI express and a network interface on it, and you port that code from x86 to Arm. Not difficult to do, and it does get you results. And by the way, if this hyperscaler X, shall we call them, is able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation, other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers Microsoft Azure and AWS both announced, I think, that they depreciate servers now over five years instead of four years, it dropped like a billion dollars to their bottom line. But why not just work directly with you guys?
I mean, it seems like the logical play. >> Some of them are working with us. So that's not to say that they're not working with us. All of the hyperscalers recognize that the technology we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is, you can actually build a lump of hardware that is fixed function. But the difficulty is that the place where the DPU sits, which is literally on the boundary of a server and the network, is a place where the functionality needs to be programmable. And so the whole trick is how do you come up with an architecture where the functionality is programmable, but is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because GPUs, and particularly Nvidia, invented CUDA, which is the programming language for GPUs. It made them easy to use, made them fully programmable, without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made them very easy to program. And those computations that I talked about, security, virtualization, storage, and then networking, those four are quintessential examples of data center workloads, and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, Pradeep, and I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding and crypto accelerators. But I want to understand that. I know there's NVMe in here, there's a lot of hardware and software and intellectual property, but we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of, I like this term, a massive disaggregated network. >> Hyper disaggregated.
>> It's hyper disaggregated, even better. And I would say this, and then I've got to go: what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's been a great conversation. >> Thank you for having me. It's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there everybody, we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there. >> Thank you, Dave.
Paul Perez, Dell Technologies and Kit Colbert, VMware | Dell Technologies World 2020
>> Narrator: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World Digital Experience. Brought to you by Dell Technologies. >> Hey, welcome back, everybody. Jeff here with theCUBE, coming to you from our Palo Alto studios with continuing coverage of Dell Technologies World 2020, The Digital Experience. We've been covering this for over 10 years. It's virtual this year, but we still have a lot of great content, a lot of great announcements, and a lot of technology that's being released and talked about. So we're excited. We're going to dig a little deep with our next two guests. First of all, we have Paul Perez. He is the SVP and CTO of the infrastructure solutions group for Dell Technologies. Paul, great to see you. Where are you coming in from today? >> Austin, Texas. >> Austin, Texas, awesome. And joining him, returning to theCUBE many times, Kit Colbert. He is the Vice President and CTO of VMware cloud for VMware. Kit, great to see you as well. Where are you joining us from? >> Yeah, thanks for having me again. I'm here in San Francisco. >> Awesome. So let's jump into it and talk about Project Monterey. You know, it's funny, I was at Intel back in the day, and all of our passwords used to go out and they became like the product names. It's funny how these little internal project names get a life of their own, and this is a big one. And, you know, we had Pat Gelsinger on a few weeks back at VMworld talking about how significant this is and kind of this evolution within the VMware cloud development. And, you know, it's kind of past Kubernetes and past Tanzu and past Project Pacific, and now we're into Project Monterey. So first off, let's start with Kit. Give us kind of the basic overview of what is Project Monterey. >> Yep. Yeah, well, you're absolutely right. What we did last year, we announced Project Pacific, which was really a fundamental rethinking of VMware Cloud Foundation with Kubernetes built in, right?
Kubernetes is a core part of the architecture, and the idea there was really to better support modern applications, to enable developers and IT operations to come together to work collaboratively toward modernizing a company's application fleet. And as you look at companies starting to be successful, they're starting to run these modern applications. What you found is that the hardware architecture itself needed to evolve, needed to update, to support all the new requirements brought on by these modern apps. And so when you're looking at Project Monterey, it's exactly that. It's a rethinking of the VMware Cloud Foundation underlying hardware architecture. And so if you think about it, Project Pacific is really kind of the top half, if you will, the Kubernetes consumption experience, great for applications. Project Monterey comes along as the second step in that journey, really being the bottom half, fundamentally rethinking the hardware architecture and leveraging SmartNIC technology to do that. >> It's pretty interesting, Paul. You know, there's a great shift in this whole move from, you know, infrastructure driving applications to applications driving infrastructure. And then we're seeing, you know, obviously the big move with big data. And again, I think as Pat talked about in his interview with NVIDIA, being at the right time, at the right place with the right technology, and this, you know, kind of groundswell of GPU, now DPU, you know, helping to move those workloads beyond just kind of where the CPU used to do all the work. This is even, you know, kind of taking it another level. You guys are the hardware guys and the solutions guys. As you look at this kind of continuing evolution, both of workloads as well as their infrastructure, how does this fit in? >> Yeah, well, how all this fits in is modern applications and modern workloads require a modern infrastructure, right? And Kit was talking about the infrastructure overlay.
That VMware is awesome at. I was coming at this from the emerging data-centric workloads and some of the implications of that, including silicon diversity that has never been used for computing before, and the need for disaggregation, to be able to combine resources together, as opposed to trying to shoehorn something into a mechanical chassis. And if you do disaggregate, you have to be able to compose on demand. And when we started comparing notes, we realized that we were converging on our trajectories, and we started to team up and partner. >> So it's interesting, because part of the composable philosophy, if you will, is to, you know, break the components of compute, storage, and networking down into as small pieces as possible, and then you can assemble the right amount when you need it to attack a particular problem. But when you're talking about it, it's a whole different level of bringing the right hardware to bear for the solution. When you talk about SmartNICs, and you talk about GPUs and DPUs, data processing units, you're now starting to offload, and even FPGAs, some of these other things offload a lot of work from the core CPU to some of these more appropriate devices. That said, how do people make sure that the right application ends up on the right infrastructure? If it's appropriate, using more of a Monterey-based solution versus more of a traditional one, depending on the workload, how is that going to get all sorted out and routed within the actual cloud infrastructure itself? That's probably back to you, Kit. >> Yeah, sure. So I think it's important to understand kind of what a SmartNIC is and how it works in order to answer that question, because what we're really doing is, to kind of jump right to it, I guess, you know, giving an API into the infrastructure. And this is how we're able to do all the things that you just mentioned. But what is a SmartNIC?
Well, a SmartNIC is essentially a NIC with a general purpose CPU on it, really a whole CPU complex, in fact kind of a whole system right there on that NIC. And so what that enables is a bunch of great things. So first of all, to your point, we can do a lot of offload. We can actually run ESXi on that NIC. We can take a lot of the functionality that we were doing before on the main server CPU, things like network virtualization, storage virtualization, security functionality, and move that all off onto the NIC. And it makes a lot of sense, because really what we're doing when we're doing all those things is looking at different sorts of IO data paths. You know, as the network traffic comes through, looking at doing automatic load balancing, firewalling for security, delivering storage perhaps remotely. And so the NIC is actually a perfect place to put all of these functionalities, right? You can not only move them off the core server CPU, but you can get a lot better performance, because you're now right there on the data path. So I think that's the first really key point, that you can get that offload. But then once you have all of that functionality there, you can start doing some really amazing things. And this ability to expose additional virtual devices onto the PCI bus, this is another great capability of a SmartNIC. So when you plug it in physically into the motherboard, it's a NIC, right? You can see that. And when it starts up, it looks like a NIC to the motherboard, to the system. But then via software, you can have it expose additional devices. It could look like a storage controller, or it could look like an FPGA, really any sort of device. And you can do that not only for the local machine where it's plugged in, but potentially remote machines as well, with the right sorts of interconnects. So what this creates is a whole new sort of cluster architecture.
And that's why we're really so excited about it, because you get all these great benefits in terms of offload, performance improvement, security improvement, but then you get this great ability for very dynamic disaggregation and composability. >> So Kit, how much of it is the routing of the workload to the right place, right, one that's got the right hardware, say it's super data intensive and wants a lot of GPU, versus actually better executing the operation once it gets to the place where it's going to run? >> Yeah, it's a bit of a combination, actually. So the powerful thing about it is that in a traditional world, wherever you run an application, you know, the server that you run it on, that app can really only use the local devices there. Yes, there is some newer stuff like NVMe over fabric where you can remote certain types of storage capabilities, but there's no real general purpose solution to that yet; generally speaking, that application is limited to the local hardware devices. Well, the great part about what we're doing with Monterey and with the SmartNIC technology is that we can now dynamically remote, or expose, remote devices from other hosts. And so wherever that application runs matters a little bit less now, in the sense that we can give it the right sorts of hardware it needs in order to operate. You know, if you have, let's say, a few machines with an FPGA, normally if you needed that FPGA it had to run locally, but now it can actually be accessed remotely, and you can better balance out things like compute requirements versus, you know, specialized accelerator requirements. And so I think what we're looking at, especially in the context of VMware Cloud Foundation, is bringing that all together. We can, through the scheduling, figure out the best host for it to run on based on all these considerations.
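The placement logic Kit describes, pick the best host and then remote whatever devices it lacks, can be illustrated with a toy scheduler. All names here are hypothetical, invented for illustration; this is not VMware's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    devices: set = field(default_factory=set)  # devices physically present on this host

def place(required, hosts):
    """Pick the host with the most required devices locally,
    then satisfy the rest by remoting from peers that expose them."""
    best = max(hosts, key=lambda h: len(required & h.devices))
    missing = required - best.devices
    remoted = {d: h.name for d in missing for h in hosts if d in h.devices}
    return best.name, remoted

hosts = [
    Host("esx01", {"nvme"}),
    Host("esx02", {"fpga"}),
    Host("esx03", {"nvme", "gpu"}),
]
# A workload that needs NVMe locally plus an FPGA somewhere on the fabric:
print(place({"nvme", "fpga"}, hosts))
```

A real scheduler would also weigh CPU, memory, interconnect latency, and policy, but the shape of the decision, local placement plus remoted devices, is the same.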
And if we are missing, let's say, a physical device that it needs, well, we can remote that and sort of deal with that missing gap there. >> Right, right. That's great. Paul, I want to go back to you. You just talked about, you know, kind of coming at this problem from a data-centric point of view, and you're running infrastructure, and you're the poor guy that's got to catch all of it, the giant exponential curves up and to the right on the data flow and the data quantity. How is that impacting the way you think about infrastructure, and designing infrastructure, and changing infrastructure, and kind of future proofing infrastructure when, you know, just around the corner is 5G and IoT, and, oh, you ain't seen nothing yet in terms of the data flow? >> Yeah. So I come at this from two angles. One that we talked about briefly is the evolution of the workloads themselves. The other angle, which is just as important, is the operating model that customers are wanting to evolve to. And in that context, we thought a lot about cloud as an operating model, not necessarily a destination, right? So the way we laid out what Kit was talking about is that in data center computing, you have an operational control plane and a data plane. The data plane runs on optimized silicon: GPUs, FPGAs, offload engines. And the control plane can run on something like Arm. When I think about SmartNICs, they have Arm cores, so you can implement some data plane and some control plane, and they can also be the gateway. Because, you know, you've talked about composability. What has been done up until now is only a first sprint, right? We're carving software defined infrastructure out of predefined hardware blocks. What we're talking about is making, you know, GPUs resident on a fabric, persistent memory resident on a fabric, NVMe over fabric, and being able to tile computing topologies on demand to realize an application's intent.
And we call that intent-based computing. >> Right. Well, just to follow up on that too, as the, you know, cloud as an attitude or an operating model or whatever you want to say, you know, not necessarily a place or a thing, has changed things. I mean, how has that made you shift your infrastructure approach? Cause you've got to support, you know, old school, good old data centers. We've got, you know, some stuff running on public clouds. And then now you've got hybrid clouds and you have multi clouds, right? So we know, you know, you're out in the field, that people have workloads running all over the place. But they've got to control it, and they've got compliance issues, and they've got a whole bunch of other stuff. So from your point of view, as you see the desire for more flexibility, the desire for more infrastructure-centric support for the workloads they want to run, and the increasing amount of those that are more data-centric as we move to hopefully more data driven decisions, how has it changed your strategy? And what does it mean to partner and have a real nice formal relationship with the folks over at VMware? >> Well, I think that regardless of how big a company is, it's always prudent, as I say when I approach my job, right, architecture is about balance and efficiency, and it's about reducing contention. And we like to leverage industry R&D, especially in cases where one plus one equals more than two, right? In the case of Project Monterey, for example, one of the collaboration areas is in improving the security model, and being able to provide more air gap isolation, especially when you consider that enterprises want to behave as service providers to their own companies. And therefore this is important.
And because of that, I think that there's a lot of things that we can do between VMware and Dell, blending hardware and, for example, assets like NSX, in a different way that will give customers higher scalability and performance and more control. You know, beyond VMware and Dell EMC, I think that we're partnering with, obviously, the SmartNIC vendors, because the SmartNICs are the gateway to these architectures, but also with companies that are innovating in data center computing, for example, NVIDIA. >> Right. Right. >> And I think that what we're seeing is, while, you know, NVIDIA has done an awesome job of targeting their capability at AI/ML types of workloads, what we realize is that applications today depend on platform services, right? And up until recently, those platform services have been databases, messaging, APIs, Active Directory. Moving forward, I think that within five years, most applications will depend on some form of AI/ML service. So I can see an opportunity to go mainstream with this. >> Right. Right. Well, it's great you bring up NVIDIA, and I'm just going to quote one of Pat's lines from his interview. And he talked about Jensen from NVIDIA actually telling Pat, hey Pat, I think you're thinking too small. I love it. You know, let's do the entire AI landscape together and make AI and enterprise class workloads in Tanzu, you know, first class citizens. So I love the fact, you know, Pat's been around a long time, industry veteran, but still kind of accepted the challenge from Jensen to really elevate AI and machine learning via GPUs to first class citizen status. And the other piece obviously coming up is edge. So, you know, it's a nice shot of adrenaline, and Kit, I wonder if you can share your thoughts on that, you know, in kind of saying, hey, let's take it up a notch, a significant notch, by leveraging a whole other class of compute power within these solutions. >> Yeah.
So, I mean, I'll go real quick. I mean, it's funny, cause like not many people really ever challenge Pat to say he doesn't think big enough, cause usually he's always blown us away with what he wants to do next. But I think it's, you know, it's good though. It's good to keep us on our toes and push us a bit, right? All of us within the industry. And so I think, a couple of things. You have to go back to your previous point around cloud as a model. I think that's exactly what we're doing, trying to bring cloud as a model even on prem. And it's a lot of these kinds of core hardware architecture capabilities that enable that, the biggest one in my mind just being enabling an API into the hardware, so the applications can get what they need. And going back to Paul's point, this notion of these AI and ML services, you know, they have to be rooted in the hardware, right? We know that in order for them to be performant, for them to run, to support what our customers want to do, we need to have that deeply integrated into the hardware, all the way up. But that also becomes a software problem. Once we've got the hardware solved, once we get that architecture locked in, how can we as easily as possible, as seamlessly as possible, deliver all those great capabilities, software capabilities? And so, you know, you look at what we've done with the NVIDIA partnership, things around the NVIDIA GPU Cloud, and really bringing that to bear. And so then you start having this really great full stack integration all the way from the hardware, a very powerful hardware architecture that, you know, again, is driven by API, the infrastructure software on top of that, and then all these great AI tools, tool chains, capabilities, with things like the NVIDIA NGC. So that's really, I think, where the vision's going. And we've got a lot of the basic parts there, but obviously a lot more work to do going forward.
>> I would say that, you know, we had a dream, we wanted this journey to happen very fast. Initially we're offloading infrastructure services, so there's no contention with customers' full workload applications, and also enabling how productive it is to get at the data. Over time, you have to have sufficient control over a wide area. There's an opportunity there when you think about the progression from bare metal to VMs (conversation fading). Environments are way more dynamic and more distributed, right? And they expect hardware that can be as dynamic and composable to suit their needs. And I think that's where we're headed. >> Right. So let me throw a monkey wrench in, in terms of security, right? So now this thing is much more flexible, it's much more software defined. How is that changing the way you think about security, basic security throughout the stack? I'll go to you first, Paul. >> Yeah, so it actually enables a lot of really powerful things. So first of all, from an architecture and implementation standpoint, you have to understand that we're really running two copies of ESXi on each physical server. Now, we've got the one running on the x86 side, just like normal, and now we've got one running on the SmartNIC as well. And so, as I mentioned before, we can move a lot of that networking, security, et cetera, capabilities off to the SmartNIC. And so where this is going is toward what we call a zero trust security architecture, this notion of having really defense in depth at many different layers and many different areas. While obviously the hypervisor and the virtualization layer provide a really strong level of security, even when we were doing it completely on the x86 side, now that we're running on a SmartNIC, that's additional defense in depth, because the x86 ESXi doesn't have direct access to the ESXi.
So the ESXi running on the SmartNIC can be in this kind of more well-defended position. Moreover, now that we're running the security functionality directly on the data path, in the SmartNIC, we can do a lot more with it. We can run a lot deeper analysis, we can talk about AI and ML, bring a lot of those capabilities to bear here to actually improve the security profile. And so finally, I'd say there's this notion of distributed security as well: you don't want to just have these individual points on the physical network, you actually distribute the security policies and enforcement to everywhere a server is running, everywhere a SmartNIC is, and that's what we can do here. And so it really takes a lot of what we've been doing with things like NSX, but now connects it much more deeply into hardware, allowing for better performance and security. >> A common attack method is to intercept the boot of the physical server. And, you know, I'm actually very proud of our team, because the US National Security Agency recently published a white paper on best practices for secure boot, and they cite our implementation of secure boot as the reference standard. Moving forward, imagine an environment where, even if you gain control of the server, it doesn't allow you to change the BIOS or update it. So we're moving the root of trust to be in that air-gapped domain that Kit talked about, and that gives us way more capability for zero-trust operations. Right. >> Right, right. Paul, I got to ask you, I had Sam Burd on the other day, your peer who runs the PC group. >> I'm telling you, he is not a peer. He's a little bit higher up. >> Higher than you. Okay. Well, I just promoted you, so that's okay. But it's really interesting. 
'Cause we were talking about, it was literally like 10 years ago, the death-of-the-PC article that came out when Apple introduced the tablet, and he talked about what phenomenal devices PCs continue to be and how they evolve. And it's funny how that now dovetails with this whole edge conversation. People don't necessarily think of a PC as a piece of the edge, but it is a great piece of the edge. So from an infrastructure point of view, to have that kind of presence within the PCs, and potentially that intelligence, is kind of a whole other layer of interaction with the users, and an opportunity to define how they work with applications and prioritize applications. I just wonder if you can share how nice it is to have that in your back pocket, to know that you've got a whole other layer of visibility and connection with the users beyond simply the infrastructure. >> So within the company we've developed a framework that spans core data centers, enterprise edge and IoT, and off-premise; it is a multicloud world. And within that framework, we consider our Client Solutions Group products to be part of the edge. And we see a lot of benefit. I'll give an example of a healthcare company that wants to do real-time analytics, regardless of whether it's on a laptop or maybe moved into a backend data center, whether it's at a hospital clinic or a patient's home. It gives us a broader innovation surface and gets us there a little sooner. And a lot of people may not appreciate that one of the most important functions within the company, I consider to be the experience design team. So being able to design user flows and customer experience, looking at all those points of use, is invaluable. >> That's great. That's great. So we're running out of time. I want to give you each the last word; you've both been in this business for a long time. 
This is brand new stuff, right? Containers aren't new, but Kubernetes is still relatively new and exciting, and Project Pacific was relatively new, and now Project Monterey. But you guys are multi-decade veterans in this thing. As you look forward, what does this moment represent compared to some of the other shifts that we've seen in IT? Generally, but also in terms of consumption of compute and this application-centric world that just continues to grow. As software is eating everything, we know it, you guys live it every day. Where are we now, and what do you see, maybe not too far out, over the next couple of years within the Monterey framework? And then anything else you'd generally add as well. Paul, why don't we start with you? >> Well, on a personal level, modesty aside, I've had a long string of very successful endeavors in my career. When I came back a couple of years ago, one of the things that I told Jeff, our vice chairman, is that this is a big canvas and I intend to paint my masterpiece, and I think Monterey and what we're doing in support of Monterey is part of that. I think you will see our initial approach focus on the core data center, but I can tell you that we know how to express it, and we know how to express it even in a multicloud world. So I'm very excited, and I know that I'm going to be busy for the next few years. (giggling) >> Kit, to you. >> Yeah. So, you know, it's funny, you talk to people about SmartNICs, and especially those folks that have been around for a while, and what you hear is, hey, people were talking about SmartNICs 10 years ago, 20 years ago, that sort of thing, and then they kind of died off. So what's different now? And I think the big difference now is a few things. First of all, the core technology of SmartNICs has dramatically improved. 
We now have a powerful software infrastructure layer that can take advantage of it. And finally, applications have a really strong need for it, again, with all the things we've talked about, the need for offload. So I think there are some real fundamental shifts that have happened over the past decade, let's say, that have driven the need for this. And so this is something that I believe strongly is here to last. Both ourselves at VMware, as well as Dell, are making a huge bet on this. And not only is it good for customers, it's actually good for all the operators as well. So whether this is part of VCF that we deliver to customers for them to operate themselves, just like they always have, or part of our own cloud solutions, things like VMware Cloud on Dell EMC, this is going to be a core part of how we deliver our cloud services and infrastructure going forward. So we really do believe this is kind of a foundational transition that's taking place. And as we talked about, there is a ton of additional innovation that's going to come out of it. So I'm really, really excited for the next few years, because I think we're just at the start of a very long and very exciting journey. >> Awesome. Well, thank you both for spending some time with us and sharing the story, and congratulations. I'm sure a whole bunch of work from a whole bunch of people went into getting where you are now. And as you said, Paul, the work has barely begun. So thanks again. All right, he's Paul, he's Kit, I'm Jeff. You're watching theCUBE's continuing coverage of Dell Technology World 2020, the digital experience. Thanks for watching. We'll see you next time. (upbeat music)
Pat Gelsinger, VMware | VMworld 2020
>> Announcer: From around the globe, it's theCUBE with digital coverage of VMworld 2020 brought to you by VMware and its ecosystem partners. >> Hello, welcome back to theCUBE's coverage of VMworld 2020. This is theCUBE virtual with VMworld 2020 virtual. I'm John Furrier, your host of theCUBE with Dave Vellante. It's our 11th year covering VMware. We're not in-person, we're virtual but all the content is flowing. Of course, we're here with Pat Gelsinger, the CEO of VMware who's been on theCUBE, all 11 years. This year virtual of theCUBE as we've been covering VMware from his early days in 2010 when theCUBE started, 11 years later, Pat, it's still changing and still exciting. Great to see you, thanks for taking the time. >> Hey, you guys are great. I love the interactions that we have, the energy, the fun, the intellectual sparring and of course the audiences have loved it now for 11 years, and I look forward to the next 11 that we'll be doing together. >> It's always exciting 'cause we have great conversations, Dave, and I like to drill in and really kind of probe and unpack the content that you're delivering at the keynotes, but also throughout the entire program. It is virtual this year which highlights a lot of the cloud native changes. Just want to get your thoughts on the virtual aspect, VMworld's not in-person, which is one of the best events of the year, everyone loves it, the great community. It's virtual this year but there's a slew of content, what should people take away from this virtual VMworld? >> Well, one aspect of it is that I'm actually excited about is that we're going to be well over 100,000 people which allows us to be bigger, right? You don't have the physical constraints, you also are able to reach places like I've gone to customers and maybe they had 20 people attend in prior years. This year they're having 100. 
They're able to have much larger teams, also like some of the more regulated industries where they can't necessarily send people to events like this, the international audience. So just being able to spread the audience much more. A digital foundation for an unpredictable world, and man, what an unpredictable world it has been this past year. And then key messages, lots of key product announcements, technology announcements, partnership announcements, and of course all of the VMworld hands-on labs and interactions that we'll be delivering virtually. You come to VMware because the content is so robust and it's being delivered by the world's smartest people. >> Yeah, we've had great conversations over the years, and we've talked about hybrid cloud since, I think, 2012. A lot of the stuff I look back at in the videos was early on, we were picking out all these waves, but there was that moment four years ago or so, maybe even three or four, I can't even remember, it seems like yesterday. You gave the seminal keynote and you said, this is the way the world's going to happen. And since that keynote, I'll never forget, it was in Moscone, you guys have been performing extremely well, both on the business front as well as making technology bets, and it's paying off. So what's next? You've got the cloud, cloud scale, is it space, is it cyber? All these things are going on. What is the next wave that you're watching, and what can people extract out of VMworld this year about this next wave? >> Yeah, one of the things I really am excited about is, I went to my buddy Jensen and I said, boy, we're doing this work in SmartNICs, we really like to work with you, and maybe some things to better generalize the GPU. And Jensen challenged me. Now usually, I'm the one challenging other people with bigger visions. This time Jensen said, "Hey Pat, I think you're thinking too small." 
"Let's do the entire AI landscape together, and let's make AI an enterprise-class workload from the data center to the cloud and to the Edge." And so, "I'm going to bring all of my AI resources and make VMware and Tanzu the preferred infrastructure to deliver AI at scale. I need you guys to make the GPUs work like first-class citizens in the vSphere environment, because I need them to be truly democratized for the enterprise, so that it's not some specialized AI development team, it's everybody being able to do that." And then we're going to connect the whole network together in a new and profound way with our Monterey program as well, being able to use the SmartNIC, the DPU, as Jensen likes to call it. So now with CPU, GPU and DPU all being managed through a distributed architecture of VMware, this is exciting. So this is one in particular where I think we are now re-architecting the data center, the cloud and the Edge. And this partnership is really a central point of that. >> Yeah, the NVIDIA thing's huge, and I know Dave probably has some questions on that, but I'll ask you a question, because a lot of people ask me: is that just a hardware deal? Talking about SmartNICs, you talk about data processing units. It sounds like a motherboard in the cloud, if you will, but it's not just hardware. Can you talk about the software piece? Because again, NVIDIA is known for GPUs, we all know that, but we're talking about AI here, so it's not just hardware. Can you just expand and share what the software aspect of all this is? >> Yeah well, NVIDIA has been investing in their AI stack, and it's one of those where I say, this is Edison at work, right? The harder I work, the luckier I get. And NVIDIA was lucky that their architecture worked much better for the AI workload. But it was built on two decades of hard work in building a parallel data center architecture. And they have built a complete software stack for all the major AI workloads running on their platform. 
All of that is now coming to vSphere and Tanzu; that is a rich software layer across many vertical industries. And we'll talk about a variety of use cases. One that we highlight at VMworld is the University of California, San Francisco partnership, UCSF, one of the world's leading research hospitals. Some of the current vaccine use cases as well, and the financial use cases for threat detection and trading benefits. It really is about how we bring that rich software stack, a decade and a half of work, to the VMware platform, so that now every developer and every enterprise can take advantage of this at scale. That's a lot of software. So in many respects, yeah, there's a piece of hardware in here, but the software stack is even more important. >> So while we're on the sort of NVIDIA and Arm piece, there are these really interesting alternative processing models, and I wonder if you could comment on the implications for AI inferencing at the Edge. It's not just processor implications, it's storage, it's networking, it's really a whole new fundamental paradigm. How are you thinking about that, Pat? >> Yeah, we've thought about it as three aspects, what we said, three problems that we're solving. One is the developer problem, where we said, now you develop once, right? And the developer can now say, "Hey, I want to have this new AI-centric app, and I can develop it and it can run in the data center, on the cloud or at the Edge." Secondly, my operations team can operate this just like I do all of my infrastructure, and now it's VMs, containers and AI applications. And third, and this is where your question really comes to bear most significantly, is data gravity. Right, these data sets are big. Some of them need to be very low latency as well, and they also have regulatory issues. 
And if I have to move these large regulated data sets to the cloud, boy, maybe I can't do that generally for my apps, or if I have low-latency-heavy apps at the Edge, I can't pull them back to the cloud or to my data center. And that's where the uniform architecture and aspects of the Monterey program come in, where I'm able to take advantage of the network and the SmartNICs that are being built, but also able to fully address the data gravity issues of AI applications at scale. Because in many cases, I'll need to do the processing, both the learning and the inference, at the Edge as well. So that's a key part of our strategy here with NVIDIA, and I do think it's going to unlock a new class of apps, because when you think about AI and containers, what am I using them for? Well, it's the next generation of applications. A lot of those are going to be Edge and 5G-based, so very critical. >> We've got to talk about security now too. I'm going to pivot a little bit here, John, if it's okay. Years ago, you said security is a do-over, you said that on theCUBE, and it stuck with us. But there's been a lot of complacency, kind of if it ain't broke, don't fix it, but COVID kind of broke it. And so you see three mega trends: you've got cloud security, where you see Zscaler rocketing; you've got identity access management with Okta, which I think is a customer of yours; and then you've got endpoint, where you're seeing CrowdStrike explode. You guys paid 2.7 billion, I think, for Carbon Black, yet CrowdStrike has this huge valuation. That's a mega opportunity for you guys. What are you seeing there? How are you bringing that all together? You've got NSX components, EUC components, you've got security throughout your entire stack. How should we be thinking about that? >> Well, one of the announcements that I am most excited about at VMworld is the release of Carbon Black Workload. 
'Cause we said we're going to take those Carbon Black assets and combine them with Workspace ONE, build them into NSX, make them part of Tanzu, and make them part of vSphere. And Carbon Black Workload is literally the vSphere embodiment of Carbon Black in an agentless way. So now you don't need to insert new agents or anything; it becomes part of the hypervisor itself, meaning that there's no attack surface available for the bad guys to pursue. But not only is this an exciting new product capability, we're going to make it free, right? What I'm announcing at VMworld is that everybody who uses vSphere gets Carbon Black Workload for free, for an unlimited number of VMs, for the next six months. And as I said in the keynote, today is a bad day for cyber criminals. This is what intrinsic security is about: making it part of the platform. Don't add anything on, just click the button and start using what's built into vSphere. And we're doing that same thing at the networking layer, that's the Lastline acquisition. We're going to bring that same workload kind of characteristic into the container, that's why we did the Octarine acquisition, and we're releasing the integration of Workspace ONE with the Carbon Black client, and that's going to be the differentiator. And by the way, CrowdStrike is doing well, but guess what? So are we, and both of us are eliminating the rotting dead carcasses of the traditional AV approach. So there's a huge market for both of us to go pursue here. So a lot of great things in security, and as you said, we're just starting to see that shift of the industry occur that I promised last year on theCUBE. >> So it'd be safe to say that you're a cloud native and a security company these days? >> Yeah well, absolutely. 
And the bigger picture for us is that we're this critical infrastructure layer for the Edge, for the cloud, for the Telco environment and for the data center, from every endpoint, every application, every cloud. >> So, Pat, I want to ask you a question we got from the community. I'm going to throw it out to you, because a lot of people look at Amazon and the cloud and they say, okay, we didn't see it coming, or we saw it coming, we saw it scale, all the benefits coming out of cloud are well documented. The question for you is: what's next after cloud? As people start to rethink, especially with COVID highlighting all the exposed infrastructure and software out there, they want to be modern, they want the modern apps. What's next after cloud? What's your vision? >> Well, with respect to cloud, we are taking customers on the multicloud vision, right, where you truly get to say, oh, this workload I want to be able to run with Azure, with Amazon, I need to bring this one on-premise, I want to run that one hosted. I'm not sure where I'm going to run that application, so develop it once and then run it at the best place. And that's what we mean by our hybrid multicloud strategy: being able for customers to really have cloud flexibility and choice. And even as our preferred relationship with Amazon is going super well, we're seeing a real uptick, we're also happy that the Microsoft Azure VMware service is now GA. Then in market are the Google, Oracle, IBM and Alibaba partnerships, and the much broader set of VMware Cloud partner programs. So the future is multicloud. Furthermore, it's then how do we do that in the Telco network for the 5G build-out, the Telco cloud, and how do we do that for the Edge? 
And I think that might be sort of the granddaddy of all of these, because increasingly in a 5G world, we'll be enabling Edge use cases, we'll be pushing AI to the Edge like we talked about earlier in this conversation, we'll be enabling these high-bandwidth, low-latency use cases at the Edge, and we'll see more and more of the smart embodiments: smart city, smart street, smart factory, autonomous driving. All of those need these types of capabilities. >> Okay. >> So there's hybrid and there's multi, you just talked about multi. On hybrid, our data partner ETR does quarterly surveys, and we're seeing a big uptick in VMware Cloud on AWS; you guys mentioned that on your call. We're also seeing VMware Cloud, VMware Cloud Foundation and the other elements clearly get a big uptick. So how should we think about hybrid? It looks like an extension of on-prem, maybe not incremental, maybe a share shift, whereas multi looks incremental, but today multi is really just running on multiple clouds, with a vision toward incremental value. How are you thinking about that? >> Yeah, so clearly, the idea of multi is truly multiple clouds. Am I taking advantage of multiple clouds, being my private clouds, my hosted clouds and of course my public cloud partners? We believe everybody will be running a great private cloud, picking a primary public cloud and then a secondary public cloud. Hybrid then is saying, which of those infrastructures are identical, so that I can run them without modifying any aspect of my infrastructure, operations or applications? And in today's world, where people are wanting to accelerate their move to the cloud, a hybrid cloud is spot-on with their needs. Because if I have to refactor my applications, it's a couple million dollars per app and I'll see you in a couple of years. If I can simply migrate my existing application to the hybrid cloud, what we're consistently seeing is the time is 1/4 and the cost is 1/8 or less. Those are powerful numbers. 
And if I need to exit a data center, I want to be able to move to a cloud environment and access more of those native cloud services. Wow, that's powerful. And that's why for seven years now, we've been preaching that hybrid is the future, it is not a way station to the future. And I believe that more fervently today than when I declared it seven years ago. So we are firmly on that path, enabling a multi and hybrid cloud future for all of our customers. >> Yeah, you addressed that on theCUBE back in 2013. I remember that interview vividly, the "not a way station" answer, I got hammered on that one. Thank you, Pat, for clarifying that going back seven years. I love the vision, you always catch the right wave, it's always great to talk to you. But I got to ask you about these initiatives that you're seeding clearly. Last year, a year and a half ago, Project Pacific came out, almost like a guiding directional vision. Tanzu then put some meat on the bone, and now you guys have that whole cloud native initiative starting to flower up, thousands of flowers blooming. This year, Project Monterey is announced, same kind of situation, you're showing out the vision. What are the plans to take that to the next level? Take a minute to explain what Project Monterey means and how you see it filling out. I'm assuming it's going to take the same trajectory as Pacific. >> Yeah, Monterey is a big deal. This is re-architecting the core of vSphere, and it really is ripping apart the IO stack from the intrinsic operation of vSphere and ESXi itself. Because in many ways with IO, we've always been leveraging the NIC, essentially virtual NICs, but we never leveraged the resources of the network adapters themselves in any fundamental way. And as you think about SmartNICs, these are powerful resources now, where they may have four, eight, 16, even 32 cores running in the SmartNIC itself. So how do I utilize that resource? But it also sits in the right place. 
In the sense that it is the network traffic cop, it is the place to do security acceleration, it is the place that enables IO bandwidth optimization across increasingly rich applications, where the workloads, the data and the latency get more important, both in the data center and across data centers, to the cloud and to the Edge. So this re-architecting is a big deal. We announced the three partners, Intel, NVIDIA Mellanox and Pensando, that we're working with, and we'll begin the deliveries of this as part of the core vSphere offerings beginning next year. So it's a big re-architecting, these are our key partners, we're excited about the work that we're doing with them, and then of course our system partners like Dell and Lenovo, who've already come forward and said, "Yeah, we're going to be bringing these to market together with VMware." >> Pat, personal question for you. I want to get your personal take. Your career goes back to Intel, you've seen it all, but the shift is consumer to enterprise, and you look at the recent Snowflake IPO, the biggest ever in the history of Wall Street. It's an enterprise data company, and the enterprise is now relevant. The enterprise feels consumery now, we talked about consumerization of IT years and years ago. But now more than ever, the hottest financial IPO is enterprise, and you guys are enterprise. You did enterprise at Intel (laughing), you know the enterprise, you're doing it here at VMware. The enterprise is the consumer now, with cloud and all this new landscape. What is your view on this? Because you've seen the waves, you have the historical perspective. Consumer was the big thing, now it's enterprise. How do you make sense of it? Because it's now mainstream, what's your view? >> Well, first I do want to say congratulations to my friend Frank and the extraordinary Snowflake IPO. 
And by the way, they use VMware, so not only do I feel a sense of ownership, because Frank used to work for me for a period of time, but they're also a customer of ours. So go Frank, go Snowflake, we're excited about that. But there is this episodic nature to the industry, where for a period of time it is consumer-driven, and CES used to be the hottest ticket in the industry for technology trends. But as you say, it has now shifted to be more business-centric, and I've said this very firmly, for instance, in the case of 5G, where I do not see consumer being the driver. A faster video or a better Facebook isn't going to be why I buy 5G. It's going to be driven by more business use cases, where the latency, the security and the bandwidth will enable radically differentiated new applications. So we do think that we're in a period of time, and I expect that it's probably at least the next five years, where business will be the technology driver in the industry. And then probably, hey, there'll be a wave of consumer innovation, and I'll have to get my black turtlenecks out again and start trying to be cool, but I've always been more of an enterprise guy, so I like the next five to 10 years better. I'm not cool enough to be a consumer guy, and maybe my age is now starting to conspire against me as well. >> Hey, Pat, I know you've got to go, but a quick question. So you gave guidance, pretty good guidance actually. I wonder, have you and Zane come up with a new algorithm to deal with all this uncertainty, or is it kind of back to old-school gut feel? >> (laughing) Well, as we thought about the year, as we came into the year, and obviously COVID smacked everybody, we laid out a model. We looked at various industry analysts, what we call the swoosh model, right? Q2, Q3 and Q4 recovery, Q1 more so, Q2 more so. And basically, we built our own theories behind that, we tested against many analyst perspectives, and we had Vs and we had Ws and we had Ls and so on. 
We picked what we thought was really grounded in the best data that we could, put in our own analysis, and we have substantial data of our own customers' usage, et cetera, and picked the model. And like any model, you put a touch of conservatism against it, and we've been pretty accurate. And I think with good data and good thoughtfulness, we've been able to take a view and then just consistently manage against it, and everything that we said when we did that back in March has incrementally proven out to be accurate. And some are saying, "Hey, things are coming back more quickly," and then, "Oh, we're starting to see the fall numbers climb up a little bit." Hey, we don't think this goes away quickly. There are still a lot of secondary things to get flushed through the various economies as stimulus starts tapering off, small businesses are more impacted, and we still don't have a widely deployed vaccine, and I don't expect we will have one until the second half of next year. Now there's the silver lining to that, as we said, which means that these changes, these faster-to-the-future shifts in how we learn, how we work, how we educate, how we care for, how we worship, how we live, they will get more and more cemented into the new normal, relying more and more on the digital foundation. And we think ultimately that has extremely good upsides for us long-term, even as it's very difficult to navigate in the near term. And that's why we are just raving optimists about the long-term benefits of a more and more digital foundation for the future of every industry, every human, every workforce, every hospital, every educator. They are going to become more digital, and that's why I think, going back to the last question, this is a business-driven cycle. We're well positioned, and we're thrilled for all of those who are participating in VMworld 2020. This is a seminal moment for us and our industry. 
>> Pat, thank you so much for taking the time. It's an enabling model, it's what platforms are all about, you get that. My final parting question for you is whether you're a VC investing in startups or a large enterprise who's trying to get through COVID with a growth plan for that future. What does a modern app look like, and what does a modern company look like in your view? >> Well, a modern company would be one where, instead of having a lot of people looking down at infrastructure, the bulk of my IT resources are looking up, building apps. Those apps are using modern CI/CD and data pipeline approaches, built for a multicloud embodiment, right? And of course VMware is the best partner that you possibly could have. So if you want to be modern and cool on the front end, come and talk to us. >> All right, Pat Gelsinger, the CEO of VMware, here on theCUBE for VMworld 2020 virtual, here with theCUBE virtual. Great to see you virtually, Pat. Thanks for coming on, thanks for your time. >> Hey, thank you so much, love to see you in person soon enough but this is pretty good. >> Yeah. >> Thank you, Dave. Thank you so much. >> Okay, you're watching theCUBE virtual here for VMworld 2020, I'm John Furrier, Dave Vellante with Pat Gelsinger, thanks for watching. (gentle music)
Sujal Das, Netronome - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Announcer: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> And we're back. I'm Stu Miniman with my cohost, John Troyer, getting to the end of day two of three days of coverage here at the OpenStack Summit in Boston. Happy to welcome to the program Sujal Das, who is the chief marketing and strategy officer at Netronome. Thanks so much for joining us. >> Thank you. >> Alright, so we're getting through it. You know, John and I have been digging into, you know, really where OpenStack is, talking to real people deploying real clouds, where it fits into the multicloud world. You know, networking is one of those things that took a little while to kind of shake out. Seems like every year we talk about Neutron and all the pieces that are there. But talk to us. Netronome, we know you guys make SmartNICs. You've got obviously some hardware involved when I hear a NIC, and you've got software. What's your involvement in OpenStack, and what sort of things are you doing here at the show?
>> Yeah, so I spent a good part of my career working on that part of the stack, if you will, and the balance is always, like, right, what do you build into the hardware? Do I have accelerators? Usually in the short term hardware can take care of it, but in the long term, if you follow the development cycles, software tends to win, so, you know. Where are we with where the functionality is, and what differentiates what you offer compared to others in the market? >> Absolutely. So we see a significant trend in terms of the role of a coprocessor to the x86 or evolving ARM-based servers, right, and the workloads are shifting rapidly. You know, with the need for higher performance, more efficiency in the server, you need coprocessors. So we make, essentially, coprocessors that accelerate networking. And that sits next to an x86 on a SmartNIC. The important differentiation we have is that we are able to pack a lot of cores on a very small form factor hardware device. As many as 120 cores that are optimized for networking. And by being able to do that, we're able to deliver very high performance at the lowest cost and power. >> Can you speak to us, just, you know, what's the use case for that? You know, we talk about scale and performance. Who are your primary customers for this? Is this kind of broad spectrum, or, you know, certain industries or use cases that pop out? >> Sure, so we have three core market segments that we go after, right? One is the NFV infrastructure market, where we see a lot of OpenStack use, for example. We also have the traditional cloud data center providers, who are looking at accelerating with SmartNICs. And lastly the security market; that's kind of been our legacy market that we have grown up with. With security kind of moving away from appliances to more distributed security, those are the key three market segments that we go after.
>> The irony is, in this world of cloud, hardware still matters, right? Not only does hardware matter, like, you're packing a huge number of cores into a NIC, so that hardware matters. But one of the reasons that it matters now is because of the rise of this latest generation of solid-state storage, right? People are driving more and more IO. What are the trends that you're seeing in terms of storage IO, and IO in general, in the data center? >> Absolutely. So I think the large data centers of the world, they showed the way in terms of how to do storage, especially with SSDs: what they call disaggregated storage, essentially being able to use the storage on each server and being able to aggregate those together into a pool of storage resources, and it's being called hyperconverged. I think companies like Nutanix have found a lot of success in that market. What I believe is going to happen in the next phase is hyperconvergence 2.0, where we're going to go beyond storage, which essentially addressed TCO and being able to do more with less; the next level would be hyperconvergence around security, where you'd have distributed security in all servers, and also telemetry. So basically your storage appliance is going away with hyperconvergence 1.0, but with the next generation of hyperconvergence we'd see the security appliances and the monitoring appliances sort of going away and becoming all integrated in the server infrastructure, to allow for better service levels and scalability. >> So what's the relationship between distributed security and then the need for more bandwidth at the back plane? >> Absolutely. So when you move security into the server, the processing requirements in the server go up. And typically with all security processing, it's a lot of what's called flow processing or match-action processing.
And those are typically not suitable for a general purpose server like the ARM or the x86, but that's where you need specialized coprocessors, kind of like the world of GPUs doing well in artificial intelligence applications. I think it's the same example here. When you have security, telemetry, et cetera being done in each server, you need special purpose processing to do that at the lowest cost and power. >> Sujal, you mentioned that you've got solutions in the public cloud. Are those the big hyperscale guys? Is it service providers? I'm curious if you could give a little color there. >> Yes, so these are both tier one and tier two service providers in the cloud market, as well as the telco service providers, more on the NFV side. But we see a common theme here in terms of wanting to do security and things like telemetry. Telemetry is becoming a hot topic. Something called in-band telemetry is what we are actually demonstrating at our booth, and also speaking about with some of our partners at the show, such as Mirantis, Red Hat, and Juniper. Doing all of these on each server is becoming a requirement. >> When I hear you talk, I think about here at OpenStack, we're talking about the hybrid or multi-cloud world, and especially something like security and telemetry: I need to handle my data center, I need to handle the public cloud, and even when I start to get into that IoT edge environment, we know that the surface area for attack just gets orders of magnitude larger, therefore we need security that can span across those. Are you touching all of those pieces? Maybe give us a little bit of a dive into it. >> Absolutely, I think a great example is DDoS, right, distributed denial of service attacks. And today, you know, you have these kinds of attacks happening from computers, right. Look at the environment where you have IoT, right: you have tons and tons of small devices that can be hacked and could flood attacks into the data center.
Look at the autonomous car or self-driving car phenomenon, where each car is equivalent to about 2,500 Internet users. So the number of users is going to scale so rapidly, and the amount of attack traffic that could be proliferated from these kinds of devices is going to be so high, that people are looking at moving DDoS mitigation from the perimeter of the network to each server. And that's a great example that we're working on with a large service provider. >> I'm kind of curious how the systems take advantage of your technology. I can see some of it being transparent: if you just want to jam more bits through the system, then that should be pretty transparent to the app, and maybe even to the data plane and the virtual switches. But I'm guessing also there are probably some API or other software-driven ways of doing it, like to say, hey, not only do I want you to jam more bits through there, but I want to do some packet inspection, or I want to do some massaging or some QoS, or I'm not sure what all these SmartNICs do. So is my model correct? Are those kind of the different ways of interacting with your technology? >> You're hitting a great point. A great question, by the way, thank you. So the world has evolved from very custom, proprietary ways of doing things to more standard ways of doing things. And one thing that has kind of standardized, so to say, the data plane that does all of these functions that you mention, things like security or ACL rules or virtualization, is Open vSwitch. Open vSwitch is a great example of a data plane that has kind of standardized how you do things. And there are a lot of new open source projects happening in the Linux Foundation, such as VPP for example. So each of these standardizes the way you do it, and then it becomes easier for vendors like us to implement a standard data plane and then work with the Linux kernel community in getting all of those things upstream, which we are working on.
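The shift Das describes above, from perimeter DDoS appliances to enforcement on each server, boils down to per-source rate limiting at every host. A toy sketch of that policing step follows; the rates and bucket sizes are chosen purely for illustration.

```python
# Per-source token-bucket rate limiter, a toy version of the kind of
# DDoS policing that moves from a perimeter appliance to each server.
# Rates and bucket depths here are arbitrary illustration values.

class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps     # tokens added per second
        self.burst = burst       # maximum bucket depth
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # src_ip -> TokenBucket

def police(src_ip, now, rate_pps=2, burst=3):
    bucket = buckets.setdefault(src_ip, TokenBucket(rate_pps, burst))
    return "pass" if bucket.allow(now) else "drop"

# A burst of 5 packets at t=0 from one source: the first 3 pass, the rest drop.
decisions = [police("198.51.100.7", now=0.0) for _ in range(5)]
print(decisions)  # ['pass', 'pass', 'pass', 'drop', 'drop']
```

Running the same per-flow state for millions of IoT sources is exactly the flow-processing load that, per the interview, a general purpose core handles poorly and a SmartNIC absorbs.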
And then having the Red Hats of the world actually incorporate those into their distributions, so that the deployment model becomes much easier, right. And one of the topics of discussion with Red Hat that we presented today was exactly that: how do you make this kind of scalability for security and telemetry more easily accessible to users through a Red Hat distribution, for example. >> Sujal, can you give us a little bit of an overview of the sessions that Netronome has here at the show, and what are the challenges people are coming with that they're excited to meet with your company about? >> Absolutely, so we presented one session with Mirantis. Mirantis, as you know, is a huge OpenStack player. With Mirantis, we presented exactly the problem statement that I was talking about. So when you try to do security with OpenStack, whether it's stateless or stateful, your performance kind of tanks when you apply a lot of security policies, for example, on a per-server basis, as you can do with OpenStack. So when you use a SmartNIC, you essentially return a lot of the CPU cores to the revenue-generating applications, so essentially operators are able to make more money per server. That's a sense of what the value is, so that was the topic with Mirantis, who actually use the Open Contrail virtual router data plane in their solution. We also presented with Juniper, which is also-- >> Stu: Speaking of Open Contrail. >> Yeah, so Juniper has another version of Contrail. So we're presenting a very similar product, but that's with the commercial product from Juniper. And then yesterday we presented with Red Hat. And Red Hat is based on Red Hat's OpenStack and their Open vSwitch based products, where of course we are upstreaming a lot of these code bits that I talked about.
But the value proposition is uniform across all of these vendors, which is: when you do storage, sorry, security, and telemetry and virtualization, et cetera, in a distributed way across all of your servers, and get rid of all of your appliances, you get better scale. But to achieve the efficiencies in the server, you need a SmartNIC such as ours. >> I'm curious, is the technology usually applied at the per-server level, or is there a rack-scale component too that needs to be there? >> It's on a per-server basis, so the use case is like any other traditional NIC that you would use. So it looks and feels like any other NIC, except that there are more processing cores in the hardware and there's more software involved. But again, all of the software gets tightly integrated into the OS vendor's operating system and then the OpenStack environment. >> Got you. Well, I guess you can never be too rich, too thin, or have too much bandwidth. >> That's right, yeah. >> Sujal, share with our audience any interesting conversations you had, or other takeaways you want people to have from the OpenStack Summit. >> Absolutely, so without naming specific customer names, we had one large data center service provider in Europe come in, and their big pain point was latency. Latency going from the VM on one server to another server. And that's a huge pain point, and their request was to be able to reduce that by 10x at least. And we're able to do that, so that's one use case that we have seen. The other again relates to telemetry, you know, how... This is a telco service provider, so as they go into 5G, they have to service many different applications with what they call network slices. One slice servicing the autonomous car applications. Another slice managing the video distribution, let's say, with something like Netflix, video streaming. Another one servicing the cellphone, something like a phone like this, where the data requirements are not as high as some TV sitting in your home.
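The in-band telemetry Das keeps returning to works by having each network element append metadata, such as timestamps, to packets in flight, so a collector can localize latency to a specific hop. A toy sketch follows; the record layout and numbers are invented for illustration, while the real wire formats are defined by the P4.org INT specification.

```python
# Toy in-band network telemetry (INT) collector. Each hop appends a
# (switch_id, ingress_ts, egress_ts) record to the packet; the sink
# reads the stack to compute per-hop and end-to-end latency.
# Record layout and timestamp values are invented for illustration.

def per_hop_latency(int_stack):
    """int_stack: list of (switch_id, ingress_ts_us, egress_ts_us),
    ordered from first hop to last."""
    return {sw: egress - ingress for sw, ingress, egress in int_stack}

def path_latency(int_stack):
    # Time from entering the first switch to leaving the last one.
    return int_stack[-1][2] - int_stack[0][1]

stack = [("leaf1", 100, 105), ("spine1", 110, 140), ("leaf2", 145, 148)]
print(per_hop_latency(stack))  # {'leaf1': 5, 'spine1': 30, 'leaf2': 3}
print(path_latency(stack))     # 48

# A monitoring loop could flag 'spine1' here: one congested hop accounts
# for most of the path latency, the kind of rogue-neighbor detection the
# interview describes for keeping per-slice SLAs.
```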
So they need different kinds of SLAs for each of these services. How do they slice and dice the network, and how are they able to actually identify the rogue VM, so to say, that might cause performance to go down and affect SLAs? Telemetry, or what is called in-band telemetry, is a huge requirement for those applications. So I'm giving you, like, two: one is a data center operator, you know, infrastructure as a service, that just wants lower latency. And the other one is interested in telemetry. >> So, Sujal, final question I have for you. Look forward a little bit for us. You've got your strategy hat on. Netronome, OpenStack in general, what do you expect to see as we look throughout the year? Maybe if we're, you know, sitting down with you in Vancouver a year from now, what would you hope that we as an industry and as a company have accomplished? >> Absolutely, I think, you know, you'd see a lot of these products, so to say, that enable seamless integration of SmartNICs become available on a broad basis. I think that's one thing I would see happening in the next one year. The other big event is the whole notion of hyperconvergence that I talked about, right. I would see the notion of hyperconvergence move away from one of just storage focus to security and telemetry, with OpenStack kind of addressing that from a cloud orchestration perspective. And also, with each of those requirements, software defined networking, which is being able to evolve your networking data plane rapidly on the run. These are all going to become mainstream. >> Sujal Das, pleasure catching up with you. John and I will be back to do the wrap-up for day two. Thanks so much for watching theCUBE. (techno beat)
Chuck Tato, Intel - Mobile World Congress 2017 - #MWC17 - #theCUBE
>> Narrator: Live from Silicon Valley, it's theCUBE. Covering Mobile World Congress 2017. Brought to you by Intel. >> Okay, welcome back everyone, we're here live in Palo Alto for day two of two days of Mobile World Congress special coverage here in Palo Alto, where we're bringing all the folks in Silicon Valley here in the studio to analyze all the news and commentary, which we've been watching heavily on the ground in Barcelona. We have reporters, we have analysts, and we have friends there. Of course, Intel is there, as well as SAP and a variety of other companies we've been talking to on the phone, and all those interviews are on YouTube.com/siliconANGLE. And we're here with Chuck Tato, who's the marketing director for the data center and communications group at Intel, around the FPGA, the programmable chips, formerly with the Altera Group, now a part of Intel. Welcome to theCUBE, and thanks for coming on. >> Thank you for having me. >> So, actually all the rage at Mobile World Congress is Intel, big splash, and you guys have been, I mean, Intel has always been the bellwether. I was saying this earlier, Intel plays the long game. You have to in the chips game. You've got to build the factories, build fabs. Most of all, Intel has been the heartbeat of the industry, and now it's doing more with chips: making them smaller, faster, and less expensive, with just more power. The cloud does that. So you're in the cloud data center group. Take a second to talk about what you guys do within Intel, and why that's important for folks to understand. >> Sure. I'm part of the programmable solutions group. So the programmable solutions group primarily focuses on field programmable gate array technology that was acquired through the Altera acquisition at Intel. So our focus in my particular group is around data center and comms infrastructure.
So there, what we're doing is we're taking the FPGAs and applying them to the data center as well as carrier infrastructure to accelerate things: make them faster, make them more repeatable, or more deterministic in nature. >> And so, that's how it works, as you were explaining beforehand: you can send a stream of bits at it and it changes the functionality of the chip. >> Yes. So essentially, an FPGA, think of it as a malleable set of resources. When I say that, you know, it's basically a fabric with many resources in an array. So through the use of a bit stream, you can actually program that fabric to interconnect the different elements of the chip to create any function that you would like, for the most part. So think of it as: you can create a switch, you can create a classification engine, and things like that. >> And why would someone want that functionality versus just a purpose-built chip? >> Perfect question. So if you look at it, there's two areas. So in the data center, as well as in carrier infrastructure, the workloads are changing constantly. And there's two problems. Number one, you could create infrastructure that becomes stranded. You know, you think you're going to have so much traffic of a certain type, and you don't. So you end up buying a lot of purpose-built equipment that's just wrong for what you need going forward. So building infrastructure that is common, so it's kind of COTS, you know, on servers, but adding FPGAs to the mix, allows you to reconfigure the networking within the cloud, to allow you to address the workloads that you care about at any given time. >> Adaptability seems to be the key thing. You know, trends change based upon certain things, and certainly the first time you see things, you've got to figure them out. But this gives a lot of flexibility, it sounds like. >> Exactly. Adaptability is the key, as well as bandwidth and determinism, right?
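The bitstream reprogramming Tato describes just above can be caricatured in software: think of the fabric as a slot whose entire personality is replaced when a new bitstream loads. This is purely illustrative; real bitstreams configure logic blocks and routing, not Python callables.

```python
# Toy model of FPGA reconfigurability: the "fabric" is a slot whose
# behavior is swapped by loading a different "bitstream" (here, just a
# Python function standing in for a hardware configuration).

class Fabric:
    def __init__(self):
        self.function = None

    def load_bitstream(self, fn):
        # Reprogramming replaces the whole personality of the device.
        self.function = fn

    def process(self, packet):
        return self.function(packet)

def l2_switch(packet):
    return f"switched {packet} on MAC table"

def classifier(packet):
    return f"classified {packet} into flow 7"

fabric = Fabric()
fabric.load_bitstream(l2_switch)
print(fabric.process("pkt0"))   # switched pkt0 on MAC table

# Workload changed? Reload the fabric as a different function,
# avoiding the stranded purpose-built equipment described above.
fabric.load_bitstream(classifier)
print(fabric.process("pkt0"))   # classified pkt0 into flow 7
```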
So when you get high bandwidth coming into the network, and you want to do something very rapidly and consistently to provide a certain service level agreement, you need to have circuits that are actually very, very deterministic in nature. >> Chuck, I want to get your thoughts on one of the key things. I talked with Sandra Reddy, Sandra Rivera, sorry, she was, I interviewed her this morning, as well as Dan Rodriguez, and Caroline Chan, Lyn Comp as well. Lot of different perspectives. I see 5G as big on one hand, with the devices out there announced on Sunday. But what was missing, and I think Fortune was really the only one I saw pick up on this besides SiliconANGLE in terms of the coverage, was that there's a real end-to-end discussion here around not just 5G as the connectivity piece that the carriers care about, but the under-the-hood work that's changing in the data center. And the car's a data center now, right? >> Yeah. >> So you have all these new things happening: IOT, people with sensors on them, and devices, and then you've got the cloud-ready compute available, right? And we love what's happening with cloud. Infinite compute is there and makes data work much better. How does the end-to-end story with Intel, and the group that you're in, impact that, and what are some of the use cases that seem to be popping up in that area? >> Okay, so that's a great question, and I guess some of the examples that I could give of where we're creating end-to-end solutions would be in wireless infrastructure, as you just mentioned. As you move on to 5G infrastructure, the goal is to increase the bandwidth by 100X and reduce the latency by orders of magnitude. It's a very, very significant challenge, and quite difficult to do just in software. FPGA is a perfect complement to a software-based solution to achieve these goals. For example, virtual switching. It's a significant load on the processors.
By offloading virtual switching to an FPGA, you can create the virtual switch that you need for the particular workload that you have. Workloads change, depending on what type of services you're offering in a given area. So you can tailor it to exactly what you need. You may or may not need high levels of security, so things like IPsec, you know, at full line rate, are the kind of things that FPGAs allow you to add ad hoc. You can add them where you need them, when you need them, and change them as the services change. >> It sounds like, I'd never thought about that, but it sounds like this is a real architectural advantage, because I'd never thought about offloading the processor, and all of us who open up or build our PCs know that the heat sinks only get bigger and bigger, so people want that horsepower for very processor-intensive things. >> Absolutely. So we do two things. One is we create this flexible infrastructure; the second thing is we offload the processor, which, you know, frees up cores to do more value-added things. >> Like gaming, for example; my kids love to see that gaming. >> Yes. There's gaming, virtual reality, augmented reality; all of those things are very CPU intensive, but there's also a compute-intensive aspect. >> Okay, so I've got to get your take on this. This is kind of a cool conversation, because the virtual reality and augmented reality really are relevant. That is a key part of Mobile World Congress, besides the IOT, which I think is the biggest story this year: IOT, and all the security aspects around it, and all that good stuff. And that's really where the meat is, but the real sex appeal is the virtual reality and augmented reality. That's an example of the new things that have popped out of the woodwork, so the question for you is: with all these new use cases that emerge, there will be new things that pop out of the woodwork.
"Oh, my God, I don't have to write software for that. There's an app for that now." So the new apps are going to start coming in, whether it's something new and cool on a car, something new and cool on a sensor, something new and cool in the data center. How adaptive are you guys, and how do you guys fit into preparing for this unknown future? >> Well, that's a great question, too. I like to think about new services coming forward as being a unique blend of storage, compute, and networking, and depending on the application, and the moment in that application, you may have to change that mix in a very flexible way. So again, the FPGA provides you the ability to change all of those to match the application needs. I'm surprised, as we dig into applications, you know, how many different sets of needs there are. So each time you do that, you can envision reprogramming your FPGA. So just like a processor, it's completely reprogrammable. You're not going to reprogram it in the same instantaneous way that you do in software, but you can reprogram it on the fly, however you would like. >> So, I'm kind of a neophyte here, so I want to ask some dumb questions; they'll probably be dumb to you, but common to me. Like, okay, who writes the bits? Is it the coders, or is it someone on the firmware side? I'm trying to understand where the line is between that hardened kind of Intel goodness that goes on algorithmically or automatically, and what programmers do. So think full-stack developer, or a composer, a more artisan type who's maybe writing an app. Are there both access points to the coding, or, where does the coding come from? >> So there's multiple ways that this is happening. The traditional way of programming an FPGA is the same way that you would design any ASIC in the industry, right? Somebody sits down and they write RTL; they're very specialized programmers. However, going forward, there's multiple ways you can access it.
For one, we're creating libraries of solutions that you can access through APIs that are built into DPDK, for example on Xeon. So you can very easily access accelerated applications and inline applications that are being developed by ourselves as well as third parties. So there's a rich ecosystem. >> So you guys are writing hooks that go beyond that ASIC-style specialist programming. >> Absolutely. So this makes it very accessible to programmers. The acceleration is there, from a library, and purpose-built. >> Give me an example, if you can. >> Sure, virtual switch. So in our platform for NFV, we're building in a virtual switch solution, and you can program that, you know, totally in software through DPDK. >> One of the things coming up with NFV that's interesting, I don't know if this is your wheelhouse or not, but I want to throw it out there because it's come up in multiple interviews and in the industry. You're seeing very cool ideas and solutions roll out, and I'll make one up off the top of my head: Openstack. Openstack is a great, great vision, but there's a lot of fumbling in the execution of it, and the cost of ownership goes through the roof because there's a lot of operation. I'm overgeneralizing a certain use case, not all Openstack, but generally speaking, and I have the same problem with big data, where, great solution-- >> Uh-huh. >> But when you lay out the architecture and then deploy it, there's a lot of cost-of-ownership overhead in terms of resources. So is this kind of an area that you guys can help simplify? 'Cause that seems to be a sticking point for people who want to stand up some infrastructure, do dev ops, and then get into this API-like framework. >> Yes, from a hardware perspective, we're actually creating a platform which includes a lot of software to tie into Openstack. So that's all preintegrated for you, if you will.
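The two access points Tato contrasts, hand-written RTL versus prebuilt accelerated functions reached through an API, can be sketched as dispatch-with-fallback: the application calls one stable name and gets the accelerated implementation when the library has registered one. All names below are invented for illustration; this is not DPDK's actual API.

```python
# Sketch of library-based acceleration access: the application calls a
# stable API; the runtime dispatches to an accelerated implementation
# when one is registered, else to a plain software fallback.
# Function and registry names are hypothetical, not DPDK's actual API.

registry = {}

def register_accelerated(name, fn):
    # A vendor library might do this at initialization time.
    registry[name] = fn

def dispatch(name, fallback):
    return registry.get(name, fallback)

def sw_checksum(data):
    # Plain software path (illustrative stand-in computation).
    return sum(data) % 256

# Pretend an FPGA-backed version was registered; same result, "faster".
register_accelerated("checksum", lambda data: sw_checksum(data))

checksum = dispatch("checksum", sw_checksum)
print(checksum(bytes([1, 2, 3])))  # 6
```

The point of the pattern is that the application programmer never writes RTL; only the function behind the registered name changes.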
So at least from a hardware interface perspective, I can say that that part of the equation gets neutralized. In terms of the rest of the ownership part, I'm not really qualified to answer that question. >> That's good media training, right there. Chuck just came back from Intel media training, which is good. We got you fresh. Network transformation also points to some really cool, exciting areas that are going on that are really important. The network layer, you see NFV and SDN, for instance; those are really important areas that people are innovating on, and they're super important because, again, this is where the action is. You have virtualization, you have new capabilities, you've got some security things going down lower in the stack. What's the impact there from an Intel perspective, helping this end-to-end architecture be seamless? >> Sure. So what we are doing right now is creating a layer on top of our FPGA-based SmartNIC solutions, which ties together all of that into a single platform, and it cuts across multiple Intel products. We have, you know, Xeon processors integrated with FPGAs; we have discrete FPGAs built onto cards that we are in the process of developing. So from a SmartNIC through to a fully-integrated FPGA plus Xeon processor, it's one common framework. One common way of programming the FPGA, so IP can move from one to the other. So there's a lot of very neat end-to-end and seamless capabilities. >> So the final question is the customer environment. I would say you guys have a lot of customers out there. Edge computing is a huge thing right now. We're seeing that as a big part of this, kind of, the clarity coming out of Mobile World Congress, at least from the telco standpoint; it's kind of not new in the data center area. The edge now is redefined. Certainly with IOT-- >> Yes. >> And IOTP, which we're calling IOT for people having devices. What are the customer challenges right now that you are addressing?
Specifically, what are the pain points, and what's the current state-of-the-art relative to the customers' expectations now, that they're focused on and that you guys are solving? >> Yeah, that's a great question, too. We have a lot of customers now that are taking transmission equipment, for example mobile backhaul types of equipment, and they want to add mobile edge computing and NFV-type capabilities to that equipment. The beauty of what we're doing is that the same solution that we have for the cloud works just as well in that same piece of equipment. FPGAs come in all different sizes, so you can fit within your power envelope, and processors come in all different sizes. So you can tailor your solution-- >> That's super important on the telco side. I mean, power is huge. >> Yes, yes, and FPGAs allow you to tailor the power equation as much as possible. >> So the question, I think the next question is, does this make it cloud-ready? Because that's a term that we've been hearing a lot of: cloud-ready. 'Cause it sounds like what you're offering is the ability to kind of tie into the same stuff that the cloud has, or the data center. >> Yes, exactly. In fact, you know, there's been very high profile press around the use of FPGAs in cloud infrastructure. So we're seeing a huge uptick there. So it is getting cloud-ready. I wouldn't say it's perfectly there, but we're getting very close. >> Well, the thing that's exciting to me, I think, is that the cloud native movement really talks about, again, you know, these abstractions with microservices, and you mentioned the APIs; it really fits well into some of the agileness that needs to happen at the network layer, to be more dynamic. I mean, just think about the provisioning of IOT. >> Chuck: Yeah. >> I mean, I'm a telco, I've got to provision a phone; that has to get a phone number, connect to the network, and then have sessions go to the base station, and then back to the cloud.
Imagine having to provision up and down, zillions of times, those devices that may get provisioned once and go away in an hour. >> Right. >> That's still challenging, given the network fabric. >> Yes. It is going to be a challenge, but I think the more common we can make the physical infrastructure, the better and the easier that's going to be, and as we create more common-- >> Chuck, final question: what's your take from Mobile World Congress? What are you hearing, what's your analysis, commentary, any kind of input you've heard? Obviously, Intel's got a big presence there; your thoughts on what's happening at Mobile World Congress. >> Well, see, I'm not at Mobile World Congress, I'm here in Silicon Valley right now, but-- >> John: What have you heard? >> Things are very exciting. I'm mostly focused on the NFV world myself, and there's been just lots and lots of-- >> It's been high profile. >> Yes, and there's been lots of activity, and you know, we've been doing demos and really cool stuff in that area. We haven't announced much of that on the FPGA side, but I think you'll be seeing more-- >> But you're involved, so what's the coolest thing in NFV that you're seeing? Because it seems to be crunch time for NFV right now. This is a catalyst point where, at least from my covering NFV and looking at it, the iterations of it, it's primetime right now for NFV, true? >> Yeah, it's perfect timing, and it's actually perfect timing for FPGAs. I'm not trying to just give it a plug. When you look at it, trials have gone on, very significant, lots of learnings from those trials. What we've done is we've identified the bottlenecks, and my group has been working very hard to resolve those bottlenecks, so we can scale and roll out in the next couple of years, and be ready for 5G when it comes.
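Chuck's "one common framework" point (the same programming model whether the FPGA is discrete on a SmartNIC card or integrated alongside a Xeon processor) can be sketched abstractly. The Python below is purely illustrative: none of these class or method names come from Intel's actual toolchain, and it is only an assumption-laden sketch of why a common device interface lets the same accelerator IP and application code target either form factor.

```python
# Hypothetical sketch only: names and APIs are invented for illustration,
# not taken from Intel's real FPGA programming stack.
from abc import ABC, abstractmethod


class FpgaDevice(ABC):
    """One common interface, whether the FPGA sits on a SmartNIC
    or is integrated alongside a Xeon processor."""

    @abstractmethod
    def load_ip(self, ip_block: str) -> None:
        """Load an accelerator IP block onto the device."""

    @abstractmethod
    def offload(self, payload: bytes) -> bytes:
        """Run the loaded IP block against a payload."""


class SmartNicFpga(FpgaDevice):
    # Discrete FPGA on a network card.
    def load_ip(self, ip_block: str) -> None:
        self.ip_block = ip_block

    def offload(self, payload: bytes) -> bytes:
        return payload[::-1]  # stand-in for real packet processing


class IntegratedFpga(FpgaDevice):
    # FPGA integrated in the same package as the CPU.
    def load_ip(self, ip_block: str) -> None:
        self.ip_block = ip_block

    def offload(self, payload: bytes) -> bytes:
        return payload[::-1]  # same IP, same behavior, different form factor


def accelerate(dev: FpgaDevice, data: bytes) -> bytes:
    # Application code is identical for either device; that portability
    # is the point of the "one common framework" idea.
    dev.load_ip("vswitch_offload")
    return dev.offload(data)
```

Because both device classes satisfy the same interface, `accelerate(SmartNicFpga(), data)` and `accelerate(IntegratedFpga(), data)` behave identically, which is the portability property the interview describes.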
>> Software definer, Chuck Tato, here from Intel, inside theCUBE, breaking down the coverage from Mobile World Congress, as we wind down our day in California. The folks in Spain are just going out; it should be about 12:00 o'clock at night there, and they're going to bed, depending on how beat they are. Again, it's in Barcelona, Spain; it's where it's at. We're covering it from here and also talking to folks in Barcelona. We'll have more commentary here in Silicon Valley on Mobile World Congress after this short break. (techno music)