Dr. Nic Williams, Stark & Wayne | Cloud Foundry Summit 2018
(electronic music) >> Announcer: From Boston, Massachusetts, it's theCUBE. Covering Cloud Foundry Summit 2018. Brought to you by the Cloud Foundry Foundation. >> I'm Stu Miniman, and this is theCUBE's coverage of Cloud Foundry Summit 2018, here in beautiful Boston, Massachusetts. Happy to welcome to the program first-time guest, Dr. Nic Williams, CEO of Stark and Wayne. Dr. Nic, thanks for joining me. >> Thank you very much. I think you must've come to the conference from a different direction than I came. >> I'm a local, and I'm trying to get more people to come to the Boston area. We've been doing theCUBE now for, coming up on our ninth year of doing it, and it's only the third time I've done something in this convention center, so please, more tech shows to this area, Boston, the Hynes Convention Center, and things like that. >> There's plenty of tech people. I was at the Nero Cafe, everyone seemed like they were a tech person. >> Oh no, the Seaport region here is exploding. I've done two interviews today with companies here in Boston or Cambridge. There's a great tech scene. For some reason, you and I were joking, it's like, do we really need another conference in Vegas? I mean really. >> Dr. Nic: Right, no, I like the regional. >> But yeah, the weather here is unseasonably cold. It was snowing and sleeting this morning, which is not the spring weather. >> It is April, it is mid-April, and it's almost snowing outside. >> Alright, so Dr. Nic, first of all, you get props for the T-shirt. You've got Iron Man and Doctor Doom, and we're saying that there is a connection between the superheroes and Stark and Wayne. >> Right, so Stark and Wayne is founded by two fictional superheroes. The best founders are the fictional ones, they don't go to meetings, they're too busy making, you know, films. >> Yes, but everybody knows that Tony Stark is Iron Man, but nobody's supposed to know that Bruce Wayne was Batman. >> Nic: Right, right.
>> But I've heard Stark and Wayne mentioned a number of times by customers here at the conference. So, for our audience that doesn't know, what does Stark and Wayne do, and how are you involved in the Cloud Foundry ecosystem? >> So Stark and Wayne: I founded Stark and Wayne, but earlier than that I discovered Bosh, six years ago, when it was first released. I claimed to be the world's first evangelist for Bosh, and still probably the number one evangelist. And so Stark and Wayne came out of that. I was VMware and Pivotal's go-to person for standing things up, and then customers grew, and you know. Yeah, people want to know who to go to, and when it comes to running Cloud Foundry, that's us. >> Yeah well, there's always that discussion, right? We've got all these wonderful platforms and these things that go together, but a lot of times there's services and people that help to get those up. Pivotal, just had a great discussion with a Pivotal person, talking about the reason they bought Pivotal Labs originally was like, wow, when people got stuck, that's what Pivotal Labs helps with, that whole application development, so you're doing similar things with Bosh? >> Correct. No it's, we have our mental model around what it is to run operations of a platform, where you're running complex software, but you have an end user who expects everything just to work. And they never want to talk to you, and you don't want to talk to them. So it's this new world of IT where they get what they want instantly, that's the platform and it has to keep working. >> Dr. Nic, is it an unreasonable thing for people to say that, yeah I want the things to work, and it shouldn't go down, and you know-- >> What is shadow IT? Shadow IT is the rebellion against corporate IT, so we want to bring back, well, we want to bring the wonders of public services to corporate environments. >> Okay, so-- >> That's the Cloud Foundry story.
>> Yeah, so talk to me a little bit about your users. We've watched this ecosystem mature since the early days, you know, things are more mature, but what's working well? What are the challenges? What are some of the prime things that have people calling up your team? >> So our scope, our users, our customers: they're the GEs and the Fords of the world, running large Cloud Foundry installations either as a service or internally. And whilst Cloud Foundry is getting better and better, the security model is better, the upgrades seem to be flawless, it does keep getting more complex. You know, you can't just add container-to-container networking and it not get more complicated, right? So, yeah, trying to keep up-to-date with not just the core, but even the community of projects going on is part of the novelty, but also it's trying to bring it to customers and be successful. >> Yeah, I go to a number of these shows that are open source, and every time you come there, it's like, "Well, here's the main things we're talking about, "but here's six other projects that come up." How does that impact some of what you were just talking about? But, maybe elaborate as to how you deal with the pace of change, and those big companies, how do they help integrate those into what they're doing, or do they, you know-- >> So my Twitter is different from your Twitter. So my Twitter is 10 years' worth of collecting people who talk about interesting things, putting in a URL, just referencing an idea they're having, so they tend to be the thought leaders. They might be wrong, or like, let's put Docker into production, like, it doesn't make it wrong, but you've got to be wary of people who are too early. And you just start to piece together a picture of what's being built, and you start to know which groups and which individuals are machines, and make great stuff, and you sort of track their work.
Like HashiCorp, Mitchell Hashimoto, I knew him before HashiCorp, and he is a monster, and so you tend to track their work. >> So your Twitter and my Twitter might be more alike than you think. >> Nic: No maybe, right. >> I interviewed Armon at the KubeCon show last year. My Twitter blowing up at the show was a bunch of people arguing about whether serverless was going to eradicate this whole ecosystem. >> Well, we can argue about that if you like, I guess. >> But look, one of the things coming into this show was, you know, how does the whole Kubernetes discussion fit into Cloud Foundry? We've heard at this show, Microsoft, Google, many others, talking about, look, open source communities, they're going to work together. >> Well Windows is going to track things 'cause they think they need to sell them, right? But then Microsoft has Service Fabric, which they've owned and operated internally for 10 years, and so, I think some really interesting products may be built on top of Service Fabric, because of what it is. Whereas, you know, Kubernetes will run things, Service Fabric may build net new projects. And then Cloud Foundry's a different experience altogether, so some people, it's what problems they experienced, comes to the solution they find, and unless you've tried to run a platform for people, you might not think the solution's a platform. You might think it's Kubernetes, but-- >> Yeah, so one of the things we always look at when we talk about platforms, is what do they get stood up for? How many applications do you get to stand up there? What don't they work for? Maybe you could help give us a little bit of color as to what you see? >> I'm pretty good at jamming anything into Cloud Foundry, so I have a pretty small scope of what doesn't fit, but typically the idea of Cloud Foundry is the assumption the user is a developer who has 10 iterations a day. Alright, so they want to deploy, test, deploy, test, and then layer pipelines on top of that.
You also get, you're going to get the backend of long, stable apps, but the value, for many people, is that deploy experience. And then, you know, whilst you're going to get those apps that live forever, we still get to replace the underlying core of it. So you still maintain a security model even for the things that are relatively unloved. And this is really valuable, like the nice, clean separation of the security, the package CVEs, and the base OS, then the apps is part of the-- >> Yeah, absolutely, there's been an interesting kind of push and pull lately. We need to take some of those old applications, and we may need to lift and shift them. It doesn't mean that I can necessarily take advantage of all the cool stuff, and there are some things that I can't do with them when I get them on to that new platform. But absolutely, you need to worry about security, you know, data's like the center of everything. >> If you're lifting and shifting, there probably is no developer looking after it, so it's more of an operator function, and they can put it anywhere they like. They're looking after it now, whereas the Cloud Foundry experience is that developer-led experience that has an operations backend. If you're lifting and shifting, if it fits in Cloud Foundry, great, if it fits in Kubernetes, great. It's your responsibility. >> Yeah, what interaction do you have with your clients, with some of the kind of cultural and operational changes that they need to go through? So thinking specifically, you've got the developers doing things, you know, the operators, whether they're involved, whether that be DevOps or not, but I'm curious-- >> So the biggest change, when it comes to helping people who are running platforms (and I know many people want to talk about the cloud transformation, but let's talk about the operations transformation), is to become a service-orientated group who are there to provide a service.
Yes you're internal, yes they all have the same email address that you do, but you're a service-orientated organization, and that is not technology, that is a mental model. And if you're not service-orientated, shadow IT occurs, because they can go to Amazon and get a support organization that will respond to them, and so you're competing with Amazon, and Google, and you need to be pretty good. >> Yeah, you mentioned that, you know, your typical client is kind of a large, maybe I'm putting words in your mouth, the Fortune 1000 type companies, does this sort of-- >> We haven't got Berkshire. We haven't got Berkshire, and so if we're going to go Fortune 5, you know, we'd like, I've read my Warren Buffett biography, I reckon the FA are here to meet him. >> Right, so one of the questions, is this only for the enterprise? Can it be used for smaller businesses, for newer businesses? >> What's interesting is people think about Cloud Foundry as like, "Oh you run it on your infrastructure." Like, I did a talk in 2014, '15, when Docker was starting to be frothy, was, before you think you want to build your own PaaS, ring me on the hotline. Like, argue with me about why you wouldn't just use Heroku, or Pivotal Web Services, or IBM Cloud, like a public PaaS. Please, I beg of you, before you go down any path of running on-prem anything, answer solidly the question of why you just wouldn't use a public service. And yeah, so it really starts at that point. It's like, use someone else's, and then if you have to, run your own. So, who's really going to have all these rules? It's large organizations that have these, "Oh, no, no, we have to run our own." >> Well doctor, one of the things we've said for a while, is there's lots of things that enterprises suck at, that they need to realize that they shouldn't be doing.
So start at the most basic level: there's like five companies in the world that are good at building data centers, nobody else should build data centers if you can use somebody else that can do that. So as you go up and up the stack, you want to get rid of the undifferentiated heavy lifting, things like that, so-- >> I like to joke that every CIO, the moment they get that job, like that's their ticket to get to build their own data center. It's like, what else was the point of becoming a CIO? I want to build my own data center. >> No, not anymore, please-- >> Not anymore, but you know, plus they've been around a little longer than-- >> So, what is that line? Where should companies be able to consume a platform, versus where do they add the value, and do you help customers kind of understand that that-- >> By the time they're talking to us, they're pretty far along having convinced themselves about what they're doing. And they have their rules. They have their isolation rules, their data-ownership rules, and they'll have their level of comfort. So they might be comfortable on Amazon, Google, Azure, or they might still not be comfortable with public cloud, and they want the vSphere, but they still have that notion of we're going to run this ourselves. And most of them it's not running one, because that idea of we need our own propagates throughout the entire organization, and they'll start wanting their own Cloud Foundry-- >> Look, I find that when I talk to users, we, the vendors, and those that watch the industry, always try to come up with these multi-cloud, hybrid cloud-type discussions. Users have a cloud strategy, and it's usually often siloed just like everything else, and right, they're using-- >> Developers-- >> I have some data service, and it's running on Google-- >> Developers just want to have a nice life. >> Microsoft apps. >> They just want to get their work done.
They want to feel like, "Alright this is a great job, "like, I'm respected, I get interesting work, "we get to ship it, it actually goes into production." I think if you haven't ever had a project you've worked on that didn't go into production, you haven't worked long enough. Many of us work on something for it not to be shipped. Get it into production as quick as possible and-- >> So, do you have your, you know, utopian ideal world though as to, this is the step-- >> Oh, absolutely-- >> And this is how it'll be simple. >> Tell developers what the business problems are. Get them as close to the business problems, and give them responsibility to solve them. Don't put them behind layers of product managers, and IT support-- >> But Dr. Nic, the developers, they don't have the budget-- >> Speak for utopian-- >> How do we sort through that, because, right, the developer says they want to do this, but they're not tied to the person that has the budget, or they're not working with the operators, I mean, how do we sort through that? >> How do we get to utopia? >> Stu: Yeah. >> Well, Facebook, Google, Microsoft, they all solved utopia, right? So, this is, think more like them, and perhaps the CEO of the company shouldn't come from sales, perhaps it should be an IT person. >> Well, yeah, what's the T-shirt for the show? It was like running at scale, when you reach a certain point of scale, you either need to solve some of these things, or you will break? >> Right, alright look, hire great sales organizations, but if you don't have empathy for what your company needs to look like in five years' time, you're probably not going to allow your organization to become that. The power games, alright? If everyone assumes that the marketing department becomes the top of the organization, or the, you know, then the good people are going to leave to go to organizations where they might become CEO one day. >> Alright, Dr. Nic, want to give you the final word.
For the people that haven't been able to come to the sessions, check out the environment, what are they missing at this show? What is exciting you the most in this ecosystem? >> Like any conference you go to, the learning is all put online. Your show is put online; every session is put online. You don't come just to learn. You get the energy. I live in Australia, I work from a coffee shop, my staff are all in America, and so you come just to get the energy, that you're doing the right thing, that you get surrounded by a group of people, and certainly no one walks away from a CF Summit feeling like they're in the wrong career. >> Excellent. Well, Dr. Nic, appreciate you helping us understand the Infinity Wars of cloud environments here. Stark and Wayne, thanks so much for joining us. I'm Stu Miniman, and you're watching theCUBE. >> Dr. Nic: Thanks Stu. (electronic music)
John Pisano and Ki Lee, Booz Allen | theCUBE Conversation 2021
(upbeat music) >> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Well, welcome to theCUBE Conversation here in theCUBE studios in Palo Alto, California. I'm John Furrier, your host. Got a great conversation with two great guests, going to explore the edge, what it means in terms of commercial, but also national security. And as the world goes digital, we're going to have that deep dive conversation around how it's all transforming. We've got Ki Lee, Vice President of Booz Allen's Digital Business. Ki, great to have you. John Pisano, Principal at Booz Allen's Digital Cloud Solutions. Gentlemen, thanks for coming on. >> And thanks for having us, John. >> So one of the hottest topics, obviously besides cloud computing having the most refactoring impact on business and government and public sector, has been the next phase of cloud growth and cloud scale, and that's really modern applications and consumer, and then here for national security and for governments here in the U.S. is military impact. And as digital transformation starts to go to the next level, you're starting to see the architectures emerge where the edge, the IoT edge, the industrial IoT edge, or any kind of edge concept, 5G is exploding, making that much more of a dense, more throughput for connectivity with wireless. You got Amazon with Snowball, Snowmobile, all kinds of ways to deploy technology, that's IT-like and operational technologies. It's causing quite a cloud operational opportunity and disruption, so I want to get into it. Ki, let's start with you. I mean, we're looking at an architecture that's changing both commercial and public sector with the edge. What are the key considerations that you guys see as people have to really move fast in this new architecture of digital? >> Yeah, John, I think it's a great question.
And if I could just share our observation on why we even started investing in edge. You mentioned the cloud, but as we've reflected upon kind of the history of IT, then you take a look from mainframes to desktops to servers to cloud to mobile and now IoT, what we observed was that industry investing in infrastructure led to kind of an evolution of IT, right? So as you mentioned, with industry spending billions on IoT and edge, we just feel that that's going to be the next evolution. If you take a look at, you mentioned 5G, I think 5G will certainly be an accelerator to edge because of the resilience, the lower latency and so forth. But taking a look at what's happening in space, you mentioned space earlier as well, right, and what Starlink is doing by putting satellites up to actually provide transport from space, we're thinking that that actually is going to be the next ubiquitous thing. Once transport becomes ubiquitous, just like cloud made storage ubiquitous, we think that the next generation internet will be space-based. So when you think about it, it won't be connected servers per se, it will be connected devices. >> John: Yeah, yeah. >> That's kind of some of the observations and why we've been really focusing on investing in edge. >> I want to come back to that piece around space and edge and bring it from a commercial and then also tactical architecture in a minute, 'cause there's a lot to unpack there: role of open source, modern application development, software and hardware supply chains, all are core issues that are going to emerge. But I want to get with John real quick on cloud impact, because you think about 5G and the future of work or play, you've got people, right? So whether you're at a large concert like Coachella or a 49ers or Patriots game, or Redskins game if you're in the D.C. area, you've got people there, you've got congestion, and now you've got devices serving those people.
And that's their play, people at work, whether it's a military operation, and you've got work, play, tactical edge things. How is cloud connecting? 'Cause this is like, the edge has never been kind of an IT thing. It's been more of a bandwidth thing, either telco or something else operationally. What's the cloud at scale, cloud operations impact? >> Yeah, so if you think about how these systems are architected and you think about those considerations that Ki kind of touched on, a lot of what you have to think about now is what aspects of the application reside in the cloud, where you tend to be less constrained. And then how do you architect that application to move out towards the edge, right? So how do I tier my application? Ultimately, how do I move data and applications around the ecosystem? How do I need to evolve where my application stages things and how that data and those apps are moved to each of those different tiers? So when we build a lot of applications, especially if they're in the cloud, they're built with some of those common considerations of elasticity, scalability, all those things; whereas when you talk about congestion and disconnected operations, you lose a lot of those characteristics, and you have to kind of rethink that. >> Ki, let's get into the aspect you brought up, which is space. And then I was mentioning the tactical edge from a military standpoint. These are use cases of deployments, and in fact, this is how people have to work now. So you've got the future of work or play, and now you've got the situational deployments, whether it's a new tower next to a stadium. We've all been at a game or somewhere or a concert where we've only got five bars and no connectivity. So we know what that means. So now you have people congregating in work or play, and now you have a tactical deployment. What are the key things that you're seeing that are going to help make that better? Are there any breakthroughs that you see that are possible?
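A quick aside on the tiering and disconnected-operations point Pisano makes above: one common pattern for an edge tier that loses its cloud assumptions is a local store-and-forward buffer, which keeps recording while the uplink is down and drains opportunistically when it returns. This is only a minimal sketch; the `EdgeBuffer` name and API are illustrative, not taken from any system discussed here.

```python
from collections import deque

class EdgeBuffer:
    """Buffer telemetry locally while the uplink is down; drain it when connectivity returns."""

    def __init__(self, max_items=10_000):
        # bounded queue: during a long outage the oldest records are dropped first
        self.queue = deque(maxlen=max_items)

    def record(self, item):
        self.queue.append(item)

    def flush(self, uplink_ok, send):
        """Drain through `send`, re-checking connectivity before each item; return count sent."""
        sent = 0
        while self.queue and uplink_ok():
            send(self.queue.popleft())
            sent += 1
        return sent
```

A sensor process would call `record()` on every reading and `flush()` on a timer: nothing is lost across a brief outage, and only a bounded, oldest-first slice is lost across a long one, which is exactly the trade-off the tiering decision has to make explicit.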
What's going on in your view? >> Yeah, I mean, I think what's enabling all of this, again, one is transport, right? So whether it's 5G to increase the speed and decrease the latency, whether it's things like Starlink making transport and comms ubiquitous, that, tied with the fact that chips continue to get smaller and faster, right? And when you're thinking about tactical edge, those devices have limited size, weight, and power constraints. And so the software that goes on them has to be just as lightweight. And that's why we've actually partnered with SUSE and what they've done with K3s to do that. So I think those are some of the enabling technologies out there. John, as you've kind of alluded to, there are additional challenges as we think about it. It's not a simple transition and monetization here, but again, we think that this will be the next major disruption. >> What do you guys think, John, if you don't mind weighing in too on this? As modern application development happens, we just were covering CloudNativeCon and KubeCon, DockerCon, containers are very popular. Kubernetes is becoming super great. As you look at the telco landscape where we're kind of converging this edge, it has to be commercially enterprise grade. It has to have that transit and transport that's intelligent and all these new things. How does open source fit into all this? Because we're seeing open source becoming very reliable, more people are contributing to open source. How does that impact the edge in your opinion? >> So from my perspective, I think it's helping accelerate things that may have been stuck in the traditional proprietary software confines. So within our mindset at Booz Allen, we were very focused on open architecture, open-based systems, which open source obviously is an aspect of that.
So how do you create systems that can easily interface with each other to exchange data, and how do you leverage tools that are available in the open source community to do that? So containerization is a big drive that is really going throughout the open source community. And there's just a number of other tools, whether it's tools that are used to provide basic services, like how do I move code through a pipeline all the way through? How do I do just basic hardening and security checking of my capabilities? Historically, those have tended to be closed-source-type apps, whereas today you've got a very broad community that's able to very quickly provide and develop capabilities and push it out to a community that then continues to adapt and add to it or grow that library of stuff. >> Yeah, and then we've got trends like Open RAN. I saw some Ground Station stuff from AWS. You're starting to see Starlink, you mentioned. You're bringing connectivity to the masses. What is that going to do for operators? Because remember, security is a huge issue. We talk about security all the time. Where does that kind of come in? Because now you're really OT, which has been very purpose-built kind of devices in the old IoT world. As the new IoT and the edge develop, you're going to need to have intelligence. You're going to be data-driven. There is an open source impact, Ki. So, how, if I'm a senior executive, how do I get my arms around this? I really need to think this through because the security risks alone could be more penetration areas, more surface area. >> Right. That's a great question. And let me just address kind of the value to the clients and the end users in the digital battlefield, for our warriors, to increase survivability and lethality. At the end of the day, from a mission perspective, we know, we believe that time's a weapon. So reducing any latency in that kind of observe, orient, decide, act OODA loop is value to the war fighter.
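One concrete reading of Pisano's "systems that can easily interface with each other to exchange data" is a shared message envelope that every producer validates against before publishing. The field names below are hypothetical, purely to show the pattern; they are not an actual DoD or Booz Allen schema.

```python
import json

# hypothetical shared envelope: every sensor report must carry these fields
REQUIRED_FIELDS = {"sensor_id", "timestamp", "kind", "payload"}

def to_envelope(raw: dict) -> str:
    """Reject reports missing the shared fields; emit canonical JSON for the rest."""
    missing = REQUIRED_FIELDS - raw.keys()
    if missing:
        raise ValueError(f"report missing fields: {sorted(missing)}")
    # sorted keys give every producer a byte-identical encoding of the same report
    return json.dumps({k: raw[k] for k in REQUIRED_FIELDS}, sort_keys=True)
```

Because the encoding is canonical, two independently written producers emit identical bytes for the same report, which is what lets downstream consumers, and any signatures computed over messages, interoperate.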
In terms of your question on how to think about this, John, you're spot on. I mean, as I've mentioned before, there are various different challenges, one being the cyber aspect of it. We are absolutely going to be increasing our attack surface when you think about putting processing on edge devices. There are other factors too, non-technical, that we've been thinking about as we've tried to kind of engender and kind of move to this kind of edge open ecosystem where we can kind of plug and play, reuse, all kind of taking the same concepts of the open-source community and open architectures. But other things that we've considered, one, workforce. As you mentioned before, when you think about these embedded systems and so forth, there aren't that many embedded engineers out there. But there is a trained workforce of digital and software engineers. So how do we actually create an abstraction layer so that we can leverage that workforce and not be limited by some of the constraints of the embedded engineers out there? The other thing is, in talking with several colleagues, clients, partners, what people aren't thinking about is actually, when you start putting software on these edge devices in the billions, the total cost of ownership. How do you maintain an enterprise that potentially consists of billions of devices? So extending the standard kind of DevSecOps that we move to automate CI/CD to a cloud, how do we move it from cloud to jet? That's kind of what we say. How do we move DevSecOps to automate secure containers all the way to the edge devices, to mitigate some of those total cost of ownership challenges? >> It's interesting, as you have software defined, this embedded system discussion is hugely relevant and important because when you have software defined, you've got to be faster in the deployment of these devices. You need security, 'cause remember, supply chain on the hardware side and software in that too. >> Absolutely.
>> So if you're going to have a serviceability model where you have to shift left, as they say, you've got to be at the point of CI/CD flows, you need to be having security at the time of coding. So all these paradigms are new in Day-2 operations. I call it Day-0 operations 'cause it should be in every day too. >> Yep. Absolutely. >> But you've got to service these things. So software supply chain becomes a very interesting conversation. It's a new one that we're having on theCUBE and in the industry. Software supply chain is a hugely relevant, important topic, because now you've got to interface it, not just with other software, but hardware. How do you service devices in space? You can't send a break/fix person into space. (chuckles) Maybe you will soon, but again, this brings up a whole set of issues. >> No, so I think it's certainly, I don't think anyone has the answers. We sure don't have all the answers, but we're very optimistic. If you take a look at what's going on within the U.S. Air Force and what the Chief Software Officer Nic Chaillan and his team are doing, and we're a supporter of this and a plankowner of Platform One. They were ahead of the curve in kind of commoditizing some of these DevSecOps principles in partnership with the DoD CIO and that shift-left concept. They've got a certified and accredited platform that provides that DevSecOps. They have an entire repository in the Iron Bank that allows for hardened containers and reciprocity. All those things are value to the mission and around the edge, because those are all accelerators. I think there's an opportunity to leverage industry kind of best practices as well and patterns there. You kind of touched upon this, John, but these devices honestly just become firmware. If the devices themselves just become firmware, you can just put over-the-wire updates onto them. So I'm optimistic. I think all the piece parts are taking place across industry and in the government.
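The "devices just become firmware" model Lee describes only holds if every over-the-wire update is authenticated before it is applied. A real fleet would use asymmetric signatures and signed metadata, in the spirit of the hardened-container pipelines mentioned above; the shared-key HMAC below is only a minimal sketch of the gate itself, with illustrative names.

```python
import hashlib
import hmac

def verify_update(blob: bytes, signature: str, key: bytes) -> bool:
    """Accept an over-the-wire update only if its HMAC-SHA256 tag matches the device key."""
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    # constant-time comparison so the check doesn't leak how much of the tag matched
    return hmac.compare_digest(expected, signature)
```

On the device, this check runs before anything is flashed: a tampered blob, or a tag minted with the wrong key, is rejected outright.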
And I think we're primed to move into this next evolution. >> Yeah. And it's also some collaboration. What I like about this, and why I'm bringing up the open source angle, and I think this is where the major focus will shift to, and I want to get your reaction to it, is that open source is seeing a lot more collaboration. You mentioned some of the embedded devices. Some people are saying this is the weakest link in the supply chain, and it can be shored up pretty quickly. But there's other data, other collective intelligence, that you can get from sharing data, for instance, which hasn't really been a best practice in the cybersecurity industry. So now open source, it's all been about sharing, right? So you've got the confluence of these worlds colliding, all aspects of culture and Dev and Sec and Ops and engineering all coming together. John, what's your reaction to that? Because this is a big topic. >> Yeah, so it's providing a level of transparency that historically we've not seen, right? So in that community, having those pipelines and the results of what's coming out of them, it's allowing anyone in that life cycle or that supply chain to look at it, see the state of it, and make a decision: is this a risk I'm willing to take or not? Or am I willing to invest and personally contribute back to the community to address it, because it's important to me and it's likely going to be important to some of the others that are using it? So I think it's critical, and it's enabling that acceleration and shift that I talked about. Now that everybody can see it, look inside of it, understand the state of it, and contribute to it, it's allowing us to break down some of the barriers that Ki talked about. And it reinforces the excitement that we're seeing now. That community is enabling us to move faster and do things that maybe historically we've not been able to do. >> Ki, I'd love to get your thoughts.
You mentioned the battlefield, and I've been covering a lot of the tactical edge around the DoD's work. You mentioned the military on the Air Force side; Platform One, I believe, came from the Air Force work that they've done, all cloud native kinds of directions. But when you talk about a battlefield, you talk about connectivity. I mean, who controls the DNS in Taiwan, or who controls the DNS in Korea? I mean, we have to deploy, you've got to stand up infrastructure. How about agility? I mean, tactical command and control operations, this has got to be really well done. So this is not a trivial thing. >> No. >> How are you seeing this translate into the edge innovation area? (laughs) >> It's certainly not a trivial thing, but I think, again, I'm encouraged by how government and industry are partnering up. There's a vision set around this joint all-domain command and control, JADC2, and all the services are getting behind that and looking into that, and this vision of an internet of military things. And I think the key thing there, John, as you mentioned, is that it's not just the connectivity of the sensors, which requires the transport again, but also that they have to be interoperable. You can have a bunch of sensors and platforms out there, and they may be connected, but if they can't speak to one another in a common language, that kind of defeats the purpose and the mission value of the sensor-to-shooter paradigm that we've been striving for for ages. So you're right on. I mean, this is not a trivial thing, but I think over history we've learned quite a bit. Technology and innovation are happening at just an amazing rate, where things are coming out in months as opposed to decades as before. I agree, not trivial, but again, I think all the piece parts are in place or being put into place. >> I think you mentioned earlier the personnel, the people, the engineers that are out there: not enough of them, but more of them coming in.
I think now the appetite and the provocative nature of this shift in tech is going to attract a lot of people, because the old adage is that hard problems attract great people. You've got new engineering, SRE-like scale engineering. You have software development that's changing, becoming much more robust and more science-driven. You don't have to be just a coder as a software engineer. You can come at it from any angle. So there are a lot more opportunities from a personnel standpoint now to attract great people, and there are real hard problems to solve, not just security. >> Absolutely. Definitely. I agree with that 100%. I would also contend that it's an opportunity for innovators. We've been thinking about this for some time, and we think there's absolute value in the various use cases that we've identified: digital battlefield, force protection, disaster recovery, and so forth. But there are use cases that we probably haven't even thought about, even from a commercial perspective. So I think there's going to be an opportunity, just like the internet back in the mid '90s, for us to innovate based on this new kind of edge environment. >> It's a revolution. New leadership, new brands are going to emerge, new paradigms, new workflows, new operations, clearly great stuff. I want to thank you guys for coming on. I also want to thank Rancher Labs for sponsoring this conversation. Without their support, we wouldn't be here. And now they've been acquired by SUSE. We covered their event with theCUBE virtual last year. What's the connection with those guys? Can you take a minute to explain the relationship with SUSE and Rancher? >> Yeah. So it's actually fortuitous, and I think we just got lucky. There are two overall aspects to it. First of all, we partner on the Platform One basic ordering agreement. So just there, we had a common mentality of DevSecOps.
And so there was a good partnership there. But then, when we thought about how we're engaging from an edge perspective, there's K3s, right? I mean, they're a leader from a container perspective obviously, but the fact that they are innovators around K3s, reducing that software footprint, which is required on these edge devices, means we kind of got a twofer in that partnership. >> John, any comment on your end? >> Yeah, I would just amplify the K3s aspect and leveraging the containers. A lot of what we've seen success in, especially on that tactical edge around enabling capabilities: containers and the portability they provide make it very easy for us to interface and integrate a lot of different sensors to close the OODA loop for whoever is wearing or operating the piece of equipment that the software is running on. >> Awesome. I'd love to continue the conversation on space and the edge; it's a super great conversation to have you guys on. Really appreciate it. I do want to ask you guys about the innovation and the opportunities of this new shift that's happening, as the next big thing is coming quickly, and it's here upon us, and that's cloud. I call it cloud 2.0: the cloud scale, modern software development environment, edge with 5G changing the game. Ki, I completely agree with you. And I think this is where people are focusing their attention, from startups to companies that are transforming and re-pivoting or refactoring their existing assets to be positioned. And you're starting to see clear winners and losers. There's a pattern emerging. You've got to be in the cloud, you've got to be leveraging data, you've got to be horizontally scalable, but you've got to have AI and machine learning in there, with modern software practices that are secure. That's the playbook. Some people are making it. Some people are not getting there.
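John Pisano's point above about containers closing the OODA loop on a worn or operated piece of equipment is essentially a tight observe-orient-decide-act cycle running locally on the device. A toy sketch of that loop, with the sensor name, readings, and threshold entirely made up for illustration:

```python
from dataclasses import dataclass

# Toy OODA loop for an edge node: observe a sensor, orient against a
# threshold, decide, and act locally, with no round trip to a cloud.
# The sensor id and the 50.0 threshold are invented for this sketch.

@dataclass
class Observation:
    sensor_id: str
    value: float

def orient(obs: Observation, threshold: float) -> str:
    """Orient: classify the raw observation against known context."""
    return "anomaly" if obs.value > threshold else "normal"

def decide(assessment: str) -> str:
    """Decide: pick the action the operator should see."""
    return "alert-operator" if assessment == "anomaly" else "continue"

def ooda_step(obs: Observation, threshold: float = 50.0) -> str:
    """One full observe-orient-decide-act pass."""
    return decide(orient(obs, threshold))

print(ooda_step(Observation("imu-1", 72.5)))  # alert-operator
print(ooda_step(Observation("imu-1", 12.0)))  # continue
```

Packaging a loop like this in a small container is what makes it portable across heterogeneous edge hardware; the lightweight-Kubernetes point is that the orchestration footprint stays small enough to ride along with it.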
So I'd ask you guys: as telcos become super important, and the ability to be a telco now, well, we just mentioned standing up a tactical edge, for instance. Launching a satellite: for a couple of hundred K, you can launch a CubeSat. That could be good and bad. So the telco business is changing radically. Telco cloud is emerging as an edge phenomenon with 5G, certainly with business and commercial benefits more than consumer. How do you guys see the innovation and disruption happening with telco? >> As we think through cloud to edge, one thing that we realized, because our definition of edge, John, was actually at the point of data collection, on the sensors themselves. Others' definition of edge is a little bit further back, what we call the edge of the IT enterprise. But as we looked at this, we realized that you need this multi-echelon environment, from your cloud to your tactical clouds, where you can do some processing, and then out to the edge devices themselves. Really, at the end of the day, it's all about the data, right? I mean, everything we're talking about, it's still all about the data. The AI needs the data, the telco is transporting the data. And so if you think about it from a data perspective in relationship to the telcos, one, edge will actually enable a very different, distributed paradigm for data processing. So, hey, instead of bringing the data to some central cloud, which takes bandwidth on your telcos, push the processing to the data. Mitigate what's actually being sent over those telco lines to increase their efficiency. So I think at the end of the day, the telcos are going to have a pretty big component in this, even from space down to ground station and how that works. So the network of these telcos, I think, is just going to expand. >> John, what's your perspective? I mean, startups are coming out. The scalability, the speed of innovation, is a big factor.
The old telco days meant, I mean, months and years before new towers went up and you had a backbone. It was kind of a slow, glacial pace. Now it's under siege with rapid innovation. >> Yeah, so I definitely echo the sentiments Ki shared, but I would also say, if we go back and think about the digital battle space and what we've talked about, faster speeds being available in places they've not been before is great. However, when you think about facing an adversary that's a near-peer threat, the first thing they're going to do is make it contested, congested, and you have to be able to survive. While yes, the pace of innovation is absolutely pushing comms to places we've not had them before, we have to be mindful not to get complacent and over-rely on them, assuming they'll always be there. 'Cause I know from my experience wearing the uniform, if I'm up against an adversary, the first thing I'm going to do is whatever I can to disrupt your ability to communicate. So how do you take it down to that lowest level and still keep that squad, that platoon, whatever that structure is, survivable and lethal? That's something, as we look at the innovations, that we need to be mindful of. So when I talk about, how do you architect it? What services do you use? Those are all things that you have to think about. What if I lose it at this echelon? How do I continue the mission? >> Yeah, it's interesting. And if you look at how companies have been procuring and consuming technology, Ki, it's been siloed. "Okay, we've got a workplace workforce project, and we have the tactical edge, and we have the siloed IT solution," when really it's work and play, and the work here, in John's example, is the warfighter. And so his concern is safety, his life and protection. >> Yeah. >> The other department has to manage the comms, (laughs) and so they have to have countermeasures and contingencies ready to go.
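The survivability concern John Pisano raises above, keeping the squad effective when the link is jammed, maps onto a familiar software pattern: process locally, queue the results, and forward them when connectivity returns. A minimal store-and-forward sketch, where the summarization step stands in for whatever edge processing keeps raw data off the constrained link (the class and field names are illustrative, not any real system's API):

```python
from collections import deque

class StoreAndForward:
    """Queue summaries locally while the uplink is down; flush on restore.

    Summarizing before sending also echoes the earlier point about pushing
    processing to the data instead of shipping raw samples over the link.
    """
    def __init__(self):
        self.queue = deque()   # backlog held while disconnected
        self.sent = []         # messages that made it over the link
        self.link_up = False

    def summarize(self, readings):
        # Ship an aggregate, not the raw samples.
        return {"count": len(readings), "mean": sum(readings) / len(readings)}

    def report(self, readings):
        msg = self.summarize(readings)
        if self.link_up:
            self.sent.append(msg)
        else:
            self.queue.append(msg)  # degrade gracefully, keep operating

    def link_restored(self):
        self.link_up = True
        while self.queue:
            self.sent.append(self.queue.popleft())  # drain the backlog

node = StoreAndForward()
node.report([1.0, 2.0, 3.0])  # link down: queued locally
node.report([4.0, 6.0])       # link down: queued locally
node.link_restored()          # backlog drains in order
print(len(node.sent))         # 2
print(node.sent[0]["mean"])   # 2.0
```

The design choice worth noting is that the node never blocks on the link: it keeps observing and deciding locally, which is the software analogue of the squad staying survivable and lethal while comms are contested.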
So all of this integrates now. It's not one department; it's all together. >> Yeah. John, I love what you just said. I mean, we have to get away from this siloed thinking, not only within a single organization, but across the enterprise. From a digital battlefield perspective, it's a joint fight, so even across this enterprise of enterprises. So I think you're spot on. We have to look horizontally. We have to integrate, we have to interoperate, and by doing that, that's also where the innovation is going to be accelerated, by not reinventing the wheel. >> Yeah, and I think the infrastructure edge is so key. It's going to be very interesting to see how the existing incumbents handle themselves. Obviously the towers are important. 5G, obviously, means more deployments, not as centralized in terms of the spectrum; it's more dense. It's going to create more connectivity options. How do you guys see that impacting things? Because there's certainly more gear, obviously not just the centralized tower from a backhaul standpoint, but now at the edge, the radios themselves, the wireless transit is key. That's the real edge here. How do you guys see that evolving? >> We're seeing a lot of innovations actually through small companies who are really focused on very specific niche problems. I think it's a great starting point, because what they're doing is showing the art of the possible. Because again, we're in a different environment now. There are different rules. There are different capabilities. But then we're also seeing, as you mentioned earlier on, some of the larger companies, the Amazons, the Microsofts, investing as well. So I think the unconstrained, art-of-the-possible innovation these small companies are driving, supported by the maturity and the heft of these large companies who are building out hardened capabilities, is going to converge at some point.
And that's where I think we're going to get further innovation. >> Well, I really appreciate you guys taking the time. Final question for you: as people are watching this, a lot of smart executives and teams are coming together to put the battle plans together for their companies as they transition from the old way to this new way, which is clearly about cloud scale and the role of data. We hit on all the key points here, I think. As they start to think about architecture and how they deploy their resources, this becomes the new boardroom conversation, one that trickles down and includes everyone, including the developers. The developers are now going to be on the front lines, and mid-level managers are going to be integrated in as well. It's a group conversation. What advice would you give to folks who are in this mode of planning architecture, trying to be positioned to come out of this pandemic with a massive growth opportunity and to be on the right side of history? >> It's such a great question. So I think you touched upon it. One, take the holistic approach. You mentioned architectures a couple of times, and I think that's critical: understanding how your edge architectures will connect with your cloud architecture, so that they're not disjointed, they're not siloed. They're interoperable, they integrate. So you're taking that enterprise approach. I think the second thing is, be patient. It took us some time; we've been looking at this for about three years now, and we were very intentional in assessing the landscape, how people were talking about edge, and pulling that all together. But it took us some time to figure out, hey, what are the use cases? How can we actually apply this and get some ROI and value out of it for our clients? So be a little bit patient in thinking through how you can leverage this and potentially be a disruptor.
>> John, your thoughts on advice for people watching as they try to put the right plans together to be positioned and not foreclose any future value? >> Yeah, absolutely. So in addition to the points that Ki raised, I would, number one, amplify the fact that you're going to have a hybrid environment of legacy and modern capabilities. And in addition to thinking about open architectures and whatnot, think about your culture, your people, your processes, your techniques, and your governance. How do you make decisions about when it needs to be closed versus open? Where do you invest in the workforce? What decisions are you going to make in your architecture that drive that hybrid world you're going to live in? With all those recipes, patience, openness, all of that, I think we often overlook the cultural, people aspect of upskilling. This is a very different way of thinking about modern software delivery. How do you go through this lifecycle? How is security embedded? So making sure that's part of that boardroom conversation, I think, is key. >> John Pisano, Principal at Booz Allen Digital Cloud Solutions, thanks for sharing that great insight. Ki Lee, Vice President at Booz Allen Digital Business. Gentlemen, great conversation. Thanks for that insight. And I think people watching are probably going to learn a lot, from how to evaluate startups to how to put their architecture together. So I really appreciate the insight and commentary. >> Thank you. >> Thank you, John. >> Okay. I'm John Furrier. This is theCUBE Conversation. Thanks for watching. (upbeat music)
Paul Perez, Dell Technologies and Kit Colbert, VMware | Dell Technologies World 2020
>> Narrator: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World Digital Experience. Brought to you by Dell Technologies. >> Hey, welcome back, everybody. Jeffrey here with theCUBE, coming to you from our Palo Alto studios with continuing coverage of Dell Technologies World 2020, the Digital Experience. We've been covering this for over 10 years. It's virtual this year, but we still have a lot of great content, a lot of great announcements, and a lot of technology that's being released and talked about. So we're excited. We're going to dig a little deep with our next two guests. First of all, we have Paul Perez. He is the SVP and CTO of the infrastructure solutions group for Dell Technologies. Paul, great to see you. Where are you coming in from today? >> Austin, Texas. >> Austin, Texas. Awesome. And joining him, returning to theCUBE many times, Kit Colbert. He is the Vice President and CTO of VMware Cloud for VMware. Kit, great to see you as well. Where are you joining us from? >> Yeah, thanks for having me again. I'm here in San Francisco. >> Awesome. So let's jump into it and talk about Project Monterey. You know, it's funny, I was at Intel back in the day, and all of our project code names used to get out, and they became like the product names. It's funny how these little internal project names get a life of their own, and this is a big one. And, you know, we had Pat Gelsinger on a few weeks back at VMware talking about how significant this is, and kind of this evolution within VMware cloud development. And, you know, it's kind of past Kubernetes, past Tanzu, past Project Pacific, and now we're into Project Monterey. So first off, let's start with Kit. Give us kind of the basic overview: what is Project Monterey? >> Yep. Yeah, well, you're absolutely right. What we did last year, we announced Project Pacific, which was really a fundamental rethinking of VMware Cloud Foundation with Kubernetes built in, right?
Kubernetes is still a core part of the architecture, and the idea there was really to better support modern applications, to enable developers and IT operations to come together and work collaboratively toward modernizing a company's application fleet. And as you look at companies starting to be successful, they're starting to run these modern applications, and what you found is that the hardware architecture itself needed to evolve, needed to update, to support all the new requirements brought on by these modern apps. And so when you're looking at Project Monterey, it's exactly that: a rethinking of the VMware Cloud Foundation underlying hardware architecture. Project Pacific is really kind of the top half, if you will: the Kubernetes consumption experience, great for applications. Project Monterey comes along as the second step in that journey, really being the bottom half, fundamentally rethinking the hardware architecture and leveraging SmartNIC technology to do that. >> It's pretty interesting, Paul. You know, there's a great shift in this whole move from infrastructure driving applications to applications driving infrastructure. And then we're seeing, obviously, the big move with big data, and again, as Pat talked about in his interview with NVIDIA, being at the right time, at the right place, with the right technology, and this groundswell of GPU, now DPU, helping to move those workloads beyond just where the CPU used to do all the work. This takes it another level. You guys are the hardware guys and the solutions guys. As you look at this continuing evolution, both of workloads and of their infrastructure, how does this fit in? >> Yeah, well, how this all fits in is that modern applications and modern workloads require a modern infrastructure, right? And Kit was talking about the infrastructure overlay that VMware is awesome at. I was coming at this from the emerging data-centric workloads and some of the implications of those, including more silicon diversity than has ever been used for computing, and the need for disaggregation, to be able to combine resources together as opposed to trying to shoehorn something into a mechanical chassis. And if you do disaggregate, you have to be able to compose on demand. And when we started comparing notes, we realized that our trajectories were converging, and we started to team up and partner. >> So it's interesting, because part of the composable philosophy, if you will, is to break the components of compute, storage, and networking down into as small pieces as possible, and then you can assemble the right amount when you need it to attack a particular problem. But what you're talking about is a whole different level of bringing the right hardware to bear for the solution. When you talk about SmartNICs, and you talk about GPUs and DPUs, data processing units, and even FPGAs, you're now starting to offload a lot of work from the core CPU to some of these more appropriate devices. That said, how do people make sure that the right application ends up on the right infrastructure, using more of a Monterey-based solution versus a more traditional one, depending on the workload? How is that all going to get sorted out and routed within the actual cloud infrastructure itself? That's probably back to you, Kit. >> Yeah, sure. So I think it's important to understand what a SmartNIC is and how it works in order to answer that question, because what we're really doing, to jump right to it, is giving an API into the infrastructure. And this is how we're able to do all the things that you just mentioned. But what is a SmartNIC?
Well, a SmartNIC is essentially a NIC with a general-purpose CPU on it, really a whole CPU complex, in fact kind of a whole server system right there on the NIC. And that enables a bunch of great things. So first of all, to your point, we can do a lot of offload. We can actually run ESXi on that NIC. We can take a lot of the functionality that we were doing before on the main server CPU, things like network virtualization, storage virtualization, and security functionality, and move all of that off onto the NIC. And it makes a lot of sense, because really what we're doing with all those things is looking at different sorts of IO data paths: as the network traffic comes through, doing automatic load balancing, firewalling for security, delivering storage, perhaps remotely. And so the NIC is actually the perfect place to put all of these functionalities, right? You not only move them off the core server CPU, but you get a lot better performance, 'cause you're now right there on the data path. So I think that's the first really key point: you get that offload. But then, once you have all of that functionality there, you can start doing some really amazing things, and this ability to expose additional virtual devices onto the PCI bus is another great capability of a SmartNIC. So when you plug it in physically into the motherboard, it's a NIC, right? You can see that, and when it starts up, it looks like a NIC to the motherboard, to the system. But then, via software, you can have it expose additional devices. It could look like a storage controller, or it could look like an FPGA, really any sort of device. And you can do that not only for the local machine where it's plugged in, but potentially for remote machines as well, with the right sorts of interconnects. So what this creates is a whole new sort of cluster architecture.
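Kit's description of exposing local and remote devices amounts to an inventory-and-composition problem: a workload asks for devices, and the cluster satisfies the request locally where it can and remotes the rest. A toy composer illustrating just that idea (the host names, device types, and flat inventory are all invented; the real mechanism lives in the SmartNIC data path and the scheduler, not in Python):

```python
# Toy model of disaggregation: hosts advertise devices, and a composer
# satisfies a workload's needs from anywhere in the cluster, "remoting"
# a device when it isn't local. All names here are made up.

inventory = {
    "host-a": ["gpu"],
    "host-b": ["fpga"],
    "host-c": [],
}

def compose(run_on: str, needs: list[str]) -> dict[str, str]:
    """Map each required device to a providing host, preferring local."""
    placement = {}
    for dev in needs:
        if dev in inventory[run_on]:
            placement[dev] = run_on  # local attach, no fabric hop
        else:
            provider = next(
                (h for h, devs in inventory.items() if dev in devs), None)
            if provider is None:
                raise LookupError(f"no {dev} anywhere in the cluster")
            placement[dev] = provider  # exposed remotely over the fabric
    return placement

print(compose("host-c", ["gpu", "fpga"]))
# {'gpu': 'host-a', 'fpga': 'host-b'}
```

The sketch shows why placement matters less once devices can be remoted: a host with no accelerators at all can still be handed a GPU and an FPGA, which is the dynamic composability being described.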
And that's why we're really so excited about it, because you get all these great benefits in terms of offload, performance improvement, and security improvement, but then you get this great ability to do very dynamic disaggregation and composability. >> So Kit, how much of it is the routing of the workload to the right place, right? The place that's got the right resources, say it's super data-intensive and wants a lot of GPU, versus actually better executing the operation once it gets to the place where it's going to run? >> Yeah, it's a bit of a combination, actually. So the powerful thing about it is that in the traditional world, wherever you run an application, that app can really only use the local devices on that server. Yes, there is some newer stuff like NVMe over fabric, where you can remote certain types of storage capabilities, but there's no real general-purpose solution to that yet. So, generally speaking, that application is limited to the local hardware devices. Well, the great part about what we're doing with Monterey and with the SmartNIC technology is that we can now dynamically remote, or expose, remote devices from other hosts. And so where that application runs matters a little bit less now, in the sense that we can give it the right sorts of hardware it needs in order to operate. You know, if you have, let's say, a few machines with FPGAs, normally anything that needed that FPGA had to run locally, but now it can actually run remotely, and you can better balance out things like compute requirements versus specialized accelerator requirements. And so what we're looking at, especially in the context of VMware Cloud Foundation, is bringing that all together. We can look at the scheduling and figure out the best host for it to run on based on all these considerations.
And if we're missing, let's say, a physical device that it needs, well, we can remote that and deal with that missing gap there. >> Right, right. That's great. Paul, I want to go back to you. You just talked about coming at this problem from a data-centric point of view, and you're running infrastructure, so you're the poor guy that's got to catch all of it, the giant exponential curves up and to the right on the data flow and the data quantity. How is that impacting the way you think about infrastructure, designing infrastructure, changing infrastructure, and kind of future-proofing infrastructure, when just around the corner is 5G and IoT and, oh, you ain't seen nothing yet in terms of the data flow? >> Yeah. So I come at this from two angles. One that we talked about briefly is the evolution of the workloads themselves. The other angle, which is just as important, is the operating model that customers want to evolve to. And in that context, we thought a lot about how cloud is an operating model, not necessarily a destination, right? So the way we laid it out, and what Kit was talking about, is that in data center computing, you have an operational control plane and a data plane. Where does the data plane run for the optimized solution? GPUs, FPGAs, offload engines. And the control plane can run on general-purpose CPUs. When I think about SmartNICs, they have Arm cores on board, so you can implement some data plane and some control plane there, and they can also be the gateway. 'Cause, you know, you've talked about composability: what has been done up until now is really an early sprint, right? We're carving software-defined infrastructure out of predefined hardware blocks. What we're talking about is making GPUs resident on a fabric, coherent memory resident on a fabric, NVMe over fabric, and being able to tile computing topologies on demand to realize an application's intent.
And we call that intent-based computing. >> Right. Well, just to follow up on that too: as "cloud is an attitude," or an operating model, or whatever you want to say, not necessarily a place or a thing, has taken hold, how has that had to shift your infrastructure approach? Because you've got to support, you know, old-school, good old data centers; you've got some stuff running on public clouds; and now you've got hybrid clouds and you have multi-clouds, right? So we know, you know, you're out in the field, that people have workloads running all over the place, but they've got to control it, and they've got compliance issues and a whole bunch of other stuff. So from your point of view, as you see the desire for more flexibility, the desire for more infrastructure-centric support for the workloads, and the increasing number of those that are more data-centric as we move to, hopefully, more data-driven decisions, how has it changed your strategy? And what does it mean to partner and have a real formal relationship with the folks over at VMware? >> Well, I think that regardless of how big a company is, it's always prudent, as I say when I approach my job, right: architecture is about balance and efficiency, and it's about reducing contention. And we like to leverage industry R&D, especially in cases where one plus one equals more than two, right? In the case of Project Monterey, for example, one of the collaboration areas is in improving the security model and being able to provide more air-gapped isolation, especially when you consider that enterprises want to behave as service providers to their companies. And therefore this is important.
And because of that, I think there's a lot that we can do between VMware and Dell, blending hardware and, for example, assets like NSX in a different way that will give customers higher scalability and performance and more control. You know, beyond VMware and Dell EMC, I think that we're partnering with, obviously, the SmartNIC vendors, because they're the smart interconnects and the gateway to those accelerators, but also with companies that are innovating in data center computing, for example, NVIDIA. >> Right, right. >> And I think that what we're seeing is, while, you know, NVIDIA has done an awesome job of targeting their capability at AI/ML types of workloads, what we've realized is that applications today depend on platform services, right? And up until recently, those platform services have been databases, messaging, APIs, Active Directory. Moving forward, I think that within five years, most applications will depend on some form of AI/ML service. So I can see an opportunity to go mainstream with this. >> Right, right. Well, it's great you bring up NVIDIA, and I'm just going to quote one of Pat's lines from his interview. He talked about Jensen from NVIDIA actually telling Pat, "Hey, Pat, I think you're thinking too small. Let's do the entire AI landscape together and make AI and enterprise-class workloads first-class citizens in Tanzu." So I love the fact that, you know, Pat's been around a long time, an industry veteran, but still kind of accepted the challenge from Jensen to really elevate AI and machine learning via GPUs to first-class-citizen status. And the other piece obviously coming up is edge. So, you know, it's a nice shot of adrenaline, and Kit, I wonder if you can share your thoughts on that, you know, kind of saying, hey, let's take it up a notch, a significant notch, by leveraging a whole other class of compute power within these solutions. >> Yeah.
So, I mean, I'll go real quick. I mean, it's funny, because not many people really ever challenge Pat by saying he doesn't think big enough, because usually he's blowing us away with what he wants to do next. But I think it's good, it's good to keep us on our toes and push us a bit, right? All of us within the industry. And so, a couple of things. You have to go back to your previous point around cloud as a model. I think that's exactly what we're doing: trying to bring cloud as a model even on-prem. And it's a lot of these kinds of core hardware architecture capabilities that enable that, the biggest one in my mind just being enabling an API into the hardware, so the applications can get what they need. And going back to Paul's point, this notion of these AI and ML services, you know, they have to be rooted in the hardware, right? We know that in order for them to be performant, for them to run, to support what our customers want to do, we need to have that deeply integrated into the hardware all the way up. But that also becomes a software problem. Once we've got the hardware solved, once we get that architecture locked in, how can we, as easily as possible, as seamlessly as possible, deliver all those great software capabilities? And so, you know, you look at what we've done with the NVIDIA partnership, things around the NVIDIA GPU Cloud, and really bringing that to bear. And so then you start having this really great full-stack integration, all the way from the hardware, a very powerful hardware architecture that, again, is driven by API, to the infrastructure software on top of that, and then all these great AI tools, toolchains, and capabilities with things like the NVIDIA NGC. So that's really, I think, where the vision is going. And we've got a lot of the basic parts there, but obviously a lot more work to do going forward.
>> I would say that, you know, initially we had a dream; we wanted this journey to happen very fast. And initially we're offloading infrastructure services, so there's no contention with applications, customers' full workload applications, and also enabling how productive it is to get at the data over time, having sufficient control over a wide area. There's an opportunity to do something like that to make sure that, if you think about the progression from bare metal to VMs (conversation fading), environments are way more dynamic and more distributed, right? And they expect hardware that can be just as dynamic and composable to suit their needs. And I think that's where we're headed. >> Right. So let me throw a monkey wrench in, in terms of security, right? So now this thing is much more flexible, it's much more software-defined. How is that changing the way you think about security, basic security, throughout the stack? I'll go to you first, Paul. >> Yeah, yeah. So it actually enables a lot of really powerful things. So first of all, from an architecture and implementation standpoint, you have to understand that we're really running two copies of ESXi on each physical server. Now we've got the one running on the x86 side, just like normal, and now we've got one running on the SmartNIC as well. And so, as I mentioned before, we can move a lot of those networking, security, et cetera, capabilities off to the SmartNIC. And so what this is going toward is what we call a zero-trust security architecture, this notion of having real defense in depth at many different layers and many different areas. While obviously the hypervisor and the virtualization layer provide a really strong level of security, even when we were doing it completely on the x86 side, now that we're running on a SmartNIC, that's additional defense in depth, because the x86 ESXi doesn't have direct access to the ESXi running on the SmartNIC.
So the ESXi running on the SmartNIC can be in this kind of more well-defended position. Moreover, now that we're running the security functionality directly in the data path, on the SmartNIC, we can do a lot more with it. We can run a lot deeper analysis; you can talk about AI and ML, bringing a lot of those capabilities to bear here to actually improve the security profile. And then finally, I'd say there's this notion of kind of distributed security as well: you don't want to have just these individual chokepoints on the physical network; you actually distribute the security policies and enforcement to everywhere a server is running, everywhere a SmartNIC is, and that's what we can do here. And so it really takes a lot of what we've been doing with things like NSX, but now connects it much more deeply into hardware, allowing for better performance and security. >> A common attack method is to intercept the boot of the physical server. And, you know, I'm actually very proud of our team, because the U.S. National Security Agency recently published a white paper on best practices for secure boot, and they take our implementation of secure boot as the reference standard. >> Right. Moving forward, imagine an environment where, even if you gain control of the server, that doesn't allow you to change the BIOS or update it. So we're moving the root of trust to be in that air-gapped domain that Kit talked about, and that gives us way more capability for zero trust across the operations, right? >> Right, right. Paul, I've got to ask you: I had Sam Burd on the other day, your peer who runs the PC group. >> I'm telling you, he is not a peer, he's a little bit higher up. >> Higher than you, okay. Well, I just promoted you, so that's okay. But it's really interesting.
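The distributed enforcement model Kit outlines above, one policy pushed to an enforcement point on every host rather than a few network chokepoints, can be sketched in miniature. The policy shape, the tiers, and the host names here are invented for illustration; this is not NSX's actual model.

```python
# Minimal sketch of distributed policy enforcement: a central controller
# pushes the same allow-list to every host's enforcement point (the role a
# SmartNIC plays in the architecture discussed above), and each host checks
# traffic locally with a default-deny rule. All names are hypothetical.

POLICY = [
    # (source app tier, destination app tier, port)
    ("web", "app", 8443),
    ("app", "db", 5432),
]

class EnforcementPoint:
    def __init__(self, host):
        self.host = host
        self.rules = set()

    def push(self, policy):
        self.rules = set(policy)  # controller distributes identical rules

    def allow(self, src, dst, port):
        return (src, dst, port) in self.rules  # default deny (zero trust)

hosts = [EnforcementPoint(h) for h in ("host-a", "host-b", "host-c")]
for ep in hosts:
    ep.push(POLICY)  # every host enforces, not just a perimeter device

# Each host makes the same decision locally, wherever the workload lands.
assert all(ep.allow("web", "app", 8443) for ep in hosts)
assert not any(ep.allow("web", "db", 5432) for ep in hosts)  # not whitelisted
print("policy enforced on", len(hosts), "hosts")
```

The design point is the one made in the conversation: because enforcement travels with every host, a flow is checked wherever it originates, instead of only when it happens to cross a physical firewall.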
'Cause we were talking about, it was literally like 10 years ago, that death-of-the-PC article that came out when Apple introduced the tablet, and, you know, he talked about what phenomenal devices PCs continue to be and how they evolve. And then it's just funny how that now dovetails with this whole edge conversation, when people don't necessarily think of a PC as a piece of the edge, but it is a great piece of the edge. So from an infrastructure point of view, you know, to have that kind of presence within the PCs, and kind of potentially that intelligence, and again, this kind of whole other layer of interaction with the users, and an opportunity to define how they work with applications and prioritize applications. I just wonder if you can share how nice it is to have that kind of thing in your back pocket, to know that you've got a whole other layer of visibility and connection with the users beyond just the infrastructure. >> So actually, within the company we've developed a framework that we call the four edges: multicloud, core data centers, enterprise edge and IoT, and then off-premises. It is a multicloud world, and within that framework we consider our Client Solutions Group products to be part of the edge, yes. And we see a lot of benefit. I'll give an example of a healthcare company that wants to develop real-time analytics, regardless of whether it runs on a laptop or maybe moves into a back-end data center, right? Whether it's at a hospital clinic or a patient's home, it gives us a broader innovation surface and gets us there a little sooner. Actually, a lot of people may not appreciate that the most important function within Centene, I consider, is the experience design team: being able to design user flows and customer experience, looking at ease of use as a variable. >> That's great, that's great. So, we're running out of time; I want to give you each the last word. You've both been in this business for a long time.
This is brand-new stuff, right? Containers aren't new, Kubernetes is still relatively new and exciting, Project Pacific was relatively new, and now Project Monterey. But you guys are, you know, multi-decade veterans in this thing. As you look forward, what does this moment represent compared to some of the other shifts that we've seen in IT? You know, generally, but, you know, in terms of the consumption of compute and this application-centric world that just continues to grow. I mean, software is eating everything; we know it, you guys live it every day. Where are we now? And, you know, what do you see, maybe, I don't want to go too far out, over the next couple of years within the Monterey framework? And then if you have something else generally, you can add that as well. Paul, why don't we start with you? >> Well, I think on a personal level, ingenuity aside, I have a long string of very successful endeavors in my career. When I came back a couple of years ago, one of the things that I told Jeff, our vice chairman, is that this is a big canvas, and I intend to paint my masterpiece, and I think, you know, Monterey, and what we're doing in support of Monterey, is also part of that. I think that you will see our initial approach focus on the core data center, and I can tell you that we know how to express it, and we know also how to express it even in a multicloud world. So I'm very excited, and I know that I'm going to be busy for the next few years. (giggling) >> And Kit, to you. >> Yeah. So, you know, it's funny, you talk to people about SmartNICs, and especially those folks that have been around for a while, what you hear is, hey, you know, people were talking about SmartNICs 10 years ago, 20 years ago, that sort of thing, and then they kind of died off. So what's different now? And I think the big difference now is a few things. You know, first of all, the core technology of the SmartNIC has dramatically improved.
We now have a powerful software infrastructure layer that can take advantage of it. And, you know, finally, applications have a really strong need for it, again, with all the things we've talked about, the need for offload. So I think there are some real, sort of, fundamental shifts that have happened over the past decade, let's say, that have driven the need for this. And so this is something that I believe strongly is here to last. You know, both ourselves at VMware as well as Dell are making a huge bet on this. And not only that, not only is it good for customers, it's actually good for all the operators as well. So whether this is part of VCF that we deliver to customers for them to operate themselves, just like they always have, or it's part of our own cloud solutions, things like VMware Cloud on Dell, this is going to be a core part of how we deliver our cloud services and infrastructure going forward. So we really do believe this is kind of a foundational transition that's taking place. And as we talked about, there is a ton of additional innovation that's going to come out of it. So I'm really, really excited for the next few years, because I think we're just at the start of a very long and very exciting journey. >> Awesome. Well, thank you both for spending some time with us and sharing the story, and congratulations; I'm sure a whole bunch of work from a whole bunch of people went into getting where you are now. And, as you said, Paul, the work has barely just begun. So thanks again. All right, he's Paul, he's Kit, I'm Jeff. You're watching theCUBE's continuing coverage of Dell Technology World 2020, the digital experience. Thanks for watching. We'll see you next time. (Upbeat music)
Travis Vigil, Dell EMC and Lee Caswell, VMware | VMworld 2020
>> Narrator: From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stuart Miniman, and this is theCUBE's 11th year of VMworld coverage. Here we are in 2020, of course, rather than being together at Moscone or at the Sands, we're coming to you in your place of work or home, wherever you're watching video. Happy to welcome back two of our longtime guests on the program. First we have Travis Vigil; he is the Senior Vice President of Product Management with Dell Technologies. And joining him is Lee Caswell, who's the Vice President of Product for the Storage and Availability Business Unit at VMware. Lee and Travis, thanks so much for joining us. >> Thank you, Stu, it's good to see you again. >> All right, so we love kind of the maturation of what's happened. I mentioned 11 years; I get to usually sit down and talk with both of you, and we talk about strategy, we talk about customers, and at the end of the day, we know things are changing. In 2020, things are changing more every day. But one of the big transitions here is talking about how applications are changing. In the old days it was, hey, I have an application, let me just stick it in a VM, and it's going to be good there forever. We know that today I need to be able to react fast, I need to move things forward, and that impacts what VMware and Dell are doing together. So, hey, Lee, maybe we come to you first: give the VMware perspective on that application change and what that means, and Travis, feel free to chime in when Lee's done. >> Sure. >> Yeah, thanks so much, Stu, and it's great to be back here on theCUBE. And VMworld is always a great opportunity to talk about how the industry is changing, what's really happening. And so one of the things that we're all finding is that the pace of application change is speeding up. And, you know, I mean, you think about infrastructure.
We want to think about how you can organize around the fastest-changing element. This is one of the things we kicked off with Project Pacific and our Tanzu portfolio a year ago, and you're starting to see all the products come roaring through right now as we're integrating Kubernetes, so that container-based applications can be managed, secured, and protected just the same way, with all the same tools that we have for our traditional VM applications. >> Yeah, it's an excellent point. I mean, we are seeing the adoption of modern applications in VMware environments just accelerate beyond belief, and we're getting increasing requests from our customers to protect and manage production workloads in Kubernetes environments. And with our PowerProtect Data Manager, we're actually announcing that we have full support for the Tanzu portfolio, so that includes TKG, TKGI, and Kubernetes clusters in vSphere. So we're really excited to be able to offer this capability to our joint customers. And I think one thing that we're seeing is that the roles in IT are oftentimes blending together. So one of the things we're excited about with our solution is that, with our direct data-protection integration in vSphere environments, it's actually the VI admin that can provision, monitor, manage, and protect the Kubernetes workloads, giving a unified experience and providing that peace of mind in this next-generation world. >> Yeah, Travis, I'm glad you brought up some of those changing roles. I mean, that was such a big theme for so many years, the virtualization admin taking on more responsibility, and Lee teed up the changing application. You've got other roles coming together: you've got the application development team, which oftentimes is disconnected from the infrastructure team. So, from either of you, what are you seeing from your customers? How are they sorting through that?
I need to be agile, I need to move faster, and that's not traditionally how the infrastructure team has worked. >> One of the things that we've been working on, for example, is how we've integrated SRM with vVols and PowerMax. And when you think about that, and we've talked for years, right, about vVols, for example, what we're responding to now is that customers are coming back and saying, listen, I have HCI, but I also have storage systems, and I need your help to be able to manage these with a consistent operating model and the same team. And that career path for the virtualization administrator just continues to grow. They're adding now cloud-native applications, Kubernetes-orchestrated applications, and being able to manage those across traditional storage and newer HCI systems. This is a really interesting blend of where the companies are working together to make sure that customer requests are being addressed really quickly. >> Yeah, it's a great example, Lee. I mean, if you think about three-tier architecture, and PowerMax being the flagship at the heart of a lot of data centers that have been in operation for decades, the fact that we're hearing from our customers, hey, can you take SRM and vVols, can you integrate them with PowerMax and SRDF, and be able to provide me a step along the way on my modernization journey, such that I can utilize what I've built up in my IT operations over the last couple of decades, along with the newer deployment models like hyperconverged infrastructure? We're seeing that kind of step forward, and a blurring of the lines in terms of roles, all over the place. I think another good example, Lee, is cloud-native app dev, right? Customers are looking for S3 object storage capability to provide a simple, dev-friendly way of developing applications in hybrid cloud environments.
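The "dev-friendly" part of S3-style object storage mentioned above is the simple bucket/key/object model that applications program against. A toy in-memory sketch of that model, just to make the verbs concrete (real clients such as boto3 speak HTTP to a real endpoint; the bucket and key names here are invented):

```python
# Toy in-memory object store illustrating the S3-style bucket/key model that
# dev teams program against. This only mimics the core verbs locally; a real
# S3-compatible store is reached over HTTP with a standard client.
class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, bucket):
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, body: bytes):
        self.buckets[bucket][key] = body

    def get_object(self, bucket, key) -> bytes:
        return self.buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        # Flat namespace; "directories" are just key prefixes.
        return sorted(k for k in self.buckets[bucket] if k.startswith(prefix))

store = ObjectStore()
store.create_bucket("app-artifacts")
store.put_object("app-artifacts", "builds/v1/app.bin", b"\x00\x01")
store.put_object("app-artifacts", "builds/v2/app.bin", b"\x00\x02")
print(store.list_objects("app-artifacts", prefix="builds/"))  # both keys listed
```

The appeal for developers is exactly this small surface area: create, put, get, list, with no volumes or filesystems to provision first.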
And that's why we're really happy that we're able to provide early access to what we refer to as ObjectScale, which works in conjunction with the vSAN Data Persistence platform to allow our customers to deliver modern applications, but at the same time use infrastructure that the IT organization is deploying for other standard applications. I think that's another good example. >> It's a good point. We had block through vSAN, of course, right? And we added file. What was missing? Well, object. (laughing) And so... >> Exactly. >> Working together on this persistence platform, we've now got a way to go and basically supply ObjectScale object storage that can be used for cloud-native development. And I think this is a good example, right? This isn't just one hand clapping, right? This is both companies working together to make sure that customers have a seamless experience. That's really important; it doesn't come for granted, right? I mean, it really takes co-engineering, joint testing and development, and going to market together between our companies. I've never seen it working better. >> Yeah. >> Yeah, go ahead, Stuart. >> You know, Travis, I was just saying, we saw how fast VMware went from announcing Project Pacific to the GA of the base solution, where you needed Cloud Foundation, to Update 1 already allowing everything to open up. That pace of innovation is going to be a little bit challenging to keep up with. We've been talking for years on theCUBE about how we went from the 18-month release cycle to now, when most things are on something like a six-week release cycle. So, walk us through any other pieces of the portfolio we need to understand that fit with Tanzu. And how do you move things along, and where are the customers with their adoption? Are they sitting there waiting for it, or is this something that is going to be a more traditional enterprise slow roll?
>> No, I think you hit it spot-on, Stu. The adoption and deployment of these new architectures are coming very, very quickly, right? Traditional IT is trying, and in many cases succeeding, to move to a more cloud-like delivery, a CI/CD approach to how they run their shops, and the speed of innovation and the dynamics of new technologies within the data center are just accelerating at a really fast pace. And in order to continue to keep up with these changes, and I'll reflect back a little bit on what Lee was talking about, it's about understanding where customers are going and jointly working together to target those pain points. And I'll give a very specific example, and then I think, maybe, Lee, we should start to talk a little bit about Monterey as well. A very specific example of joint innovation is this: as customers have deployed VMware more broadly, and they've put more mission-critical, large applications on VMs, there's been sort of this persistent issue that some of those VMs were just so large, or required such high availability, that they were what some IT professionals would refer to as unprotectable. And so we're actually demonstrating, with VMware, innovation that allows those large, mission-critical VMs, the ones that can't take downtime, or even a pause in availability or performance, to be backed up without impacting their performance. So that's a very specific thing we're doing, for a very specific pain point, but I think it's an example of us working together to target customer needs. And then, I think more broadly, there's a big trend in composability that Pat talked a little bit about this morning with Project Monterey. I'll let Lee kick it off, and then I'll talk a little bit about what we're doing to partner with VMware on this initiative. >> Yeah, well, great. I definitely want to hear about Monterey; obviously, edge computing has everybody excited.
Travis, we've been hearing from the Dell team the last couple of years how that strategy is maturing, some of the investment pieces that Dell's doing. So, Lee, we hear edge computing: what does that mean? VMware has got a strong telco play that we've watched for many years. So, just as you said, Project Pacific rolled out pretty fast; help us understand a bit more of this Monterey, and how fast it will turn into that cascade of products like the one we saw over the last year. >> Yeah, thanks. And it's exciting at VMware, right? We're willing to go and share projects; over time, projects become products, it's the way it works. And so the project is really a directional vision. Think about what we did with Project Pacific a year ago, Pacific being like going broad: the idea was that applications are changing, so we needed to go and basically integrate Kubernetes with vSphere, with our full VMware Cloud Foundation, and then basically simplify it for customer consumption, and we did that together with the Tanzu brand. Now, Project Monterey, if you think of the Monterey Canyon, is now going deep. And what it says is that not only does the software architecture have to change, but also the hardware. New hardware capabilities, particularly through the use of SmartNICs, are a new way for us to think about re-architecting how compute is optimized within a server, and then across clusters, and even across the hybrid cloud. And so Monterey will be a new way to look at how we efficiently offload CPUs and use these new SmartNIC offload engines, a way to think about where hypervisors run, and where, let's call it, the software-defined pieces run, whether it's storage or compute. And most important, probably, is security.
'Cause one of the things we're finding that new applications are demanding is encryption, for example, or distributed firewalls, thinking about, like, how do we do secure boot, or how do we think about air-gapping applications from the infrastructure? And so we're really thinking about how to re-architect the world of security, so that security is integrally distributed throughout the architecture. And so you'll be seeing, with Project Monterey, our ability to go and drive new products out of that, and we're working very closely, on an engineering-to-engineering level, with Dell Technologies to make sure this new technology becomes available for customers and fully integrated into VMware Cloud Foundation, so we have an easy way for customers to digest it. Which I think is the thing, Stuart, right now: there are a lot of new technologies coming so fast; really, the partnership means that we're able to consume those more quickly. >> Wonderful. Yeah, Monterey, so we're going to go deeper than the Grand Canyon is deep, but I guess we all need to breathe underwater too. So, Travis, as I mentioned, Dell's had, for a couple of years, some of these analyst sessions that I've had the opportunity to go through, and I've been watching that growth of the edge strategy. Obviously, Dell has everything from some of the hardened pieces on the consumer side through tying into broad ecosystems, so the software obviously is going to be a huge component of what edge is. We saw on the keynote stage NVIDIA, a big partnership there; they're obviously a hugely important partner for both Dell and VMware. So, Travis, from the Dell side, what does this vision of Monterey mean? >> It's extremely important; I'd say transformational, potentially, for IT going forward. And Lee did a really good job of describing the trends, whether that be cloud native, telco 5G, machine learning and data-centric applications, multicloud and hybrid cloud, and that security concern that Lee was talking about.
Those are real trends, and if we can offer infrastructure that is more composable into these disaggregated resources, across the edge, across the cloud, across the core, all software-defined and seamlessly managed, I mean, that's a powerful vision. And we're just really excited to be partnering with VMware, jointly engineering this future, focusing first on those SmartNICs that Lee was talking about, because you need that higher compute, you need that increased bandwidth, you need easier manageability of a distributed infrastructure, and you need that ability to provide easier and more distributed security. So lots more to come: we will be incorporating these technologies, specifically in the form of SmartNICs, into our HCI and our server portfolios. But, like Lee said, this is a trend that will move from initiative to project to products very quickly. >> Wonderful. Well, we've covered that breadth and that depth, as you said, Lee. I want to give you each the last word; you've both been in this business for a long time. What do you want people to take from VMworld 2020? Lee, we'll start with you, and then, Travis, you get the final word. >> Yeah, we're really looking at a changing world in terms of applications. And so, for customers around the globe, look for the partnerships that will bring those new capabilities and make them easy to go and deploy as fast as possible. We started off making sure that people weren't looking down at the infrastructure and started looking up at the apps. We're continuing that process with what we're doing around Tanzu, around our Kubernetes portfolio, and stay tuned, there'll be more to come, much more, as we work together on Project Monterey. Lots of exciting news, and glad that you were here at VMworld to go and see it all live. >> Yeah, I obviously agree with everything that Lee just said. I think, for me, this VMworld is just another step forward in a great partnership between Dell Technologies and VMware.
And I mentioned several things — all of the things that we're doing together. I forgot to mention, actually, that we're the first company to offer a certified solution to protect VMware Cloud Foundation. I use that specific example because, again: expect more firsts, expect more joint engineering and integrations. And I think the power of these two organizations coming together is what's going to be needed to help drive forward into this next generation of modern applications, dynamic workloads, and disaggregated resources. And so we're just really excited about the innovation, the ability to address customer issues, and the strong partnership that we have across Dell Technologies and VMware. >> Well, one of the measurements of success that we have today is how fast everyone can respond and move. Congratulations on all the progress you and your teams have made in the last year, and we absolutely look forward to hearing more about Project Monterey as it matures. Travis and Lee, thanks for joining us. >> Thanks to you. >> Thanks to you. >> All right, and stay tuned for more coverage of VMworld 2020. I'm Stuart Miniman, and as always, thank you for watching theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lee Caswell | PERSON | 0.99+ |
Steve | PERSON | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Travis | PERSON | 0.99+ |
Lee | PERSON | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
six week | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
18 month | QUANTITY | 0.99+ |
VMware Cloud Foundation | ORGANIZATION | 0.99+ |
11th year | QUANTITY | 0.99+ |
Stuart | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Tanzu | ORGANIZATION | 0.99+ |
both companies | QUANTITY | 0.99+ |
11 years | QUANTITY | 0.99+ |
Travis Vigil | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Project Pacific | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
Pacific | ORGANIZATION | 0.98+ |
Project Monterey | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
five native applications | QUANTITY | 0.98+ |
a year ago | DATE | 0.98+ |
Monterey Canyon | LOCATION | 0.97+ |
two organizations | QUANTITY | 0.97+ |
Project Pacific | ORGANIZATION | 0.97+ |
VMworld Cloud Foundations | ORGANIZATION | 0.96+ |
Monterey | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.96+ |
Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of HPE Discover Virtual Experience for 2020, getting to talk to HPE executives, their partners, and the ecosystem, where they are around the globe. This session we're going to be digging into artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from NVIDIA sitting in the window next to me: we have Paresh Kharya, he's director of product marketing, and sitting next to him in the virtual environment is Kevin Deierling, who is the senior vice president of marketing — as I mentioned, both with NVIDIA. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh, why don't you set the stage for us? AI is obviously one of those megatrends we talk about, but give us the state of where NVIDIA sits, where the market is, and where your customers are today as they think about AI. >> Yeah, so we are basically witnessing massive changes that are happening across every industry, and it's basically the confluence of three things. One is, of course, AI; the second is 5G and IoT; and the third is the ability to process all of the data that we have, which is now possible. For AI, we now see really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IoT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act — to make decisions in various industries. And finally, with all of the processing capabilities that we have today, at the data center and in the cloud, as well as at the edge, with the GPUs as well as the advanced networking that's available, we can now make sense of all of this data to help industrial transformation.
Yeah, Kevin, you know, it's interesting: when we look at some of these waves of technology, we say, "Okay, there's a lot of new pieces here" — you talk about 5G, it's the next generation — but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about what we've done for high performance computing for a long time; obviously Mellanox, where you came from through NVIDIA's acquisition, has a strong play in that environment. So maybe give us a little compare and contrast: what's the same and what's different about this highly distributed edge compute, AI, and IoT environment, versus what we were doing with HPC in the past? >> Yeah, so Mellanox has now been a part of NVIDIA for a little over a month, and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And what that means is that the scale and the type of problems that we're trying to solve are simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote, where he said that the new unit of computing is really the data center. So it's no longer the box that sits on your desk, or even in a rack — it's the entire data center, because that's the scale of the types of problems that we're solving. And so with this notion of scale-up and scale-out, the network becomes really, really critical, and we've been doing high-performance networking for a long time. When you move to the edge, instead of having a single data center with 10,000 computers, you have 10,000 data centers, each of which has a small number of servers processing all of the information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge, or you're doing massive HPC, scientific computing, or cloud computing.
And so we're excited to be part of bringing together the AI and the networking, because we're really optimizing at the data center scale, across the entire stack. >> All right, so it's interesting — you mentioned NVIDIA's CEO, Jensen. I believe, if I heard it right, he actually coined a term which I had not run across: the data processing unit, or DPU, in that data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPUs; when I think about GPUs, I obviously think of NVIDIA; there are TPUs in the cloud and everything we're doing. So what are DPUs? Is this just some new AI thing, or is this a new architectural model? >> Yeah, I think what Jensen highlighted is that there are three key elements of this accelerated, disaggregated infrastructure that the data center is becoming. There's the CPU, which is doing traditional single-threaded workloads; but for all of the accelerated workloads, you need the GPU, which does massive parallelism and deals with massive amounts of data. But to get that data into the GPU, and also into the CPU, you need really intelligent data processing, because of the scale and scope of GPUs and CPUs today — these are not single-core entities, these are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place; you need to do it securely; you need to do it virtualized; you need to do it with containers. And to do all of that, you need a programmable data processing unit. So we have something called BlueField, which combines our latest, greatest 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators — for security, for virtualization, for storage. And all of those things then feed these giant parallel engines, which are the GPUs, and of course the CPU, which really runs the workload at the application layer for the non-accelerated parts.
>> Great. So Paresh, Kevin talked about needing similar types of services wherever the data is. I was wondering if you could expand for us a little bit on the implications of AI at the edge. >> Sure, yeah. So AI is basically not just one workload: AI is many different types of models, and AI also means training as well as inference, which are very different workloads. For AI training, for example, we are seeing the models growing exponentially. Think of an AI model like a brain solving a particular use case: for simple use cases, like computer vision, we have models that are smaller, but advanced models, like natural language processing, require larger brains — larger models. So on one hand, we are seeing the size of AI models increasing tremendously, and in order to train these models you need to look at computing at the scale of the data center: many processors, many different servers working together to train a single model. On the other hand, because these AI models are so accurate and efficient today — from understanding languages, to speaking languages, to providing the right recommendations, whether it's for products, or for content you may want to consume, or advertisements, and so on — these applications are being powered by AI, and each application requires a small amount of acceleration, so you need the ability to scale out and support many different applications. So with our newly launched Ampere architecture, which Jensen announced just a couple of weeks ago in the virtual keynote, for the first time we are now able to provide both scale-up and scale-out — training, data analytics, as well as inference — on a single architecture, and that's very exciting. >> Yeah, and to add to that:
The other thing that's interesting, when you're talking about the edge and scale-out versus scale-up, is that the networking is critical for both of those. And there are a lot of different workloads, and as Paresh was describing, different workloads require different amounts of GPU, or storage, or networking. So part of the vision of the data center as the computer is that the DPU lets you scale everything independently: you disaggregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need, on the fly, to solve the problem you're solving right now. This new way of programming is programming the entire data center at once: a workload will go grab all of it, run for even just a few hundred milliseconds, and then come back down, and the data center recomposes itself. And to do that, you need this very highly efficient networking infrastructure. And the good news is, we're here at HPE Discover, and we've got a great partner in HPE. They have our M-series switches, which use the Mellanox hundred gig — and now even 200 and 400 gig — Ethernet switches; we have all of our adapters; and they have great platforms. The Apollo platform, for example, is great for HPC, and they have other great platforms that we're looking at with the new telco work that we're doing for 5G, and accelerating that. >> Yeah, and on the edge computing side, there's the Edgeline set of products, which are very interesting. The other aspect that I wanted to touch upon is the whole software stack that's needed for the edge. The edge is different in the sense that it's not centrally managed: the edge computing devices are distributed in remote locations, so managing the workflow of running and updating software on them is important, and needs to be done in a very secure manner. The second thing that's very different for the edge is that these devices are going to require connectivity.
As Kevin was pointing out the importance of networking: we also announced, a couple of weeks ago at our GTC, our EGX product, which combines the Mellanox NIC and our GPU into a single processor. The Mellanox NIC provides fast connectivity, security, and encryption and decryption capabilities; the GPU provides the acceleration to run the advanced AI models required for applications at the edge. >> Okay, and if I understood that right, you've got these throughout the HPE product line. HPE has a long history of making flexible configurations — I remember when they first came out with blade servers, it was different form factors, different connectivity options, and they pushed heavily into composable infrastructure. So it sounds like this is just extending what HPE has been doing for a couple of decades. >> Yeah, I think HPE is a great partner there, and with these new platforms — the EGX, for example, that was just announced — a great workload is 5G telco. So we'll be working with our friends at HPE to take that to market as well. And really, there are a lot of different workloads, and they've got a great portfolio of products across the spectrum, from regular servers in 1U and 2U, all the way up to their big Apollo platform. >> Well, I'm glad you brought up telco. I'm curious: are there any specific applications or workloads that are the low-hanging fruit, the first targets for AI acceleration? >> Yeah, so the 5G workload is just awesome. We introduced with EGX a new platform called Aerial, which is a programming framework, and there were lots of partners that were part of that, including folks like Ericsson.
And the idea there is that you have a software-defined, hardware-accelerated radio access network — a cloud RAN — and it really has all of the right attributes of the cloud. What's nice is that now you can change, on the fly, the algorithms you're using for the baseband codecs, without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side: we introduced a technology as part of EGX, in our ConnectX-6 Dx adapter, called 5T for 5G. One of the things that happens is that you need time-triggered transport technology for telco — that's the 5T for 5G — and the reason is that you're doing distributed baseband, distributed radio processing, and the timing between each of those server nodes needs to be super precise: 20 nanoseconds. That's something that simply can't be done in software, so we did it in hardware. Instead of having an expensive FPGA trying to synchronize all of these boxes together, we put it into our NIC, and now we put that into industry-standard servers — HPE has some fantastic servers — and then with the EGX platform, we can build a really scale-out, software-defined cloud RAN. >> Awesome. Paresh, anything else on the application side you'd like to add to what Kevin spoke about? >> Oh yeah — from an application perspective, every industry has applications that touch on edge. If you take a look at retail, for example, there's everything from supply chain, to inventory management, to keeping the right stock units on the shelves and making sure there is no slippage or shrinkage. Or telecom. Or healthcare, where we are looking at constantly monitoring patients and taking actions for the best outcomes. Or manufacturing, where we are looking to automate production, detecting failures much earlier in the production cycle, and so on. Every industry has different applications, but they all use AI.
They can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting: almost every time we've talked about AI, networking has come up. So, you know, Kevin, I think that teases out a little bit of why NVIDIA spent around $7 billion on the acquisition of Mellanox. And it wasn't only the Mellanox acquisition — there's Cumulus Networks, very well known in the networking space for a software-defined operating system for networking. So strategically, does this change the direction of NVIDIA? How should we be thinking about NVIDIA in the overall networking space? >> Yeah, I think the way to think about it is going back to the data center as the computer. If you're thinking about the data center as the computer, then networking becomes the backplane, if you will, of that data center computer, and having a high-performance network is really critical. Mellanox has been a leader there for 20 years now, with our InfiniBand and our Ethernet products. But beyond that, you need a programmatic interface, because one of the things that's really important in the cloud is that everything is software-defined and containerized now, and there is no better company in the world than Cumulus — really the pioneer in building Cumulus Linux, taking the Linux operating system and running it on multiple platforms. So not just hardware from Mellanox, but hardware from other people as well. That whole notion of an open networking platform is something we're committed to and need to support, and now you have a programmatic interface that you can drop containers on top of. Cumulus has also been the leader in Linux FRR — Free Range Routing — which is the core routing stack, and that really is at the heart of other open-source network operating systems like SONiC and DENT. So we see a lot of synergy here, along with all the analytics that Cumulus brings to bear with NetQ.
So it's really great that they're going to be part of the NVIDIA team. >> Excellent. Well, thank you both so much. I want to give you the final word: what should HPE customers and their ecosystem know about the NVIDIA and HPE partnership? >> Yeah, so I'll start. HPE has been a long-time partner and customer of ours. If you have accelerated workloads, you need to connect those together, and the HPE server portfolio is an ideal place: we can combine some of the work we're doing with our new Ampere GPUs and existing GPUs, and then also connect those together with the M-series, which are their Ethernet switches based on our Spectrum switch platforms. And then there are all the HPC-related activities on InfiniBand, where they're a great partner as well. So, pulling it all together: as the edge becomes more and more important, security becomes more and more important, and you have to go to a zero-trust model — if you plug in a camera that somebody has at the edge, even if it's on a car, you can't trust it. Everything has to be validated and authenticated, and all the data needs to be encrypted. And so they're going to be a great partner, because they've been a leader in building the most secure platforms in the world. >> Yeah, and on the data center and server portfolio side, we work very closely with HPE on various lines of products — really fantastic servers, from the Apollo line of scale-up servers, to the Synergy and ProLiant lines, to Edgeline for the edge, and on the supercomputing side with the Cray side of things. So we really work across the full spectrum of solutions with HPE. We also work on the software side, where a lot of these servers are also certified to run a full stack under a program that we call NGC-Ready, so customers get phenomenal value right off the bat: they're guaranteed to have accelerated workloads work well when they choose these servers.
>> Awesome, well, thank you both for giving us the updates, lots happening, obviously in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman and thank you for watching theCUBE. (bright upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kevin Deierling | PERSON | 0.99+ |
Kevin | PERSON | 0.99+ |
Paresh Kharya | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
200 gig | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
100 gig | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
10,000 computers | QUANTITY | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
200 | QUANTITY | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Paresh | PERSON | 0.99+ |
Cumulus | ORGANIZATION | 0.99+ |
Cumulus Networks | ORGANIZATION | 0.99+ |
Iraq | LOCATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
two guests | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
first time | QUANTITY | 0.99+ |
around $7 billion | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
each application | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
second | QUANTITY | 0.99+ |
20 nanosecond | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
NetQ | ORGANIZATION | 0.99+ |
400 gig | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
10,000 data centers | QUANTITY | 0.98+ |
second thing | QUANTITY | 0.98+ |
three key elements | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
thousands of cores | QUANTITY | 0.98+ |
three things | QUANTITY | 0.97+ |
Jensen | PERSON | 0.97+ |
Apollo | ORGANIZATION | 0.97+ |
Jensen | ORGANIZATION | 0.96+ |
single computer | QUANTITY | 0.96+ |
HPE Discover | ORGANIZATION | 0.95+ |
single model | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
hundred gig | QUANTITY | 0.94+ |
InfiniBand | ORGANIZATION | 0.94+ |
DENT | ORGANIZATION | 0.93+ |
GTC | EVENT | 0.93+ |
Nick Barcet, Red Hat | Red Hat Summit 2020
>> Hi, and welcome back to Red Hat Summit 2020. This is theCUBE, and I'm your host, Stu Miniman. We're covering so many topics at this event, which is happening globally: we're meeting our partners and the Red Hat executives where they are around the globe. And my guest right now is Nick Barcet, who's the senior director of technology strategy with Red Hat, and Nick is coming to us from the Bahamas, speaking to us from his boat. Nick, pleasure to see you, and thanks so much for joining us. >> Very nice to see you. Yeah, I'm a remote employee, and I enjoy that a lot. >> Absolutely. We've been talking with your team a lot; of course, many employees of Red Hat already were remote, but everyone now is working where they are. We're going to be talking about a topic which is even more about distributed solutions and where things live: we're going to talk about edge and 5G. Before we get into the topic, tell us a little bit about your background — how long you've been with Red Hat, and what your role is. >> So I joined Red Hat a little more than five years ago, after the acquisition of the company I was with, which was working on OpenStack — interesting technology. I've been in open source for the past 20-plus years, and I've worked with many distributions of Linux over the years, so I consider myself an open source veteran. >> Excellent. I remember that acquisition — we had theCUBE at the OpenStack Summit for many years, and knew the company before the acquisition. Let's talk about the topic, though. First of all, you talk about edge, and edge means different things to a lot of people, whether they're talking about it from a carrier perspective or from the IoT piece. Where does Red Hat fit into the whole notion of edge, and what pieces of the portfolio apply? >> Yeah.
So obviously, edge is about building an infrastructure that goes as far as possible, to be as close as possible to where people are either producing or consuming data. And building infrastructure has always been at the very heart of what Red Hat does; we've been growing that infrastructure capability over time. So today we feel the need to fulfill the requirements of those customers who want to extend their infrastructure to the edge. But when we say "the edge," we have to be conscious of what we're talking about: it's like the layers of an onion — the more you dig into it, the more layers you find, and the more particular cases you have. There is no single edge. >> Yeah, you're absolutely right. So, you know, back in the OpenStack days, we talked a lot about this, and about some of the barriers. I know I've spoken with Verizon, who's, you know, a customer of Red Hat's. Maybe start there and help us understand where we are with the solution, and talk about how 5G fits into it. Of course, everybody's talking about 5G — that will take time — but help us understand where we are today. >> So, obviously for us, the edge is just an extension of our open hybrid cloud. We have always been very vocal in saying that you need to be able to deploy the same workload in any place, and the edge is just an extension of this "anyplace." So the same strategy that we've been developing — first with OpenStack, then with OpenShift, making OpenShift both our development and our deployment platform for all types of workloads — applies here: having OpenShift now support not only container-based workloads but also virtualization-based workloads is exactly what we are doing at the edge. We want people to be able to deploy a single type of platform on various types of footprint, and manage it globally.
With complete consistency, there is no extra cost in maintaining those thousands — sometimes millions — of added locations on top of their existing infrastructure. Of course, in order to do that, we need to develop new tools to do the management, and to develop new AI and machine-learning technology to help people process not only the data coming from the platform, but also the management of the platform itself. We are reaching such scales that we wouldn't be able to do it without AI or ML in the platform itself. >> Yeah, absolutely — and of course, scale is a relative term. Verizon — I've talked with them a couple of times. My understanding is you've got news related to this with Verizon? >> Yeah. This week Verizon announced a reinforced partnership between our two companies to help them build their edge platform. Here we are talking about the first step in their edge platform, which is what we call the extension of the core: building small data centers that are going to be closer to the antennas. And here we are talking about scales that can comprise hundreds of data centers, each having 20 machines or more, to do all the processing of their future 5G network and beyond. 5G is one of the enablers of edge, but it's also the reason for telcos to start deploying their edge networks, because they have a requirement to put treatment of the information closer to where the 5G antennas are. And this is what we are developing. >> Alright. So, Nick, you talked a minute ago about OpenStack and OpenShift. Help our audience understand a little bit — we've talked with a lot of customers: you can have one without the other, or you can layer OpenShift on top of OpenStack. When it comes to the solution you're talking about with Verizon, or with other service providers out there, is it one, is it both? Help us understand.
So currently we offer a complete choice. We can do an edge platform with OpenStack — you've got multiple customers doing that around the world. We can build an edge platform with OpenShift on top of OpenStack. But if we look at the future as we are designing it, we are looking at enabling simplicity, and simplicity means deploying a single OpenShift onto bare metal, and having this bare-metal platform deal with both VMs and containers, so that you only have one API, you only have one management plane, you only have one thing to worry about. And since OpenShift embeds the operating system — RHEL CoreOS — there is extreme simplicity in the methodology for updating or upgrading. I think this is going to be a key point: making things simple, reducing the number of layers in your stack. >> All right, that's really intriguing, Nick. Help us understand a little bit about the ecosystem. Obviously, everything Red Hat does is open source — that's how Red Hat works. But how are you involved in the industry to help make sure that, as edge solutions roll out, customers have flexibility? >> So you have multiple types of partnership in this industry. You've got the partnerships that are built around community — we participate in numerous communities, like LF Edge at the Linux Foundation and many, many more, and this is where we are building the fundamental blocks of our future solutions. We also have partnerships with multiple vendors: every time you're dealing with a specific vertical, you will have a certain number of vendors that are going to be the ones enabling 80% of the applications being deployed, and that's the case for the edge too. And then you have the partnerships we make with our customers, because the best source of requirements is always our customers.
And that's something that we've now made a strong principle: always find early adopters when we are going to build a solution in a vertical sector — Verizon has been one of them for, what, a few years now — and then replicate that success with other customers in the same sector. We are reproducing this in the manufacturing sector and in many other verticals. >> Excellent. You talked earlier about the open hybrid cloud. Help us understand, Nick: edge and cloud — how do they actually go together? There was a whole line of "edge kills the cloud" articles that we've been seeing for a while, but we know everything in IT is always additive. How should customers really be thinking about how edge and cloud fit together? >> In our design, the cloud and the edge are the same thing. You address the edge, you address the cloud, you address your on-premise hardware the same way: you use the same API, and this driving API is the Kubernetes API, which we deliver through OpenShift. So what is the difference? The difference is going to be who owns the machines — whether it's a machine running in your cloud or a machine running in your private data center — and what network you're using. A lot of constraints are going to be a bit more complex when you are at the edge. For example, you are sometimes going to go through a satellite connection, with huge delays in communication. You're sometimes going to put machines in locations that are absolutely not secure, so you need security layers ensuring that nobody can tamper with these machines. But overall, once the deployment is done, people should consider their edge as part of their cloud, or vice versa. >> Yeah, Nick, you brought up a lot of good points there. Security, of course, is critical.
One piece that I want to get your thoughts on: we've spent a few years really looking at how data gets processed at the edge and what gets brought back to the core, and you talked about AI workloads. It's generally understood that training happens more at the core, and then the models get pushed out to the edge devices. What are you seeing from your customers, overall, when it comes to their data? >>From a technical perspective, data is the real motivation behind edge. We are generating so much data that we are not able to process it anymore in a central location, so we have to process this data locally, where it is generated, or as close as possible to where it is generated, before sending, let's say, a summary of this data, or alerts, or whatever the business process calls for, to the central operations. The use case that we are demonstrating this week, which you can watch through the demo booth or in one of the presentations, is the case of a manufacturer which is installing sensors on many of the machines producing its goods. When you have the right sensors, like a vibration sensor or a temperature sensor, you can very easily develop the knowledge that a machine is going to break in a short amount of time, so maybe you should start scheduling some preventive maintenance on it. You can do that by just reading the data and having humans interpret it, or you can do it a lot more efficiently by training a machine learning algorithm. This is what we are demonstrating: processing the data and sending alerts in real time when issues are discovered. All this, of course, needs to be done in a very scalable fashion; here we are talking about a use case where the customer may have 50 factories around the world. How do you update all these machine learning models in all the factories when the system has learned something new? So data, data processing, and now AI and big data are at the heart of all of the use cases we have discovered around all verticals for edge. And this is why we are now almost joining forces between the team working on AI, the team producing the Open Data Hub, and the teams working on our edge solution. >>Yeah, I'm really glad you brought up manufacturing as one of the verticals. One of the concerns and challenges we saw with IoT was organizational: specifically, if you look at manufacturing, there can be an IT and OT divide. I'm curious, as you see solutions roll out in your work, how customers are getting beyond those barriers, some of the traditional silos where there wasn't always collaboration. >>Well, it's always a problem. Every time you introduce a change, you have to manage this change, and every project deploying something new anywhere will fail if you do not account for the human factor; edge is no different in that. And when you're talking about the factory, if you're not directly talking with the people on the floor, gathering their needs, and you're only talking with a central IT guy, and you just arrive one day saying, oh, everything is going to change, it's going to be a failure, the same way it's a failure when a government makes a decision without going through a consultative process before implementing it. So nothing new, I would say, but as usual, and maybe because of the scale of edge, we will need to ensure that our customers are aware of those challenges that lie ahead of us. >>All right, well, Nick, sounds like a lot of good progress has been made. Definitely check out the breakout sessions from the summit to learn more. Thank you so much for joining. >>Thank you for having me. >>All right, more coverage from theCUBE at Red Hat Summit 2020.
I'm Stu Miniman, and as always, thank you for watching theCUBE. All right, Nick. Good stuff.
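The predictive-maintenance pattern described above — sensor readings processed at the edge, with alerts raised when a machine starts to misbehave — can be illustrated with a short sketch. This is a toy stand-in, not Red Hat's demo code: a rolling z-score rule plays the role of whatever trained model an Open Data Hub pipeline would push out to each factory.

```python
# Toy edge-side anomaly detector for vibration readings.
# A real deployment would run a model trained centrally and pushed
# out to each factory; a simple z-score rule stands in for it here.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the trailing window -- candidates for
    preventive maintenance alerts."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady vibration around 1.0, then a bearing starts to fail.
data = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 1.01, 1.0, 5.0]
print(flag_anomalies(data))  # prints [10]
```

Scaling this to 50 factories is then a distribution problem, not a modeling one: the same detector (or model file) is shipped to every site and only the alerts travel back to central operations.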
Ed Warnicke, Cisco | Open Source Summit 2017
(cheerful music) >> Announcer: Live from Los Angeles, it's theCUBE! Covering Open Source Summit North America 2017. Brought to you by The Linux Foundation and Red Hat. >> Welcome back, and we're live here in Los Angeles. This is theCUBE's special coverage of Open Source Summit North America. I'm John Furrier with Stu Miniman. Two days of wall-to-wall coverage. Our next guest, Ed Warnicke, who is a distinguished consulting engineer with Cisco. Welcome to theCUBE. >> Glad to be here! >> Thanks for coming on. Love to get into it. We love infrastructure as code. We love the cloud developers. The young generation loves it. Making things easy to use all sounds great, but there's still work to get done. The networking... So what's going on here at the Open Source? So this is the big tent event where there's a lot of cross-pollination around projects. Obviously the networking side, you guys at Cisco are doing your share. Give us the update. Networking is still a lot more work to be done. It's a very strategic part of the equation. Certainly making it easier up above makes it programmable. >> Yeah, you have to make the networking invisible even to the DevOps layer. There are certain things that you need from the network. They need isolation and reachability. They need service discovery and service routing. But they don't want to have to think about it. They don't want to be burdened with understanding the nitty gritty details. They don't want to know what subnet they're on, they don't want to have to worry about ACL's, they don't want to think about all of that. And the truth is, there's a lot of work that goes into making the network invisible and ubiquitous for people. And in particular, one of the challenges that we see arising as the world moves more cloud-native, as the microservices get smaller, as the shift happens toward serverless, as Kubernetes is coming on with containers, is that the network is really becoming the run time. 
And that run time has the need to scale and perform like it never has before. So the number of microservices you'd like to put on a server keeps going up, and that means you need to be able to actually handle that. The amount of traffic that people want to push through them continues to go up. So your performance has to keep up. And that brings a lot of distinct challenges, particularly when you're trying to achieve those in systems that were designed for a world where you had maybe two NIC's on the box, where you weren't really thinking when the original infrastructure was built about the fact that you were actually going to have to do a hell of a lot of routing inside the server because you now have currently hundreds, but hopefully someday thousands and tens of thousands of microservices running there. >> Ed, you know, I think when we've been talking about the last 15 or 20 years or so, I need to move faster with my deployment. It always seemed that networking was the thing that held everything up. It's like, okay, wait, when I virtualized, everything's great and everything, and I can just spit up a VM and do that. Oh, but I need to wait for the network to be provisioned. What are the things you've been working on, what open source projects? There's a lot of them out there helping us to really help that overall agility of work today. >> Absolutely. So one of the things I'm deeply involved in right now is a project called FD.io, usually pronounced Fido, because it's cute. And it means we can give away puppies at conferences. It's great. What FD.io is doing, is we have this core technology called VPP that gives you incredibly performant, incredibly scalable networking purely in user space. Which means from a developer velocity point of view, we can have new features every three months. 
From an extensibility point of view, you can bring new network features as separate plugins you drop as .so's into a plugin directory instead of having to wait for the kernel to rev on your server. And the revving process is also substantially less invasive. So if you need to take a microservice network as a user space thing and rev it, it's a restart of a process. You're talking microseconds, not 15-minute reboot cycles. You're talking levels of disruption where you don't lose your TCP state, where you don't lose any of those things. And that's really crucial to having the kind of agility that you want in the network. And when I talk about performance and scalability, I'm not kidding. So one of the things we recently clocked out with VPP was being able to route a terabyte per second of traffic with millions of routes in the forwarding tables on commodity servers with no hardware assistance at all. And the workloads are starting to grow in that direction. It's going to take them a while to catch up, but to your point about the network being the long pull, we want to be far ahead of that curve so it's not the long pull anymore. So you can achieve the agility that you need in DevOps and move innovative products forward. >> Ed, one of the things that comes up all the time, I wanted to get your reaction to this because you're an important part of it, is developers say, look, I love DevOps. And even ops guys are saying, we want to promote DevOps, so there's a mind meld there if you will. But then what they don't want is a black box. They want to see debugging, and they want to have ease of manageability. So I don't mind pushing dev, if I'm an ops guy, send the dev down, but they need a path of visibility. They need to have access to debug fast. Get access to some of those things. What do you see as gates if you will, that we got to get through to make that seamless and clean right now? Obviously Kubernetes, lot of stuff going on with orchestration.
And containers are providing a path. But still, the complaint and nervousness is okay, you can touch and program the infrastructure, but if something happens, you're going to be reactive. >> Yeah, that gets exactly to the point. Because the more invisible the network is, the more visibility you need when things go wrong. And for general operational use. And one of the cool things that's happening in FD.io around that, is number one, it's industrial scale. So you have all sorts of counters and telemetry information that you need from an historical point of view in networks to be able to figure out what's going on. But beyond that, there's a whole lot of innovation that's been happening in the network space that has yet to trickle down all the way to the server edge. A really classic example on the visibility front has to do with in-band iOAM. So we now have the technology, and this is present today in VPP, to be able to say, hey, I would like an in-band trace of the path through the network of this flow for this customer who's giving me a complaint, where I can see hop by hop through the network including in the edge where VPP is, what's the latency between hops? What path it actually passed through. And there's even a feature there where you could say, at each hop, please send the packet capture at that hop to a third-party point where I can collect it so I can look at it in something like Wireshark. So you can look in Wireshark and say, okay I see where this went into that node and came out that node this way. Node by node by node. I don't know how much more visibility than that is actually physically possible. And that's one of the kinds of things that the velocity of features that you have in VPP has made very possible. That's the kind of thing that would take a long time to work into the traditional development line for networking. >> What's the Cisco internal vibe right now?
Because we covered the DevNet Create event that Susie Wee put on, which was kind of like a cloud-native cool event. Kind of grassroots, kind of guerrilla. I love the mojo there. But then you've got the DevNet community at Cisco, which is a robust killer developer community on the Cisco side. How are those worlds coming together? I can imagine that the appetite for the Cisco DevNet teams, the DevNet developer community, is looking at cloud-native as an opportunity. Can you share some insight into what's the sentiment, what's the community vibe, what's going on? For folks that just got to run the networks, I mean this is serious stuff. In the past, they've been like, cloud-native, when you're ready we'll get there. But now there seems to be an onboarding of cloud-native. Talk about the dynamic. >> There has to be, because cloud-native won't wait. And there's a lot of things that the network can do to help you as the run time. The iOAM example is one, but there are a ton more. Again, cloud-native won't wait. They will find a way, and so you have to be able to bring those features at the pace at which cloud-native proceeds. You can't do it on six-month product cycles. You can't do it on 12-month product cycles. You have to be able to respond point by point as things move forward. A good example of this is a lot of the stuff that's happening with service meshes and Istio. Which is coming really fast. Not quite here, but coming really fast. And for that, the real question is, what can the network do for DevOps? Because there's a synergistic relationship between DevOps and NetOps. >> So you were saying... Just to try to get at the point. So yes, are you seeing that the DevNet community is saying hey we love this stuff? Because they're smart, they know how to adapt. Moving from networks to DevOps. To me it seems like they're connecting the dots. You share some-- Are they, yes no maybe?
>> They're absolutely connecting the dots, but there's a whole pipeline with all of this. And DevNet is at the short pointy end where it touches the DevOps people. But to get there, there's a lot of things that have to do with identifying what are the real needs, getting the code written to actually do it, figuring out the proper innovations, engaging with open source communities like Kubernetes so that they're utilized. And by the time you get to DevNet, now we're at the point where you can explain them to DevOps, where they can use them really cleanly. One of the other things is, you want it to come through transparently. Because people want to be able to pick their Kubernetes Helm charts off the web, take the collection of containers for the parts of their application they don't want to have to think about, at least right now, and have it work. So you have to make sure you're supporting all the stuff that's there, and you have to work to be able to take advantage of those new features in the existing API's. Or better yet, just have the results of those API's get better without having to think about new features. >> So they're in great shape. It's not a collision, it's not friction. >> No, no no. >> It's pretty much synergistic. Network guys get the DevOps equation. >> No, we get the DevOps equation, we get the need. There is a learning process for both sides. We deeply need each other. Applications without networking are completely uninteresting. And this is even more true in microservices where it's becoming the run time for the network. On the same side, networks without applications are completely uninteresting because there's no one to talk. And what's fascinating to me is how many of the same problems get described in different language and so we'll talk past each other. So DevOps people will talk about service discovery and service routing. And what they're really saying is, I want a thing, I don't want to have to think about how to get to it. 
On the network side, for 15 years now, we've been talking about identifier/locator separation. Basically the having an IP address for the thing you want, and having the ability to transparently map that to the location where that thing is without having to... It's the classic renumber your network problem. They're at a very fundamental level the same problem. But it's a different language. >> The game is still the same. There's some language nuances that I think I see some synergies. I see people getting it. It's like learning two languages. Okay, the worlds come together. It's not a collision. But the interesting thing is networking has always been enabling opportunity. This is a fundamental nuance. If you can get this right, it's invisible, as you said. That's the end game. >> Absolutely. That's really what you're looking for. You want invisibility in the normal mode, and you want total transparency when something has to be debugged. The classic example with networks is, when there's a network problem it's almost never the network. It's almost always some little niggle of configuration that went wrong along the way. And so you need that transparency to be able to figure out okay, what's the point where things broke? Or what's the point where things are running suboptimally? Or am I getting the level of service that I need? Am I getting the latency I need, and so forth. And there's been a tendency in the past to shorthand many of those things with networking concepts that are completely meaningless to the underlying problem. People will look at subnets, and say for the same subnet, we should have low latency. Bullshit. I mean basically, if you're on the same subnet, the guy could be on the other end of the WAN in the modern era with L2 overlays. So if you want latency, you should be able to ask for a particular latency guarantee. >> It felt to me that it took the networking community a while to fix things when it came to virtualization. 
(Ed laughs) but the punch line is, when it comes to containers, and what's happening at Kubernetes, it feels like the networking community is rallying a lot faster and getting ahead of it. So what's different this time? You've got kind of that historical view on it. Are we doing better as an industry now, and why is it? >> So a couple of things. The Kubernetes guys have done a really nice job of laying out their networking API's. They didn't get bogged down in the internal guts of the network that no DevOps guy ever wants to have to see. They got really to the heart of the matter. So if you look at the guarantees that you have in Kubernetes, what is it? Every pod can talk to every other pod at L3. So L2 isn't even in the picture. Which is beautiful, because in the cloud, you need to worry about subnets like you need a hole in the head. Then if you want isolation, you specify a network policy. And you don't talk about IP addresses when you do that. You talk about selectors on labels for pods, which is a beautiful way to go about it. Because you're talking about things you actually care about. And then with services, you're really talking about how do I discover the service I want so I never have to figure out a pod IP? The system does it for me. And there are gaps in terms of there being things that people are going to need to be able to do that are not completely specified on those API's yet. But the things they've covered have been covered so well, and they're being defended so thoroughly, that it's actually making it easier because we can't come in and introduce concepts that harm DevOps. We're forced to work in a paradigm that serves it. >> Okay, great. So this'll be easy, so we'll be ready to tackle serverless. What's that going to mean for the network? >> Serverless gets to be even more interesting because the level of agility that you want in your network goes up. Because you can imagine something in serverless where you don't even want to start a pod until someone has made a request. So there's an L7 piece that has to be dealt with but then you have to worry about the efficiency of how do you actually move that TCP session to the actual instance that's come up for serverless for that thing, and how do you move it to the next thing? Because you're working at an L7, where from the client's point of view, they think it's all the same server, but it's actually been balkanized across all these microservices. And so you have to find an efficient way of making that transparent that minimizes the degree to which you have to hairpin through things all over the cluster because that just introduces more latency, less throughput, more load on the cluster. You've got to be able to avoid that. And so, by being able to bring sophisticated features quickly to the data plane with something like FD.io and VPP, you can actually start peeling those problems off progressively as serverless matures. Because the truth of the matter is, no one really knows what those things are going to look like. We all like to believe we do, but you're going to find new problems as you go. It's the unknown unknowns that require the velocity. >> So it sounds like you're excited about serverless, though. >> Ed: Usually, yes, definitely. >> So I love serverless too, and I always talk about it. So what is in your opinion the confusion? There are some people who are like, oh it's bullshit. I don't think it is personally. I think it's nirvana. I think it's what people want, what most developers want. There's a server behind it. It's not serverless per se. It's just from a developer standpoint, you don't have to provision hardware. >> Or containers, or VM's, or any of that. >> I personally think it's a good thing. Is it just a better naming convention? Give the people, what's the nuance? Why are people confused?
>> I think it's much more fundamental than just the naming convention. Because historically, if you look at the virtualization of workloads, every movement we've had to date has been about some workload run time technology. VM's were about virtual machines. Containers are about container run time technology. When you get to microservices and serverless, we've made the leap from talking about the underlying technology that most developers don't care about to talking about the philosophy that they do. >> Their run time is their app. Their run time assembly is their code sandwich, not to say the network. >> Just as in serverless, I don't think anyone doubts that the first run of serverless is going to be built on containers. But the philosophy is completely divorced for them. So I'll give you an example. One of the things that we have in VPP is we have an ultra high performance, ultra high scalability userspace TCP stack. We're talking the kind of thing that can trivially handle ten million simultaneous connections with 200,000 new connections coming in every second. And right now, you can scope that to an isolation scope of a container. But there's no reason, with the technology we have, you can't scope it all the way down to a process. So you control the network access at the level of a process. So there's a lot of headroom to go even smaller than containers, even lighter weight than containers. But the serverless philosophy changes not a whit as you have that improvement come in. >> That's beautiful. Ed, thanks so much for coming on theCUBE. We really appreciate your perspective. I'd like you to get one final word in to end the segment. Describe what's happening here because the OS Summit, or the Open Source Summit, is the first of its kind, a big tent event. What's your take on it? What's the purpose of the event? What's your experience? Share with the folks who aren't here what this event is all about.
>> It's really exciting, because as much as we love The Linux Foundation, and as much as we've all enjoyed things like LinuxCon in the past, the truth is, for years it's been bleeding beyond just Linux. I don't see the OS Summit so much as a shift in focus, as a recognition of what's developed. Last year we had the Open Source Summit here. We just called it LinuxCon. The year before we had the Open Source Summit here. We just called it LinuxCon. And so what's really happening is, we're recognizing what is. There's actually no new creation happening here. It's the recognition of what's evolved. >> And that is open source as a tier one reality that goes way beyond Linux, which is by the way super valuable at the kernel. >> Ed: Oh, we all love Linux. >> All Linux apps... The only apps are Linux apps. But it's a bigger thing. The growth and scale that's coming is unprecedented. I think a lot of people still are pitching themselves, Stu and I were commenting, that what's coming is going to change the face of software development for generations to come. There's an exponential scale of software libraries coming on board. Up to 400 million was forecast by 2026? >> That sounds conservative to me. (laughs) >> You think so? Well, I mean, just to get the scale. So there's going to be some leadership opportunities for the community, in my opinion. >> Absolutely. And this is where the Open Source Summit actually... I mean, words matter because they shape the way we think about things. So where I think the shift to the Open Source Summit has huge value is that it starts to shift the thinking into this broader space. It's not just a recognition of what's happened. It's a new load of software here for the community. >> This is not a marking then, it's a recognition of what's actually happening. I love that quote. Open Source Summit, brilliant move by The Linux Foundation. Create a big tent event for cross-pollination, sharing of ideas. This is the ethos of open source. 
Ed, thanks so much for coming on theCUBE. This is theCUBE with live coverage from the Open Source Summit in North America, formerly LinuxCon and all the other great events here in Los Angeles. I'm John Furrier with Stu Miniman. More live coverage after this short break. (electronic music)
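The in-band trace Warnicke describes — each hop contributing a timestamp so you can see the path a flow took and the latency between hops — reduces to simple arithmetic once the telemetry is collected. A rough sketch; the (node, timestamp) record format here is invented for illustration and is not FD.io's actual iOAM encoding:

```python
# Compute per-hop latency from an in-band trace, in the spirit of the
# iOAM feature described above. Each record is (node_name, timestamp_us)
# in path order; this is a hypothetical collector output format, not
# the real iOAM wire encoding.
def per_hop_latency(trace):
    """Return [(from_node, to_node, delay_us), ...] for adjacent hops."""
    return [(a, b, t2 - t1) for (a, t1), (b, t2) in zip(trace, trace[1:])]

trace = [("nic0", 100), ("leaf1", 140), ("spine2", 190), ("leaf3", 230)]
for src, dst, delay in per_hop_latency(trace):
    print(f"{src} -> {dst}: {delay} us")
# nic0 -> leaf1: 40 us
# leaf1 -> spine2: 50 us
# spine2 -> leaf3: 40 us
```

The debugging win is exactly the one described in the interview: an outlier in the per-hop deltas immediately points at the hop to investigate, without guessing from end-to-end latency alone.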
Niel Viljoen, Netronome & Nick McKeown, Barefoot Networks - #MWC17 - #theCUBE
(lively techno music) >> Hello, everyone, I'm John Furrier with theCUBE. We are here in Palo Alto to showcase a brand new relationship and technology partnership and technology showcase. We're here with Niel Viljoen, who's the CEO of Netronome. Did I get that right? (Niel mumbles) Almost; I think I will let you say it. And Nick McKeown, who's Chief Scientist and Chairman and the co-founder of Barefoot Networks. Guys, welcome to the conversation. Obviously, a lot going on in the industry. We're seeing massive change in the industry. Certainly, digital transformation, the buzzword the analysts all use, but, really, what that means is that the entire end-to-end digital space, with networks all the way to the applications, is completely transforming. Network transformation is not just moving packets around, it's wireless, it's content, it's everything in between that makes it all work. So let's talk about that, and let's talk about your companies. Niel, talk about your company, what you guys do, Netronome, and Nick, same for you, for Barefoot. Start with you guys. >> So as Netronome, our core focus lies around SmartNICs. What we mean by that, these are elements that go into the network servers, which in this sort of cloud and NFV world get used for a lot of network services, and that's our area of focus. >> Barefoot is trying to make switches that were previously fixed function, turning them into something that those who own and operate networks can program for themselves, to customize them or add new features or protocols that they need to support.
>> And Barefoot, you're walking in the park, you don't want to step in any glass and get a cut, and I like that, love the name of the company, but it brings out the real issue of this I/O world. If it were just NICs, it throws back to the old-school mindset of network cards in servers, but if you take that out on the Internet now, that is the I/O channel engine, real time. It's certainly a big part of the edge device, whether that's a human or a device, IoT to mobile, and then moving it across the network, and by the way, there are multiple networks. So is this kind of where you guys are showcasing your capabilities? >> So, fundamentally, you need both sides of the line, if I could put it that way. We're on the server side, and specifically, we're also giving visibility from virtual machine to virtual machine, also called VNF to VNF, in a service chaining mechanism, which is what a lot of the NFV customers are deploying today. >> Really, as the entire infrastructure upon which these services are delivered moves into software, more of it is created by those who own and operate these services for themselves. They either create it, commission it, buy it, or download it, and then modify it to best meet their needs. That's true whether it's in the network interface portion or in the switch. They've seen it happen in the control plane, and now it's moving down so that they can define all the way down to how packets are processed in the NIC and in the switches, and when they do that, they can then add in the ability to see what's going on in ways that they've never been able to before. So we really think of ourselves as providing that programmability and that flexibility, all the way down to the way that the packets are processed. >> And what's the impact, Nick? Talk about the impact, then take us through an example. 
You guys are showcasing your capabilities to the world, so what's the impact? Give us an example of what the benefit would be. I mean, what goes on with this instrumentation? Certainly, everyone wants to instrument everything. >> Niel: Yes. >> Nick: Yeah. >> But what's the practical benefit? I mean, who wins from this, and what's the real impact? >> Well, you know, in days gone by, if you're a service provider providing services to your customers, then you would typically do this out of vertically integrated pieces of equipment that you get from equipment vendors. It's closed, it's proprietary; they have their own sort of NetFlow, sFlow, whatever the mechanism is that they have for measuring what's going on, and you had to learn to live with the constraints of what they had. As this all gets kind of disaggregated and broken apart, and the owner of the infrastructure gets to define the behavior in software, they can now chain together the modules and the pieces that they need in order to deliver the service. That's great, but now they've lost that proprietary measurement, so they need to introduce measurement of their own so that they can get greater visibility. This actually has created a tremendous opportunity, and this is what we're demonstrating: if you can come up with a uniform way of doing this, so that you can see, for example, the path that every packet takes, the delay that it encounters along the way, the rules that it encounters that determine the path that it takes, and if it encounters congestion, who else contributed to that congestion, so we know who to go blame, then by giving them that flexibility, they can go and debug systems much more quickly, and change them and modify them. >> It's interesting, it's almost like the aspirin, right? The headache now is, I have good proprietary technology for point measurement and solutions, but yet I need to manage multiple components. 
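The service chaining Nick and Niel describe — steering a packet through an ordered list of VNFs, with visibility at every VNF-to-VNF hop — can be sketched as a toy model. This is an illustration only, not any vendor's API; the VNF names, packet fields, and addresses are all invented for the example:

```python
# Toy model of NFV service chaining: each VNF is a stage that
# transforms the packet, and the chain records a trace entry at
# every VNF-to-VNF hop so the operator can see where a packet
# went and which stage, if any, dropped it.

def firewall(pkt):
    pkt["allowed"] = pkt["dst_port"] != 23  # drop telnet, allow the rest
    return pkt

def nat(pkt):
    pkt["src_ip"] = "203.0.113.1"  # rewrite to a public source address
    return pkt

def run_chain(pkt, chain):
    trace = []
    for vnf in chain:
        pkt = vnf(pkt)
        trace.append(vnf.__name__)       # the per-hop visibility point
        if pkt.get("allowed") is False:  # dropped mid-chain
            break
    return pkt, trace

pkt, trace = run_chain({"src_ip": "10.0.0.5", "dst_port": 80}, [firewall, nat])
print(trace)          # ['firewall', 'nat']
print(pkt["src_ip"])  # 203.0.113.1
```

The `trace` list is the point of the sketch: with per-hop visibility, a dropped packet tells you exactly which stage it died in, instead of leaving three teams to argue about it.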
>> I think there's an add-on to what Nick said, which is the whole key point here: the programmability. Because there's data, and then there's information. Gathering lots and lots of telemetry data is easy. (John chuckles) The problem is you need to have it at all points, which is Nick's key point, but the programmability allows the DevOps person, in other words, the operational people within the cloud or carrier infrastructure, to actually write code that identifies and isolates the information, rather than the data, that they need. >> So who is the customer base for you guys, the carriers, the service providers? Who's your target audience? >> Yep, I think it's service providers who are applying the NFV technologies, in other words, the cloud-like technologies. I always say the real big story here is the cloud technologies rather than just the cloud. >> Yeah, yeah. >> And how that's-- >> And same for you guys, you guys have the same joint target customer. >> Yeah, I don't think there's any disagreement. >> Okay. (laughs) Well, I want to drill into the whole aspirin analogy, 'cause it's one of the things that you brought up with the programmability. NFV has been that, you know, saving grace, it's been the Holy Grail for how many years now, and you're starting to see the tide shifting now, towards where NFV is not a silver bullet, so to speak, but it is actually accelerating some of the change. And I always like to ask people, "Hey, are you an aspirin or are you a vitamin?" One guest told me, "I'm a steroid. We make things grow faster." I'm like, "Okay," but in a way, the aspirin solves a problem, like immediate headaches, so it sounds like a lot of the things that you mentioned. 
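Niel's data-versus-information point can be made concrete: raw telemetry is cheap to produce, and the programmable part is an operator-written predicate, run at the collection point, that keeps only the records that answer a question. A minimal sketch with invented record names and values:

```python
# Raw telemetry records are the "data"; the operator-supplied
# predicate that isolates the interesting ones is what turns
# data into "information".

records = [
    {"switch": "tor-1", "queue_delay_us": 4},
    {"switch": "tor-1", "queue_delay_us": 950},
    {"switch": "tor-2", "queue_delay_us": 7},
]

def isolate(records, predicate):
    """Return only the records the operator's rule selects."""
    return [r for r in records if predicate(r)]

# Operator-written rule: only hops whose queue delay blows the budget.
slow_hops = isolate(records, lambda r: r["queue_delay_us"] > 500)
print(slow_hops)  # [{'switch': 'tor-1', 'queue_delay_us': 950}]
```

The design choice mirrors the conversation: the filter is code the operator writes, not a fixed set of counters a vendor pre-guessed.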
That's an immediate benefit right there on the instrumentation, in an open way, multi-component, multi-vendor, the benefits of proprietary but open. But the point about programmability gives a lot of headroom around that vitamin, that steroid piece, where it's going to allow for automation, which brings up an interesting thing: that's customizable automation, meaning you can apply software policy to it. Can you tease that out? Is that an area that you guys are talking about? >> I think the first thing that we should mention is probably the new language called P4. Nick will be too modest to state it, but Nick has been a key player, along with his team and many other people, in the definition and the creation of this language, which allows the programmability of all these elements. >> Yeah, just drill down, I mean, toot your own horn here, let's get into it. What is it, what's the benefit, and what's the real value, what's the upshot of P4? >> Yeah, the way that hardware that processes packets, whether it's in network interface cards or in switching, has been defined in the past has been by chip designers. At the time that they define the behavior, they're writing Verilog or VHDL, and as we know, people that design chips don't operate big networks, so they don't really know what capabilities to put in-- >> They're good at logic in a vacuum, but not necessarily in the real world, right? Is that what you mean? (laughs). >> So what we-- >> Not to insult chip designers, they're great, right? 
>> So what we've all wanted to do for some time is to come up with a uniform language, a domain-specific language, that allows you to define how packets will be processed in interfaces, in switches, in hypervisor switches inside the virtual machine environments, in a uniform way, so that someone who's proficient in that language can describe a behavior that can operate in different parts of the chained services, so that they get the same behavior, a uniform behavior, and can see the network-wide, the service-wide behavior in a uniform way. The P4 language is merely a way to describe that behavior, and then Netronome and Barefoot each have our own compilers for compiling that down to the specific processing element that operates in the interfaces and in the switches. >> So you're bridging the chip layer with some sort of abstraction layer to give people the ability to do policy programming. All the heavy-lifting stuff in the old network days was configuration management, I mean, that was hard stuff, and now you've got dynamic networks. It even gets harder. Is this kind of where the problem goes away? And this is where automation comes in. >> Exactly, and the key point is programmability versus configurability. >> John: Yeah. >> In a configurable environment, you're always trying to pre-guess what your customer's going to try to look at. >> (chuckles) Guessing's not good in the networking area. That's not good for five nines. 
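The abstraction Nick describes — packet processing defined as match-action tables rather than fixed-function logic — is the core of what a P4 program expresses. Here is a toy model of that idea in Python; this is an illustration of the concept only, not the P4 language or toolchain, and all field names and table entries are invented:

```python
# Toy model of the match-action abstraction: a table maps a header
# field to an action, and the pipeline applies its tables in order,
# the way a P4-programmed device applies its stages.

class Table:
    def __init__(self, field, entries, default):
        self.field = field      # header field this table matches on
        self.entries = entries  # exact-match value -> action callable
        self.default = default  # action taken on a table miss

    def apply(self, pkt):
        self.entries.get(pkt[self.field], self.default)(pkt)

def set_egress(port):
    def action(pkt):
        pkt["egress_port"] = port
    return action

def drop(pkt):
    pkt["egress_port"] = None

# A two-stage pipeline: L2 forwarding, then a tiny ACL.
l2  = Table("dst_mac", {"aa:bb": set_egress(1), "cc:dd": set_egress(2)}, drop)
acl = Table("dst_port", {22: drop}, lambda pkt: None)  # block SSH, else no-op

pkt = {"dst_mac": "aa:bb", "dst_port": 80}
for table in (l2, acl):
    table.apply(pkt)
print(pkt["egress_port"])  # 1
```

The point of the exchange above is that in real P4 the *operator* writes these tables and actions, and the compiler targets whatever processing element sits in the NIC or the switch.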
>> In the new world that we're in now, the customer actually wants to define exactly what the information is they want to extract-- >> John: I wanted to get-- >> Which is your whole question around the rules and-- >> So let me see if I can connect the dots here. In the showcase, you guys are going to show this programmability, this kind of efficiency at the layer of bringing instrumentation, then using that information, and/or data, depending on how it's sliced and diced via the policy and programmability, but this becomes cloud-like, right? So when you start thinking about cloud, where service providers are under a lot of pressure to go cloud because Over-The-Top right now is booming, you're seeing a huge content and application market that's super ripe for these kinds of services. They need that ability to have the infrastructure be like software, so infrastructure as code, the DevOps term that we talk about in our DevOps world, but that has been more data-center kind of language, with developers. Is it going the same trajectory in the service provider world? Because you have networks, I mean, they're bigger, higher scale. What are some of those DevOps dynamics in your world? Can you talk about that and share some color on that? >> I mean, the way in which large service providers are starting to deliver those services is out of something that looks very much like the cloud platform. In fact, it could be exactly the same technology. The same servers, the same switches, the same operating systems, a lot of the same techniques. The problem they're trying to solve is slightly different. They're chaining together the means to process a sequence of operations. 
A little bit like the way the cloud operators are moving towards microservices that get chained together, so there are a lot of similarities here, and the problems they face are very similar, but think about the hell that this potentially creates for them. It means that we're giving them so much rope to hang themselves, because everything now has got to be put together from different sources, written and authored by different people with different intent, or from different places across the Internet, and so being able to see and observe exactly how this is working is even more critical than-- >> So I love that rope-to-hang-yourself analogy, because a lot of people will end up breaking stuff. As Mark Zuckerberg's famous quote goes, "Move fast, break stuff," and then, by the way, when they hit 100 million users, the slogan moved to "Move fast, be reliable," so he got on the five-nines bandwagon pretty quick. But it's more than just the instrumentation. The key that you're talking about here is that they have to run those networks in really high-reliability environments. >> Nick: Correct. >> And so that begs the challenge of, okay, it's not just as easy as throwing a Docker container at something. I mean, that's what people are doing now, like, hey, I'm going to just use microservices, that's the answer. They've still got stuff under the hood, underneath the microservices. You have orchestration challenges, and this kind of looks and feels like the old configuration management problems, but moved up the stack, so is that a concern in your market as well? >> So I think that's a very, very good point that you make, because the carriers, as you say, tend to be more dependent, almost, on absolute reliability, and very importantly, performance. In other words, they need to know that this is going to be 100 gigs, because that's the SLA they've signed with their customer. 
(John chuckles) It's not going to be almost 100 gigs, 'cause then they're going to end up paying a lot of penalties. >> Yeah, they can't afford breakage. They're OpsDev, not DevOps. Which comes first in their world? >> Yes, so the critical point here is that this is what the demo that we're doing shows: the ability to capture all this information at line rate, at very high speeds, in the switches. (mumbles) >> So let's talk about this demo you're doing, this showcase that you guys are providing and demonstrating to the marketplace. What's the pitch, I mean, what is it, what's the essence of the insight of this demo, what's it proving? >> So I think it's good to think about a scenario in which you would need this, and then this leads into what the demo would be. Very common in an environment like the VNF kind of environment: something goes wrong, and they're trying to figure out very quickly, who's to blame, which part of the infrastructure was the problem? Could it be congestion, could it be a misconfiguration? (John laughs) >> Niel: Whose flow-- >> Everyone's pointing a finger at the other guy. >> Nick: The typical way-- >> Two days later, what happened, really? >> The typical way that they do this is they'll bring the people that are responsible for the compute, the networking, and the storage quickly into one room, and say, "Go figure it out." The people that are doing the compute, they'll be modifying and changing and customizing, running experiments, isolating the problem. So are the people that are doing storage. They can program their environment. In the past, the networking people had ping and traceroute. Those are the same tools that they had 20 years ago. 
(John chuckles) What we're doing is changing that by introducing the means where they can program and configure, run different experiments, run different probes, so that they can look and see the things that they need to see. And in the demo in particular, you'll be able to see the packets coming in through a switch, through a NIC, through a couple of VMs, back out through a switch, and then you can look at that packet afterwards, and you can ask questions of the packet itself, something you've never been able to-- >> It's the ultimate debugger. Basically, it's the ultimate debugger. >> Nick: That's right. Go to the packet, say-- >> Niel: Programmable debugger. >> "Which path did you take? How long did you wait at each NIC, at each VM, at each switch port as you went through? What are the rules that you followed that led you to be here? And if you encountered some congestion, whose fault was it? Who did you share that queue with?" So we can go back and apportion the blame-- >> So you get multiple dimensions of path information coming in, not just the standard stovepiped tools-- >> Nick: That's right. >> And then everyone compares logs, and there are all these holes in it; people don't know what the hell happened. >> And through the programmability, you can isolate the piece of the information-- >> So agile experimentation is, I think, what you're getting at? You can really get down and dirty in a duplicated environment and run these really fast experiments, versus kind of in theory or in-- >> Exactly, which is, as Nick said, exactly what people on the server side and on the storage side have been able to do in the past. >> Okay, so for people watching that are kind of getting into this, and people who aren't, just walk me through the impact and the consequences of not taking this approach, vis-a-vis today's available techniques. 
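The "ask questions of the packet itself" idea is In-band Network Telemetry: each device the packet traverses pushes its own metadata onto a stack carried inside the packet, so no extra measurement packets are generated. The sketch below is a toy model; the device names and field names are invented (real INT metadata layouts are defined by the P4.org INT specification):

```python
# Sketch of In-band Network Telemetry: every hop appends its own
# record (device id, queue delay, matched rule) to a stack the
# packet carries, and the sink interrogates the packet afterwards.

def traverse(pkt, path):
    for hop in path:
        pkt["int_stack"].append({
            "device": hop["device"],
            "queue_delay_us": hop["queue_delay_us"],
            "rule": hop["rule"],
        })
    return pkt

path = [
    {"device": "nic-0",   "queue_delay_us": 3,   "rule": "vm-ingress"},
    {"device": "tor-1",   "queue_delay_us": 120, "rule": "ecmp-2"},
    {"device": "spine-4", "queue_delay_us": 8,   "rule": "default"},
]
pkt = traverse({"int_stack": []}, path)

# "Which path did you take? How long did you wait at each hop?"
took_path  = [h["device"] for h in pkt["int_stack"]]
total_wait = sum(h["queue_delay_us"] for h in pkt["int_stack"])
worst_hop  = max(pkt["int_stack"], key=lambda h: h["queue_delay_us"])
print(took_path)            # ['nic-0', 'tor-1', 'spine-4']
print(total_wait)           # 131
print(worst_hop["device"])  # tor-1
```

The answers come out of the packet itself rather than out of per-device logs that have to be correlated after the fact, which is the debugging shift the guests are describing.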
>> If you wanted to try and figure out who it was that you were sharing a queue with inside an interface or inside a switch, you have no way to do that today, right? No means to do that, and so if you wanted to be able to say it's that aggressive flow over there, that malfunctioning service over there, you've got no means to do it. As a consequence, the networking people always get the blame, because they can't show that it wasn't them. But if you can say, I can see, in this queue, there were four flows going through, or 4,000 flows, and one of them was really badly behaved, and it was that one over there, and I can tell you exactly why its packets were ending up here, then you can immediately go in and shut that one down. Today, they have no way to do that, so they go and randomly shut-- >> Can I get this for my family? I need this for my household. I mean, I'm going to use this for my kids. I mean, I know exactly the bad behavior, I need to prove it. No, but this is the point: this is fast. I mean, you're talking speed, too, as another aspect-- >> Niel: It's all about the-- >> What's the speed difference between the old, current approach and this joint approach you guys are taking? Give me an estimate, just ballpark numbers-- >> Well, there are two aspects to the speed. One is the speed at which it's operating: in the demo, it's running at 40 gigabits per second, but this can easily run faster; for example, in the Barefoot switch, it'll run at 6 terabits per second. The interesting thing here is that in this entire environment, this measurement capability does not generate a single extra packet. All of it is self-contained in the packets that are already flowing. >> So there are no latency issues on running this in production. >> And if you then wanted to change the behavior, to go and modify what was happening in the NIC or in the switch, you can do that in minutes. 
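The "who did you share that queue with" attribution Nick describes becomes simple accounting once per-queue telemetry exists: count each flow's share of the queue during the congestion event and name the heaviest contributor. A minimal sketch with invented flow names and packet counts:

```python
from collections import Counter

# Packets observed in one switch queue during a congestion event,
# labeled by flow (a 5-tuple collapsed to a name for the sketch).
queue_log = ["flow-a"] * 5 + ["flow-b"] * 3 + ["flow-evil"] * 392

def blame(log):
    """Return the flow occupying the largest share of the queue."""
    counts = Counter(log)
    flow, pkts = counts.most_common(1)[0]
    return flow, pkts / len(log)

flow, share = blame(queue_log)
print(flow, round(share, 2))  # flow-evil 0.98
```

With that answer in hand, the operator can shut down the one misbehaving flow instead of, as the transcript puts it, going and randomly shutting things off.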
So that you can say-- >> Now, the time it takes for a user to do this, let's go through that time series. What does that look like? So the current method is: get everyone in a room, do these things. Are we talking, you know-- >> I think that today, it's just simply not possible. >> Not possible. >> So it's, yes, a new capability. >> I think that is the key issue. >> So this is a new capability. >> This is a new capability, and exactly as Nick said, it's getting the network to the same level of ability that you always had inside the-- >> So I've got to ask you guys, as founders of your companies, because this is one of those great success stories for entrepreneurs: it's not just a better mousetrap, it's revolutionary in the sense that no one's ever had the capability before. So when you go to events like Mobile World Congress, you're out in the field, are you shaking people, like, "You need me! I need to cut the line and tell you what's going on"? I mean, you must have a sense of urgency. Is it resonating with the folks you're talking to? I mean, what are some of the conversations you're having with folks? They must be pretty excited. Can you share any anecdotal stories? >> Well, yup, I mean, we're finding, across the industry, not only the service providers but the data center companies, Wall Street, the OEM box vendors, everybody is saying, and has been saying for a long time, "I need the ability to probe into the behavior of individual packets, and I need whoever is owning and operating the network to be able to customize and change that." They've never been able to do that. The name of the technique that we use is called In-band Network Telemetry, or INT, and everybody is asking for it now. Actually, whether it's with the two of us, or whether they're asking for it more generally, this is, this is-- >> Game changer. >> You'll see this everywhere. >> John: It's a game changer, right? >> That's right. >> Great, all right, awesome. 
Well, the final question is: what are the business benefits for them? Because I can imagine, once you get this nailed down, the ability to properly test new apps, because obviously we're in a Wild West environment, a tsunami of apps coming; there are always going to be some tripwires in new apps, certainly with microservices and APIs. >> I think the general issue that we're addressing here is absolutely crucial to the successful rollout of NFV infrastructures. In other words, the ability to rapidly change, monitor, and adapt is critical. It goes wider than just this particular demo, but I think-- >> It's all apps on the service provider side. >> The ability to handle all the VNFs-- >> Well, in the old days, it was simply network spikes, tons of traffic. I mean, now you have apps that could throw off anomalies anywhere, right? You'd have no idea what the downstream triggers could be. >> And that's the whole notion of the programmable network, which is critical. >> Well, guys, where can people get some more information on this awesome opportunity? Your sites, want to share quick web addresses and places people can get whitepapers or information? >> For the general P4 movement, there's P4.org. P, the number four, dot org. Nice and easy. They'll find lots of information about the programmability that's possible by programming the forwarding plane, which is what both of us are doing. For In-band Network Telemetry, you'll find descriptions there, P4 programs, and whitepapers describing it, and of course, the two company websites, Netronome and Barefoot. >> Right. Nick and Niel, thanks for spending some time sharing the insights, and congratulations. We'll keep an eye out for it, and we'll be talking to you soon. >> Thank you. >> Thank you very much. >> This is theCUBE here in Palo Alto. I'm John Furrier, thanks for watching. (lively techno music)
SUMMARY :
John Furrier hosts Niel Viljoen, CEO of Netronome, and Nick McKeown, co-founder of Barefoot Networks, in Palo Alto to showcase the companies' technology partnership. Netronome builds SmartNICs for servers in cloud and NFV environments; Barefoot makes previously fixed-function switches programmable. Using the P4 language, operators can define packet processing uniformly across NICs, switches, and hypervisor switches, and with In-band Network Telemetry (INT) they can ask questions of the packet itself: which path it took, how long it waited at each hop, which rules it matched, and which flows shared a congested queue with it. The demo runs at 40 gigabits per second, generates no extra measurement packets, and gives networking teams the kind of experiment-and-debug capability that compute and storage teams have long had, which the guests argue is crucial to the successful rollout of NFV infrastructures.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nick McKeown | PERSON | 0.99+ |
Niel Viljoen | PERSON | 0.99+ |
Niel | PERSON | 0.99+ |
Nick | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
100 gigs | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Barefoot Networks | ORGANIZATION | 0.99+ |
Netronome | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Mark Zuckerberg | PERSON | 0.99+ |
Barefoot | ORGANIZATION | 0.99+ |
two aspects | QUANTITY | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
both | QUANTITY | 0.99+ |
#MWC17 | EVENT | 0.99+ |
two company | QUANTITY | 0.98+ |
each VM | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
100 million users | QUANTITY | 0.98+ |
each switch | QUANTITY | 0.98+ |
Two days later | DATE | 0.98+ |
20 years ago | DATE | 0.98+ |
four | QUANTITY | 0.97+ |
one room | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.96+ |
both sides | QUANTITY | 0.96+ |
each | QUANTITY | 0.96+ |
each NIC | QUANTITY | 0.96+ |
One guest | QUANTITY | 0.95+ |
.org. | OTHER | 0.95+ |
first | QUANTITY | 0.94+ |
6 terabits per second | QUANTITY | 0.94+ |
single extra packet | QUANTITY | 0.91+ |
4,000 flows | QUANTITY | 0.88+ |
P4 | TITLE | 0.88+ |
40 gigabits per seconds | QUANTITY | 0.85+ |
five nines bandwagon | QUANTITY | 0.84+ |
five nines | QUANTITY | 0.84+ |
theCUBE | ORGANIZATION | 0.76+ |
almost 100 gigs | QUANTITY | 0.76+ |
DevOps | TITLE | 0.75+ |
#theCUBE | ORGANIZATION | 0.69+ |
Verilog | TITLE | 0.67+ |
NetFlow | ORGANIZATION | 0.66+ |
OpsDev | ORGANIZATION | 0.64+ |
VNFs | TITLE | 0.62+ |
P4 | OTHER | 0.61+ |
agile | TITLE | 0.59+ |
P4 | ORGANIZATION | 0.58+ |
Wall | ORGANIZATION | 0.56+ |
P4.org | TITLE | 0.5+ |