Ian Buck, NVIDIA | AWS re:Invent 2021
>> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of accelerated computing at NVIDIA. I'm John Furrier, your host of theCUBE, and thanks for coming on. So NVIDIA, obviously a great brand, congratulations on all your continued success. Everyone who does anything in graphics knows the GPUs are hot, and you've got a great brand and great success in the company, but AI and machine learning is a trend that's significantly being powered by GPUs and other systems, so it's a key part of everything. What are the trends you're seeing in ML and AI that are accelerating computing to the cloud? >> Yeah, I mean, AI is driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up in things like credit card fraud prevention and product and content recommendations. Really, the new engine behind search engines is AI. People are applying AI to things like meeting transcriptions and virtual calls like this, using AI to actually capture what was said, and that gets applied to person-to-person interactions. We also see it in intelligent systems, assistants for contact center automation or chatbots, medical imaging, and intelligent stores and warehouses; it's everywhere. It's really amazing what AI has demonstrated it can do, and new use cases are showing up all the time. >> Yeah, I'd love to get your thoughts on how the world has evolved just in the past few years, along with cloud, and certainly the pandemic's proven it. You had this whole kind of full-stack mindset initially, and now you're seeing more of a horizontal scale, yet enabling this vertical specialization in applications. I mean, you mentioned some of those apps, the new enablers, this kind of horizontal play with enablement for specialization with data. This is a huge shift that's going on.
It's been happening. What's your reaction to that? >> Yeah, it's innovation on two fronts. There's a horizontal front, which is basically the different kinds of neural networks, or AIs, as well as machine learning techniques that are being invented by researchers and the community at large, including Amazon. You know, it started with convolutional neural networks, which are great for image processing, but it's expanded more recently into recurrent neural networks and transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the graph itself is trained as a neural network. You have this underpinning of great AI technologies being invented around the world, and NVIDIA's role is to productize that and provide a platform for people to do that innovation, and then take the next step and innovate vertically: take it and apply it to a particular field, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them that highlights the parts of the scan that may be troublesome or worrying, or require more investigation; or using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI constantly trained, reinforced, learning how to do certain activities and techniques, so that the first time it's ever downloaded into a real robot, it works right out of the box. To activate that, we are creating different vertical solutions, vertical stacks and products that speak the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously very complicated, large-format, often three-dimensional data; and in robotics,
it's combining both our graphics and simulation technologies, along with, you know, the AI training capabilities, in order to run in real time. Those are... >> Yeah, I mean, it's just so cutting edge, it's so relevant. I think one of the things you mentioned about the neural networks, specifically the graph neural networks: just to go back to the late two-thousands, you know, with unstructured data and object stores being created, a lot of people realized the value of that. Now you've got graph value, you've got the graph network effect, you've got all kinds of new patterns, and you guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and AI perspective? >> Yeah, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of amazon.com, you might have buyers, distributors, sellers, and all of them are buying or recommending or selling different products, and they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise more deeply across a supply chain or warehouse, or other buyers and sellers across the network. What's new right now is that those connections can be treated and trained like a neural network: understanding the relationship, how strong that connection is between that buyer and seller or that distributor and supplier, and then building up a network that can figure out and understand patterns across them. For example, what products I may like:
because I have this connection in my graph, what other products may meet those requirements; or also identifying things like fraud, when buying patterns don't match what a graph neural network says would be the typical kind of graph connectivity, the different weights and connections captured by the frequency of how I buy things, or how I rate them or give them stars. As use cases go, this application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application: applying AI to optimizing business, to reducing fraud, and to letting us, you know, get access to the products we want, having our recommendations be things that excite us and make us want to buy. >> Great setup for the real conversation that's going on here at re:Invent, which is that new kinds of workloads are changing the game. People are refactoring their business, not just replatforming, but actually using this to identify value, and cloud scale gives you the compute power to, you know, look at a node and an arc and actually compute on that. It's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever offer GPUs in the cloud. More recently we've announced two new instances: the G5 instance, which is based on the NVIDIA A10G GPU and supports the NVIDIA RTX technology, our rendering technology for real-time ray tracing in graphics and game streaming. It's their highest-performance graphics instance, and it allows those high-performance graphics applications to be directly hosted in the cloud.
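To make the graph neural network idea discussed above concrete, here is a minimal one-layer sketch in plain Python with NumPy. The tiny buyer/seller/distributor graph, the feature vectors, and the weights are all invented for illustration; a production system would use a framework like PyTorch with learned weights, not this toy.

```python
import numpy as np

def gnn_layer(adj, features, weights):
    """One message-passing step: each node averages its neighbors'
    features (including its own via a self-loop), then applies a
    linear map and a ReLU non-linearity."""
    # Add self-loops so each node keeps its own signal.
    a_hat = adj + np.eye(adj.shape[0])
    # Row-normalize: average over neighbors rather than summing.
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy graph: node 0 = buyer, 1 = seller, 2 = distributor.
# A 1 marks a relationship (a purchase, a recommendation, and so on).
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.array([[1.0, 0.0],   # each node starts with a 2-dim feature
                     [0.0, 1.0],
                     [1.0, 1.0]])
weights = np.eye(2)                # identity "learned" weights for the demo

embeddings = gnn_layer(adj, features, weights)
print(embeddings.shape)  # (3, 2): one embedding per node
```

After one layer, each node's embedding already mixes in its neighbors' signals, which is exactly the "how strong is that connection" intuition from the interview; stacking layers lets information flow further across the supply chain.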
And of course it runs everything else as well, including our AI technology; it runs all of our AI stacks. We also announced with AWS the G5g instance. This is exciting because it's the first Graviton, Arm-based, processor connected to a GPU and available in the cloud. The focus here is Android gaming and machine learning inference, and we're excited to see the advancements that Amazon and AWS are making with Arm in the cloud. We're glad to be part of that journey. >> Well, congratulations. I remember watching my interviews with James Hamilton from AWS in 2013 and 2014; he was teasing this out, that they were going to get in there and build their own silicon, take that latency down, and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces and the new servers, the new technology you guys are doing, you're enabling applications. What do you see this enabling? New speed, more performance, but also new capabilities, so that new workloads can be realized. What would you say to folks who ask that question? >> Well, first off, I think Arm is here to stay, and you can see the growth and explosion of Arm, led of course by Graviton2 and many others. By bringing all of NVIDIA's rendering, graphics, machine learning, and AI technologies to Arm, we can help bring that innovation forward; Arm allows that open innovation, because it's an open architecture, to the entire ecosystem, and we can help bring it forward to the state of the art in AI, machine learning, and graphics. All of the software that we release is supported both on x86 and on Arm equally, including all of our AI stacks. Most notably for inference, the deployment of AI models, we have the NVIDIA Triton Inference Server.
This is our inference serving software: after you've trained your model, you want to deploy it at scale on any CPU or GPU instance, and we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations all the time, features like dynamic batching. It supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. So we're activating the Arm ecosystem, as well as bringing all those new AI use cases and all those different performance levels, with our partnership with AWS and all the different clouds. >> And you're making it really easy for people to use the technology, which brings up the next question I want to ask you. A lot of people are jumping into this in a big way; they're adopting AI, or they're moving from prototype to production. There are always some gaps, whether knowledge gaps, skills gaps, or whatever, but people are accelerating into AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible, for people to move faster through the process? >> Yeah, it's one of the biggest challenges. With all the promise of AI, all the publications and research coming out, how can you make it more accessible and easier to use by more people, rather than just AI researchers? That's obviously a very challenging and interesting field, but not one that's directly in the business. NVIDIA takes a full-stack approach to AI. As we discover or see these AI technologies become available, we produce SDKs to help activate them and connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming to design, to life sciences, to earth sciences.
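The dynamic batching feature mentioned above can be illustrated in miniature: requests queue up, and the server drains them in groups so the model amortizes its per-invocation overhead across many requests. This is a toy simulation of the idea, not Triton's actual scheduler, which also bounds how long any request may wait, not just the batch size.

```python
from collections import deque

def dynamic_batcher(requests, max_batch=4):
    """Toy dynamic batching: drain the queue in groups of up to
    max_batch requests, so the model runs once per batch instead of
    once per request. Real servers also enforce a max queue delay;
    this sketch only bounds batch size."""
    queue = deque(requests)
    batches = []
    while queue:
        # Take up to max_batch requests off the queue as one batch.
        batch = [queue.popleft() for _ in range(min(max_batch, len(queue)))]
        batches.append(batch)
    return batches

# Ten requests arrive; with max_batch=4 the model runs 3 times, not 10.
batches = dynamic_batcher(range(10), max_batch=4)
print([len(b) for b in batches])  # [4, 4, 2]
```

The trade-off the real feature tunes is between latency (small batches, low wait) and throughput (large batches, better hardware utilization).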
We even have stuff to help simulate quantum computing, and of course all the work we're doing with AI, 5G, and robotics. We actually just introduced about 65 new updates, just this past month, across all those SDKs. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of human knowledge, these language models trained on literally the content of the internet, to provide general-purpose or open-domain chatbots, so the customer is going to have a new kind of experience with a computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, each and every time I do an interview with NVIDIA or talk about NVIDIA, the first thing my kids and their friends say is, get me a good graphics card; they want the best thing in their rig. Obviously the gaming market is hot and you're known for that, but there's a huge software team behind NVIDIA. This is well known; your CEO is always talking about it in his keynotes: you're in the software business. And then you do have hardware, you're integrating with Graviton and other things, but it's software practices; this is all about software. Could you share more about the NVIDIA culture, the software culture, specifically around that scale? I mean, you hit every use case. What's the software culture there at NVIDIA? >> It is actually bigger; we have more software people than hardware people. People don't often realize this.
In fact, it just starts with the chip. Obviously, building great silicon is necessary to provide that level of innovation, but it's expanded dramatically from there: not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves to help build out this infrastructure; we consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then all that software we build on top, we make available, as I mentioned before, as containers on our NGC container registry, which is accessible to everybody, to connect to those vertical markets. So beyond just opening up the hardware and letting the ecosystem develop on it, which they can with the low-level programmatic stacks we provide with CUDA, we believe those vertical stacks are the way we can help accelerate and advance AI, and that's why we make them available as well. >> Yeah, and with software it's so much easier. I want to get that plug in; I think it's worth noting that you guys are heavy, hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there: what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about and looking at? >> Yeah. For training, it's all about time to solution. It's not the hardware that's the real cost; it's the opportunity that AI can provide your business, and the productivity of those data scientists who are developing it, who are not easy to come by.
So what we hear from customers is they need a fast time to solution: to let people prototype very quickly, train a model to convergence, get into production quickly, and of course move on to the next one or continue refining. So for training, it's time to solution. For inference, it's about your ability to deploy at scale. Often people have real-time requirements: they want to run within a certain amount of latency, a certain amount of time. And typically most companies don't have a single AI model; they have a collection of them that they want to run for a single service, or across multiple services. That's where you can aggregate some of your infrastructure: leveraging the Triton Inference Server I mentioned before, you can actually run multiple models on a single GPU, saving costs and optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that your customers have a good interaction with the AI. >> Awesome. Let's get into the customer examples. You guys obviously have great customers. Can you share some of the use cases, examples with notable customers? >> Yeah, one great part about working at NVIDIA as a technology company is you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a cloud content-creation platform. In the energy field, Siemens Energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing downtime and optimizing onsite inspection activities, which is saving a lot of money for the energy industry.
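The consolidation point made above, several models sharing one GPU to save costs, can be sketched as a simple first-fit packing over GPU memory. The model names, sizes, and the 16 GB capacity below are invented for illustration; Triton's actual placement and concurrency controls are far more sophisticated than this sketch.

```python
def pack_models(models, gpu_mem_gb):
    """First-fit packing: place each model on the first GPU with room,
    opening a new GPU only when none fits. Returns a list of GPUs,
    each a list of the model names it hosts."""
    gpus, free = [], []
    for name, size_gb in models:
        for i, slack in enumerate(free):
            if size_gb <= slack:
                gpus[i].append(name)
                free[i] -= size_gb
                break
        else:
            # No existing GPU has room: provision another one.
            gpus.append([name])
            free.append(gpu_mem_gb - size_gb)
    return gpus

# Five models, 16 GB per GPU: consolidation needs 2 GPUs instead of 5.
models = [("fraud", 6), ("recsys", 8), ("asr", 5), ("ocr", 4), ("nlp", 7)]
placement = pack_models(models, gpu_mem_gb=16)
print(placement)  # [['fraud', 'recsys'], ['asr', 'ocr', 'nlp']]
```

In practice the bin you pack against is not only memory but also the latency budget each model must still meet, which is why real serving systems measure per-model throughput before co-locating them.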
We have also worked with Oxford University, which actually has over 20 million artifacts and specimens in collections across its gardens, museums, and libraries. They're using NVIDIA GPUs on Amazon to do enhanced image recognition to classify all of these, which would take literally years going through each artifact manually; using AI, they can quickly catalog all of them and connect them with their users. There are great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology. >> Ian, thank you so much for coming on theCUBE, I really appreciate it; a lot of great content there. We could probably go another hour with all the great stuff going on at NVIDIA. Any closing remarks you want to share as we wrap this last minute up? >> Really, what NVIDIA is about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation. AWS was one of the first with us in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities: integrations with SageMaker, EKS, and ECS, and the new instances with G5 and G5g. We're very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of accelerated computing. How can you not love that title? We want more power, more computing; no one's going to complain about more computing. Thanks for coming on. >> Thank you, appreciate it. >> I'm John Furrier, host of theCUBE. You're watching theCUBE's coverage of AWS re:Invent 2021. Thanks for watching.
Kaustubh Das & Kevin Egan, Cisco | Cisco Live EU 2019
>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to Barcelona, everybody. This is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host Stu Miniman; John Furrier has been here all week. Day three coverage of Cisco Live! Barcelona, Cisco Live EMEA, and R; we learned the other day, add the R for Russia. Kaustubh Das is back. KD is the vice president of product management for data center at Cisco, and he's joined by Kevin Egan, who is the director of the computer systems group for data center, also from Cisco. Gents, good to see you, welcome to theCUBE. >> Thank you. >> Great to be here. >> Thanks for having us. >> KD, data center was a real focus of the announcements this week, and the data center is exploding to a lot of different places. What's going on in the group? >> It's been a terrific week, you're right. Data center was at the core of a lot of the announcements this week, and we kicked off the keynote with this concept that the data center is no longer centered: the data moves to the edges, and the data center is moving to the edges. We had a lot of announcements around HyperFlex, HyperFlex Anywhere, this product that we've been innovating on like monsters; within a very short time it's gone from a brand-new product on the market to a Magic Quadrant leader with Gartner, really doing a lot of industry firsts. That's been a big focus. We had a lot of announcements with our technology partners, because we not only innovate within Cisco, but we work with Pure and NetApp and Citrix and Intel Optane and Nvidia to bring products to market that combine the richness of their innovation and ours. The other big focus has been programmability: as the world becomes much more programmable, with the focus on devops and automation, it's been around Intersight, programmability, and taking that to the next level. >> Interesting.
>> So of course we always talk about shipping five megabytes of code as opposed to shipping petabytes through a straw into the god box. But so, Kevin, programmability is a key theme here, and of course we're in the devnet zone. We had Susie Wee on yesterday, and she was talking about the evolution of Cisco infrastructure and how early on you guys made the decision: let's make all this stuff programmable. That was sort of a game changer. Your thoughts? >> Yeah, it's been amazing, the growth of Cisco devnet. We've got half a million developers now developing against our SDKs and devops tooling across the Cisco platforms. We've got thousands of Cisco resources working on that, producing those libraries, producing those sample sets of code, and contributing to the communities. And today our customers are using it in a way they never really have. Previously it was sort of a fix because vendor tools weren't getting it done; now they're using these automation tools to do everyday tasks at scale, to reduce the complexity and the burden for their teams, and of course to get that repeatability, security, and compliance. The explosion has been amazing. >> Yeah, the simplicity reminds me of the earliest days of UCS. UCS was built for that wave of virtualization, and as KD has talked with us this week about some of the partnerships you've built, UCS really dominated the wave of converged infrastructure. But here now we talk about AI with some of your partners, you talk about programmability; it's like, that's not the Cisco UCS I remember launching. So maybe give us the updates, specifically what was announced this week, and where the platform has gone in more recent days. >> So I can start, maybe. >> Yeah, absolutely. >> UCS came up with this concept that everything needs to be programmable, everything needs to be an API.
And maybe we were a little ahead of our time; we conceived of this in 2007 and got the product out in 2009, and really from the very genesis of the UCS program it's been a programmable platform where everything is an API. The UI makes calls to the API, our SDKs make calls to the APIs. That's been the core platform, and in some ways it feels like the industry is coming to where we thought it would come, a little bit later. This whole concept of infrastructure as code, software-defined, whatever we want to call it, was core and germane to the product itself. What we've done lately is take that policy, which encapsulated the server's personality out into the fabric for scalability, and bring it now into the cloud. What that does is make the velocity of innovation even higher; the ability to create new and unique use cases becomes higher, the ability to conceive them becomes higher. And all of that, coupled with where IT is going, which is much more devops, much more automation, means those forces are coupling together to create some really unique use cases. >> You said, and you gestured, take it into the cloud, which is interesting. What does that mean, taking it into the cloud? >> So let's step back a little bit. What we started off with was: listen, a server's a box; we need to abstract the personality of the server out of that box into policy, and put it in the fabric. That allows us to really scale, and to give the box different personalities depending on the workload. What we've done since is launch a product called Intersight. Intersight takes that policy and makes it a SaaS service, management as a service if you want to call it that.
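In spirit, that stateless, policy-driven model looks like the following sketch: the "personality" lives in a profile object, and any blank server can be reconciled to it. This is a toy illustration of the concept only, not Cisco's actual UCS object model or API (the real thing is exposed through tools such as the UCS Manager Python SDK); all the class and field names here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Desired state: identity and settings live in policy, not in the box."""
    name: str
    boot_order: list
    vlan: int
    firmware: str

@dataclass
class Blade:
    """Physical server: a blank slate until a profile is associated."""
    slot: int
    applied: dict = field(default_factory=dict)

def associate(profile: ServiceProfile, blade: Blade) -> Blade:
    """Reconcile the blade toward the profile's desired state."""
    blade.applied = {
        "boot_order": profile.boot_order,
        "vlan": profile.vlan,
        "firmware": profile.firmware,
    }
    return blade

web = ServiceProfile("web-tier", ["san", "lan"], vlan=110, firmware="4.1")
# A blade fails? Re-associate the same profile to a spare and the spare
# takes on the exact same personality.
spare = associate(web, Blade(slot=3))
print(spare.applied["vlan"])  # 110
```

The payoff of keeping state in the profile rather than the hardware is exactly what the interview describes: scale (one policy, many boxes) and portability (the same personality can land on whatever hardware is available).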
So now, as data moves everywhere, as data centers move everywhere, as our applications are no longer monolithic but combinations of smaller applications communicating across data centers, it allows us to have a centralized dashboard for our infrastructure that we can access from anywhere, because it's in the cloud. And because it's in the cloud, it can get that innovation wheel turning much faster. It's been game changing, and obviously there are other things that can happen once you do that: you can have proactive guidance coming down from the cloud, you can have golden images come down from the cloud, you can do workload-specific settings. It opens up a lot of new areas. >> Analytics, right? >> Analytics. >> Machine intelligence. >> So we've got the takeover happening in the devnet zone right now, focused on the data center; everybody's got t-shirts, and I think they say HyperFlex on them. Big announcement this week about HyperFlex Anywhere. Kevin, you know, when people heard HCI, they often pictured a box, or a group of boxes in a rack, and as an analyst I was poking at it: well, we virtualized a lot of the stuff and put it in a new form factor. That's great to modernize the platform, but how do we make it cloud native, how does it fit into a hybrid and multi-cloud world? It feels like we're reaching that point now. So help us connect the dots as to how what HCI was fits into this hybrid and multi-cloud world today. >> Absolutely. HCI, when it came out, was an alternative to SAN; it was an alternative, and it was touting simplicity, touting, you know, grow with your applications.
But really now, with the multi-cloud instances our customers are looking at, they have to have a way to deploy those, connect to them remotely, manage them, monitor them, and actually connect that back to the core, so that you can take advantage of the analytics running at the core and make real-time recommendations, real-time adjustments for services. That connectivity is really what we mean by HyperFlex Anywhere. It's the evolution of how you deploy, how you manage, and then of course that day two, day five, day one hundred, where you're actually making that experience simple for the customers. >> Help us understand exactly: do I just have the backup image in a public cloud, or do I actually have similar software stacks? What's the expanse? >> Let me try to unpack that a little bit. I think there are three different vectors. First, as we modernize and as our customers modernize, they're looking for a much more cloud-like, limber, elastic platform. That's the first vector; that's what HCI has done, that's what we've done. And we've actually done it on steroids, because we co-design hardware and software much like the public cloud players do, but we control it and can give it to our enterprise customers in enterprise-grade, resilient infrastructure. The second piece is what our customers, and really their developers, want to do: create in one place and deploy in another. Create on the private cloud, deploy in the public cloud; create in the public cloud, deploy in the private cloud; or actually have an application that bridges the two. So you need a homogeneous development environment, and a lot of this is around the container frameworks, whether on the public cloud or the private cloud.
That's key, and that's what we've done with HyperFlex and the integrations we've got with our container platform, with OpenShift, with CloudCenter, which was again a big announcement this week. That second vector is being able to port applications: develop in one place, deploy any place. And the third piece is what we've been talking about all through this segment, which is the ability to have the cloud drive your infrastructure. Everything's connected, everything's analyzed in the cloud; there's telemetry, proactive guidance, a common dashboard, centralized monitoring, and the ability to deploy, like we demonstrated in the keynote, multiple sites spread out across the world from a central location. I think that's game changing. >> I'd like to get your take on differentiation. Obviously you guys are biased: Cisco's different, it's better. But I want to hear why. So relative to other infrastructure players, are you, in your words, however you want to describe it, more cloud-like, more programmable? Where's the differentiation? >> Go ahead, and I'll add on after. >> Yeah, sure. So basically we started with the foundation of UCS, and that foundation of virtualized compute, bare metal compute, and of course now hyperconverged, is what allows us to do these things, allows us to do HyperFlex Anywhere, allows us to have that cloud-based model, because we built that infrastructure around the API from day one. When we started this programmatic infrastructure, it was stateless, it was desired-state config, and when we talked to customers, they didn't know what we were talking about; they had no idea when this came out. But that's the foundation that allows us to drive the API integrations to the app layers, which is what KD was talking about, and then of course from there to our multi-cloud integrations. That's really the foundation we laid early on.
And that's why the whole UCS platform really enables this cloud integration. >> Yeah, the way I look at it is nobody else has a fully API-driven infrastructure. Everything's an API for us. We don't expose APIs after the fact; it's an API-first infrastructure, and everything is built around that. Whether it's our SDKs, our integrations with Puppet and Ansible and those kinds of tool sets, or our integration with other tool sets that people use, it's all driven through that. The second thing that's different is we have an emulator, so we can allow our customers to really time-travel through the whole process of deployment. Our customers can deploy the infrastructure before the infrastructure hits the loading dock, because they can download the UCS emulator. They can actually configure, deploy, and build the whole policy on our management platform, test it out, and do the what-ifs on the emulator. When the equipment shows up, we're ready to go, we're in business. Nobody else can do that. And the final thing, aside from all of the cloud-connected pieces I've talked about, is the breadth of Cisco's portfolio: all of our networking assets, our SD-WAN assets, our security assets, our collaboration assets, our cloud assets. That breadth lets us implement use cases for our customers that are just impossible for anybody else to do. >> We've heard lots of proof points here in the DevNet zone, specifically around programmability and automation. I've talked to some service providers here at the show, and we've also heard about the journey that enterprise customers are going through to understand that space, and they learn at places like this. Kevin, I'm sure you're talking to a lot of customers here. Maybe you have examples of who's doing this well, and what people can learn from customers like that. >> Yeah, I mean, it's amazing, right?
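The emulator "time travel" workflow described above, building and validating desired-state policies before the hardware arrives and then replaying them on real gear, can be sketched conceptually. This is a toy model only: the `Device` class and policy fields are invented for illustration and have nothing to do with the real UCS emulator or management APIs.

```python
# Conceptual sketch of desired-state configuration against an emulator:
# build and what-if a policy before hardware exists, then replay the
# same policy on real gear. Not the actual UCS emulator API.

class Device:
    """A device (emulated or real) that accepts declarative policies."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def apply(self, policy):
        # Desired-state semantics: the device converges to whatever
        # the policy declares; applying the same policy is idempotent.
        self.state.update(policy)

POLICY = {"bios": "perf-tuned", "boot_order": ["disk", "pxe"], "vlan": 42}

# Step 1: test the policy on an emulated device before the gear ships.
emulator = Device("emulated-blade")
emulator.apply(POLICY)
assert emulator.state["vlan"] == 42   # what-if checks pass up front

# Step 2: when hardware hits the loading dock, replay the same policy.
real = Device("rack-1-blade-3")
real.apply(POLICY)
assert real.state == emulator.state   # real gear converges to same state
```

The design point being illustrated is that configuration lives in the policy, not in the box, so validating against an emulator and configuring real hardware are the same operation.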
In DevNet alone, we've got sessions on UCS with the Python SDKs, UCS with PowerTool, and how to integrate with Ansible. These are just becoming common household terms for our customers. As you go up to enterprise customers and service provider customers, they're using these tools day to day to do the automation on top, to really deploy and manage their apps. And it's exciting; we have customers from all segments of every industry, and they really use these programmatic interfaces. KD gave the simple example of the platform emulator, and you don't realize how powerful that is: you can take that same exact state machine that's in your UCS, put it on your laptop, set up all your policies, and then when that gear hits the dock, you're up in hours. Literally, we have very large e-commerce sites that do this: thousands of servers hit the dock, and in a matter of hours they've applied those policies and they're up and running. We've got Python, Ruby, PowerTool, software developer kits, we've got DevOps tools that sit on those, Ansible, Puppet, Chef, and these are just the automation options. If you want to do it yourself, we've got the world-class API; nobody else gives you that programmatic API. That's how we built our foundation. If you want Cisco to call those APIs, we have Intersight, and we'll make those calls for you. If you just want to do some simple scripting, there's PowerTool. You can automate certain processes; it doesn't have to be the whole end to end. It's basically your choice, driven by what your applications are demanding and what your customers are demanding. >> That's a strong story, one of breadth and depth. We're out of time, but KD, I wonder if you could put a bow on Cisco Live! Europe this year. Big takeaways from your point of view?
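The "thousands of servers up in hours" story reduces to a simple idea: once a policy exists as code, bringing up N servers is a loop, not N manual configurations. The sketch below illustrates only that shape; the `provision` function and profile fields are hypothetical, and real automation would go through the UCS SDKs, PowerTool, or the Ansible modules mentioned above rather than this toy code.

```python
# Toy illustration of bulk policy application: a policy authored once
# (for example, on the emulator) is pushed programmatically to every
# server as it comes online. Names and fields are hypothetical.

SERVICE_PROFILE = {"firmware": "4.1", "vnic_template": "prod-a"}

def provision(server_id, profile):
    """Pretend to push one service profile to one server."""
    return {"server": server_id, **profile, "status": "configured"}

# A thousand blades hitting the loading dock becomes one comprehension.
servers = [f"blade-{i:04d}" for i in range(1000)]
results = [provision(s, SERVICE_PROFILE) for s in servers]

configured = sum(1 for r in results if r["status"] == "configured")
assert configured == len(servers)  # every server got the same policy
```

The same loop structure holds whether the calls go out via an SDK script you write yourself or via a tool like Ansible driving the API for you; the choice of tooling changes who makes the calls, not the declarative model underneath.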
>> Listen, we've been innovating like monsters, and it's been such a terrific week for us to come here, to really touch and feel and listen to our customers, and see the delight on their faces as we show them what we've been doing. And this part of the show, day three, the DevNet takeover, this is where it gets really, really real, because now we get to go down into the depths of looking at those APIs, looking at those use cases, and getting people to play around with them. So it's just been terrific. I love it. >> I love it too; we're the interview monsters this week. So guys, thanks very much for coming on theCUBE. >> Thanks for having us. >> You're welcome. All right, keep it right there, everybody. We'll be back with our next guest right after this short break. You're watching theCUBE from Cisco Live! in Barcelona. Be right back. (upbeat electronic outro)