Image Title

Search Results for IA:

Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>all right, We're five minutes after the hour. That's all aboard. Who's coming aboard? Welcome everyone to the tutorial track for our launchpad of them. So for the next couple of hours, we've got a SYRIZA videos and experts on hand to answer questions about our new product, Doctor Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirant us on. And >>I'm Bruce Basil Matthews. I'm the Western regional Solutions architect for Moran Tissue esa and welcome to everyone to this lovely launchpad oven event. >>We're lucky to have you with us proof. At least somebody on the call knows something about your enterprise Computer club. Um, speaking of people that know about Dr Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and disgusting problem. So that's us, I guess, for Dr Enterprise Container Cloud, this is Mirant asses brand new product for bootstrapping Doctor Enterprise Kubernetes clusters at scale Anything. The airport Abu's? >>No, just that I think that we're trying Thio. Uh, let's see. Hold on. I think that we're trying Teoh give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing is to provide some, you know, many training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see. The SYRIZA videos we have today. We're going to focus on your first steps with Dr Enterprise Container Cloud from installing it to bootstrapping your regional child clusters so that by the end of the tutorial content today, you're gonna be prepared to spin up your first documentary prize clusters using documented prize container class. So just a little bit of logistics for the session. We're going to run through these tutorials twice. We're gonna do one run through starting seven minutes ago up until I guess it will be ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, ten. Fifteen Pacific time. We're gonna do the whole thing over again. So if you want to see the videos twice, you got public friends and colleagues that, you know you wanna pull in for a second chance to see this stuff, we're gonna do it all. All twice. Yeah, this session. Any any logistics I should add, Bruce that No, >>I think that's that's pretty much what we had to nail down here. But let's zoom dash into those, uh, feature films. >>Let's do Edmonds. And like I said, don't be shy. Feel free to ask questions in the chat or engineers and boosting myself are standing by to answer your questions. So let me just tee up the first video here and walk their cost. Yeah. Mhm. Yes. Sorry. And here we go. So our first video here is gonna be about installing the Doctor Enterprise Container Club Management cluster. So I like to think of the management cluster as like your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use is like, Come on it as clusters downstream. So the management costs was always our first step. Let's jump in there >>now. We have to give this brief little pause >>with no good day video. Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Yes. Okay, that's the key. Secret X key. Right on. Let's kick it off. Yeah, So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the west side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Okay. Local clusters boat just waiting for the various objects to get ready. Standard communities objects here Okay, so we speed up this process a little bit just for demonstration purposes. Yeah. There we go. So first note is being built the best in host. Just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for a W s to create the instance. Okay. Yes. Here, beauty there. Okay. Mhm. Okay. Yeah, yeah. Okay. On there. We got question. Host has been built on three instances for the management clusters have now been created. We're going through the process of preparing. Those nodes were now copying everything over. See that? The scaling up of controllers in the big Strap cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Yeah. Yeah, just waiting for key. Clark. Uh huh. Start to finish up. Yeah. No. What? Now we're shutting down control this on the local bootstrap node on preparing our I. D. C. Configuration. Fourth indication, soon as this is completed. Last phase will be to deploy stack light into the new cluster the last time Monitoring tool set way Go stack like to plan It has started. Mhm coming to the end of the deployment Mountain. Yeah, America. Final phase of the deployment. Onda, We are done. Okay, You'll see. At the end they're providing us the details of you. I log in so there's a keeper clogging. You can modify that initial default password is part of the configuration set up with one documentation way. Go Councils up way can log in. Yeah, yeah, thank you very much for watching. >>Excellent. So in that video are wonderful field CTO Shauna Vera bootstrapped up management costume for Dr Enterprise Container Cloud Bruce, where exactly does that leave us? So now we've got this management costume installed like what's next? >>So primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Uh, comes into play the next piece of what we're going to show, I think with Sean O'Mara doing this is the child cluster capability, which allows you to then deploy your application services on the local cluster. That's being managed by the ah ah management cluster that we just created with the bootstrap. >>Right? So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those or what we're gonna use for workings. >>Exactly. Yeah. And I just wanted to point out, since Sean O'Mara isn't around, toe, actually answer questions. I could listen to that guy. Read the phone book, and it would be interesting, but anyway, you can tell him I said that >>he's watching right now, Crusoe. Good. Um, cool. So and just to make sure I understood what Sean was describing their that bootstrap er knows that you, like, ran document fresh pretender Cloud from to begin with. That's actually creating a kind kubernetes deployment kubernetes and Docker deployment locally. That then hits the AWS a p i in this example that make those e c two instances, and it makes like a three manager kubernetes cluster there, and then it, like, copies itself over toe those communities managers. >>Yeah, and and that's sort of where the transition happens. You can actually see it. The output that when it says I'm pivoting, I'm pivoting from my local kind deployment of cluster AP, I toothy, uh, cluster, that's that's being created inside of AWS or, quite frankly, inside of open stack or inside of bare metal or inside of it. The targeting is, uh, abstracted. Yeah, but >>those air three environments that we're looking at right now, right? Us bare metal in open staff environments. So does that kind cluster on the bootstrap er go away afterwards. You don't need that afterwards. Yeah, that is just temporary. To get things bootstrapped, then you manage things from management cluster on aws in this example? >>Yeah. Yeah. The seed, uh, cloud that post the bootstrap is not required anymore. And there's no, uh, interplay between them after that. So that there's no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, be a temporary container that would bootstrap all the other containers. Go away. It's, uh, so sort of a similar, similar temporary transient bootstrapping model. Cool. Excellent. What will convict there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that. But other than that, that looked like heavily script herbal like there wasn't a ton of point and click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint, The config file that that's generated the template is fairly straightforward and targeted towards of a small medium or large, um, deployment. And by editing that single file and then gathering license file and all of the things that Sean went through, um, that that it makes it fairly easy to script >>this. And if I understood correctly as well that three manager footprint for your management cluster, that's the minimum, right. We always insist on high availability for this management cluster because boy do not wanna see oh, >>right, right. And you know, there's all kinds of persistent data that needs to be available, regardless of whether one of the notes goes down or not. So we're taking care of all of that for you behind the scenes without you having toe worry about it as a developer. >>No, I think there's that's a theme that I think will come back to throughout the rest of this tutorial session today is there's a lot of there's a lot of expertise baked him to Dr Enterprise Container Cloud in terms of implementing best practices for you like the defaulter, just the best practices of how you should be managing these clusters, Miss Seymour. Examples of that is the day goes on. Any interesting questions you want to call out from the chap who's >>well, there was. Yeah, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that then conduce oh, either the the regional cluster or a local child molester. The child clusters, in each case host the application services, >>right? So at this point, we've got, in some sense, like the simplest architectures for our documentary prize Container Cloud. We've got the management cluster, and we're gonna go straight with child cluster. In the next video, there's a more sophisticated architecture, which will also proper today that inserts another layer between those two regional clusters. If you need to manage regions like across a BS, reads across with these documents anything, >>yeah, that that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observation. I'll support systems a stack light and things like that for each one of clusters locally, as opposed to having to centralize thumb >>eso. It's a couple of good questions. In the chat here, someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docks, which I think Dale helpfully thank you. Dale provided links for that's all publicly available right now. So just head on in, head on into the docks like the Dale provided here. You can follow this example yourself. All you need is a Mirante license for this and your AWS credentials. There was a question from many a hear about deploying this toe azure. Not at G. Not at this time. >>Yeah, although that is coming. That's going to be in a very near term release. >>I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. Very bracing. Cool. Okay. Any other thoughts on this one does. >>No, just that the fact that we're running through these individual pieces of the steps Well, I'm sure help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step by staff. Um, it makes it fairly straightforward to try this yourselves. >>E strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of, like, where we are in the life cycle here creating this management cluster. Just stop me if I'm wrong. Who's creating this management cluster is like, you do that once, right? That's when your first setting up your doctor enterprise container cloud environment of system. What we're going to start seeing next is creating child clusters and this is what you're gonna be doing over and over and over again. When you need to create a cluster for this Deb team or, you know, this other team river it is that needs commodity. Doctor Enterprise clusters create these easy on half will. So this was once to set up Dr Enterprise Container Cloud Child clusters, which we're going to see next. We're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a doctor enterprise cluster for work clothes as a child cluster. Undocumented brands contain >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster and how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the U I so you can switch. Project Mary only has access to development. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Nan Yes, this H Keys Associate ID for Mary into her team on the cloud credentials that allow you to create access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences, Right? Let's now set up semester search keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name, we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, you go to the clusters tab. We hit the create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one release version five point seven is the current release Onda Attach. Mary's Key is necessary Key. We can then check the rest of the settings, confirming the provider Any kubernetes c r D r I p address information. We can change this. Should we wish to? We'll leave it default for now on. Then what components? A stack light I would like to deploy into my Custer. For this. I'm enabling stack light on logging on Aiken. Sit up the retention sizes Attention times on. Even at this stage, at any customer alerts for the watchdogs. E consider email alerting which I will need my smart host details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I commend side on the route device size. There we go, my three machines obviously creating. I now need to add some workers to this custom. So I go through the same process this time once again, just selecting worker. I'll just add to once again, the AM is extremely important. Will fail if we don't pick the right, Am I for a boon to machine in this case and the deployment has started. We can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here we've created the VPC. We've created the sub nets on We've created the Internet gateway. It's unnecessary made of us and we have no warnings of the stage. Yeah, this will then run for a while. We have one minute past waken click through. We can check the status of the machine bulls as individuals so we can check the machine info, details of the machines that we've assigned, right? Mhm Onda. See any events pertaining to the machine areas like this one on normal? Yeah. Just watch asked. The community's components are waiting for the machines to start. Go back to Custer's. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway on the stage. The machines have been built on assigned. I pick up the U. S. Thank you. Yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. Mhm. No speeding things up a little bit. This whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured. Mhm, mhm. And then we go. Cluster has been deployed. So once the classes deployed, we can now never get around our environment. Okay, Are cooking into configure cluster We could modify their cluster. We could get the end points for alert alert manager on See here The griffon occupying and Prometheus are still building in the background but the cluster is available on you would be able to put workloads on it the stretch to download the cube conflict so that I can put workloads on it. It's again three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster Mhm all right Now that the build is fully completed, we can check out cluster info on. We can see that Allow the satellite components have been built. All the storage is there, and we have access to the CPU. I So if we click into the cluster, we can access the UCP dashboard, right? Shit. Click the signing with Detroit button to use the SSO on. We give Mary's possible to use the name once again. Thing is, an unlicensed cluster way could license at this point. Or just skip it on. There. We have the UCP dashboard. You can see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon, a data just being automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Mhm. So we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah. General dashboard of Cuba navies cluster one of this is configurable. You can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster, all right to scale the cluster on to add a notice. A simple is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger disks and you'll see that worker has been added from the provisioning state on shortly. We will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workouts are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button, validating which release you would like to update to. In this case, the next available releases five point seven point one. Here I'm kicking the update by in the background We will coordinate. Drain each node slowly go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Girl, we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already on In a few minutes we'll see that there are great has been completed. There we go. Great. Done. Yeah. If you work loads of both using proper cloud native community standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our communities of workloads. He started playing or APs to that costume. So watching that video, the thing that jumped out to me at first Waas like the inputs that go into defining this workload cost of it. All right, so we have to make sure we were using on appropriate am I for that kind of defines the substrate about what we're gonna be deploying our cluster on top of. But there's very little requirements. A so far as I could tell on top of that, am I? Because Docker enterprise Container Cloud is gonna bootstrap all the components that you need. That s all we have is kind of kind of really simple bunch box that we were deploying these things on top of so one thing that didn't get dug into too much in the video. But it's just sort of implied. Bruce, maybe you can comment on this is that release that Shawn had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch. Wanted to upgrade part cluster. So you have really sharp eyes. You could see at the end there that when you're doing the release upgrade enlisted out a stack of components docker, engine, kubernetes, calico, aled, different bits and pieces that go into, uh, go into one of these commodity clusters that deploy. And so, as far as I can tell in that case, that's what we mean by a release. In this sense, right? It's the validated stack off container ization and orchestration components that you know we've tested out and make sure it works well, introduction environments. >>Yeah, and and And that's really the focus of our effort is to ensure that any CVS in any of the stack are taken care of that there is a fixes air documented and up streamed to the open stack community source community, um, and and that, you know, then we test for the scaling ability and the reliability in high availability configuration for the clusters themselves. The hosts of your containers. Right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, high. We've got an update for you, and it's fixes something that maybe you had asked us to fix. Uh, that all comes to you online as your managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on Yes. Please give me that update. Uh, not just the individual components, but again. It's that it's that validated stack, right? Not just, you know, component X, y and Z work. But they all work together effectively Scalable security, reliably cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old universal control plane. Doctor Enterprise. On top of that, Sean had the classic comment there, you know? Yeah. Yeah. You'll see a little warnings and errors or whatever. When you're setting up, UCP don't handle, right, Just let it do its job, and it will converge all its components, you know, after just just a minute or two. But we saw in that video, we sped things up a little bit there just we didn't wait for, you know, progress fighters to complete. But really, in real life, that whole process is that anything so spend up one of those one of those fosters so quite quite quick. >>Yeah, and and I think the the thoroughness with which it goes through its process and re tries and re tries, uh, as you know, and it was evident when we went through the initial ah video of the bootstrapping as well that the processes themselves are self healing, as they are going through. So they will try and retry and wait for the event to complete properly on. And once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things that don't don't do that. Just don't let it let it heal. Let take care of itself. And that's the beauty of these manage solutions is that they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers is they're bootstrapping themselves, reflect the expertise of the Mirant ISS crew that has been developing this content in these two is free for years and years now, over recognizing humanities. One cool thing there that I really appreciate it actually that it adds on top of Dr Enterprise is that automatic griffon a deployment as well. So, Dr Enterprises, I think everyone knows has had, like, some very high level of statistics baked into its dashboard for years and years now. But you know our customers always wanted a double click on that right to be able to go a little bit deeper. And Griffon are really addresses that it's built in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that you have access to so that you are allowed to add new alerts that then go out to touch slack and say hi, You need to watch your disk space on this machine or those kinds of things. Um, and and this is especially helpful for folks who you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set that they can easily say here, Can you watch these for us? And Miran tas can actually help do that with you, So >>yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without tons and tons of a long ah, long runway of learning about how to do that sort of thing. Just get out of the box right away. There was the other thing, actually, that you could sleep by really quickly if you weren't paying close attention. But Sean mentioned it on the video. And that was how When you use dark enterprise container cloud to scale your cluster, particularly pulling a worker out, it doesn't just like Territo worker down and forget about it. Right? Is using good communities best practices to cordon and drain the No. So you aren't gonna disrupt your workloads? You're going to just have a bunch of containers instantly. Excellent crash. You could really carefully manage the migration of workloads off that cluster has baked right in tow. How? How? Document? The brass container cloud is his handling cluster scale. >>Right? And And the kubernetes, uh, scaling methodology is is he adhered to with all of the proper techniques that ensure that it will tell you. Wait, you've got a container that actually needs three, uh, three, uh, instances of itself. And you don't want to take that out, because that node, it means you'll only be able to have to. And we can't do that. We can't allow that. >>Okay, Very cool. Further thoughts on this video. So should we go to the questions. >>Let's let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom regarding whether an a p I is available to do this. So in all these demos were clicking through this web. You I Yes, this is all a p. I driven. You could do all of this. You know, automate all this away is part of the CSC change. Absolutely. Um, that's kind of the point, right? We want you to be ableto spin up. Come on. I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. We could get >>it locally. Yeah, when, like to that point of aggregating things under a single pane of glass? That's one thing that again kind of sailed by in the demo really quickly. But you'll notice all your different clusters were on that same cluster. Your pain on your doctor Enterprise Container Cloud management. Uh, court. Right. So both your child clusters for running workload and your regional clusters for bootstrapping. Those child clusters were all listed in the same place there. So it's just one pane of glass to go look for, for all of your clusters, >>right? And, uh, this is kind of an important point. I was, I was realizing, as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything so that you only have managers, you don't have workers and that at the child cluster layer below the regional or the management cluster itself, that's where you have the worker nodes. And those are the ones that host the application services in that three tiered architecture that we've now defined >>and another, you know, detail for those that have sharp eyes. In that video, you'll notice when deploying a child clusters. There's not on Lee. A minimum of three managers for high availability management cluster. You must have at least two workers that's just required for workload failure. It's one of those down get out of work. They could potentially step in there, so your minimum foot point one of these child clusters is fine. Violence and scalable, obviously, from a >>That's right. >>Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want to my last video. There's another question here about, like where these clusters can live. So again, I know these examples are very aws heavy. Honestly, it's just easy to set up down on the other us. We could do things on bare metal and, uh, open stack departments on Prem. That's what all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the the, uh, child clusters, is the provision hers? Right? See you establish on AWS provision or you establish a bare metal provision or you establish a open stack provision. Or and eventually that list will include all of the other major players in the cloud arena. But you, by selecting the provision or within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking off all through a child clusters. Let's jump into our last video in the Siri's, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. So why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent. Provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and supports high performance workloads like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another opera visor. Lay it between so continue on the theme Why Communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>All right, so there we have it, deploying a cluster to bare metal. Much the same is how we did for AWS. I guess maybe the biggest different stepwise there is there is that registration face first, right? So rather than just using AWS financials toe magically create PM's in the cloud. You got a point out all your bare metal servers to Dr Enterprise between the cloud and they really come in, I guess three profiles, right? You got your manager profile with a profile storage profile which has been labeled as allocate. Um, crossword cluster has appropriate, >>right? And And I think that the you know, the key differentiator here is that you have more physical control over what, uh, attributes that love your cat, by the way, uh, where you have the different attributes of a server of physical server. So you can, uh, ensure that the SSD configuration on the storage nodes is gonna be taken advantage of in the best way the GP use on the worker nodes and and that the management layer is going to have sufficient horsepower to, um, spin up to to scale up the the environments, as required. One of the things I wanted to mention, though, um, if I could get this out without the choking much better. Um, is that Ah, hey, mentioned the load balancer and I wanted to make sure in defining the load balancer and the load balancer ranges. Um, that is for the top of the the cluster itself. That's the operations of the management, uh, layer integrating with your systems internally to be able to access the the Cube Can figs. I I p address the, uh, in a centralized way. It's not the load balancer that's working within the kubernetes cluster that you are deploying. That's still cube proxy or service mesh, or however you're intending to do it. So, um, it's kind of an interesting step that your initial step in building this, um and we typically use things like metal L B or in gen X or that kind of thing is to establish that before we deploy this bear mental cluster so that it can ride on top of that for the tips and things. >>Very cool. So any other thoughts on what we've seen so far today? Bruce, we've gone through all the different layers. Doctor enterprise container clouds in these videos from our management are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. Closing thoughts before we take just a very short break and run through these demos again. >>You know, I've been very exciting. Ah, doing the presentation with you. I'm really looking forward to doing it the second time, so that we because we've got a good rhythm going about this kind of thing. So I'm looking forward to doing that. But I think that the key elements of what we're trying to convey to the folks out there in the audience that I hope you've gotten out of it is that will that this is an easy enough process that if you follow the step by steps going through the documentation that's been put out in the chat, um, that you'll be able to give this a go yourself, Um, and you don't have to limit yourself toe having physical hardware on prim to try it. You could do it in a ws as we've shown you today. And if you've got some fancy use cases like, uh, you you need a Hadoop And and, uh, you know, cloud oriented ai stuff that providing a bare metal service helps you to get there very fast. So right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. >>One thing that people often ask about is the cluster footprint. And so that example you saw they were spitting up a three manager, um, managing cluster as mandatory, right? No single manager set up at all. We want high availability for doctrine Enterprise Container Cloud management. Like so again, just to make sure everyone sort of on board with the life cycle stage that we're at right now. That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Man. Yes, this H keys, Associate ID for Mary into her team on the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some ssh keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name. We copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, we got the clusters tab we had to create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one released version five point seven is the current release Onda Attach. Mary's Key is necessary key. We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. Just leave it alone and let Dr Enterprises self healing properties take care of all those very transient temporary glitches, resolve themselves and leave you with a functioning workload cluster within victims. >>And now, if you think about it that that video was not very long at all. And that's how long it would take you if someone came into you and said, Hey, can you spend up a kubernetes cluster for development development A. Over here, um, it literally would take you a few minutes to thio Accomplish that. And that was with a W s. Obviously, which is sort of, ah, transient resource in the cloud. But you could do exactly the same thing with resource is on Prem or resource is, um physical resource is and will be going through that later in the process. >>Yeah, absolutely one thing that is present in that demo, but that I like to highlight a little bit more because it just kind of glides by Is this notion of, ah, cluster release? So when Sean was creating that cluster, and also when when he was upgrading that cluster, he had to choose a release. What does that didn't really explain? What does that mean? Well, in Dr Enterprise Container Cloud, we have released numbers that capture the entire staff of container ization tools that will be deploying to that workload costume. So that's your version of kubernetes sed cor DNs calico. Doctor Engineer. All the different bits and pieces that not only work independently but are validated toe work together as a staff appropriate for production, humanities, adopted enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale. Test it for CVS, test it for all of the various things that would, you know, result in issues with you running the application services. And I've got to tell you from having, you know, managed kubernetes deployments and things like that that if you're the one doing it yourself, it can get rather messy. Eso This makes it easy. >>Bruce, you were staying a second ago. They I'll take you at least fifteen minutes to install your release. Custer. Well, sure, but what would all the other bits and pieces you need toe? Not just It's not just about pressing the button to install it, right? It's making the right decision. About what components work? Well, our best tested toe be successful working together has a staff? Absolutely. We this release mechanism and Dr Enterprise Container Cloud. Let's just kind of package up that expert knowledge and make it available in a really straightforward, fashionable species. Uh, pre Confederate release numbers and Bruce is you're pointing out earlier. He's got delivered to us is updates kind of transparent period. When when? When Sean wanted toe update that cluster, he created little update. Custer Button appeared when an update was available. All you gotta do is click. It tells you what Here's your new stack of communities components. It goes ahead. And the straps those components for you? >>Yeah, it actually even displays at the top of the screen. Ah, little header That says you've got an update available. Do you want me to apply? It s o >>Absolutely. Another couple of cool things. I think that are easy to miss in that demo was I really like the on board Bafana that comes along with this stack. So we've been Prometheus Metrics and Dr Enterprise for years and years now. They're very high level. Maybe in in previous versions of Dr Enterprise having those detailed dashboards that Ravana provides, I think that's a great value out there. People always wanted to be ableto zoom in a little bit on that, uh, on those cluster metrics, you're gonna provides them out of the box for us. Yeah, >>that was Ah, really, uh, you know, the joining of the Miranda's and Dr teams together actually spawned us to be able to take the best of what Morantes had in the open stack environment for monitoring and logging and alerting and to do that integration in in a very short period of time so that now we've got it straight across the board for both the kubernetes world and the open stack world. Using the same tool sets >>warm. One other thing I wanna point out about that demo that I think there was some questions about our last go around was that demo was all about creating a managed workplace cluster. So the doctor enterprise Container Cloud managers were using those aws credentials provisioned it toe actually create new e c two instances installed Docker engine stalled. Doctor Enterprise. Remember all that stuff on top of those fresh new VM created and managed by Dr Enterprise contain the cloud. Nothing unique about that. AWS deployments do that on open staff doing on Parramatta stuff as well. Um, there's another flavor here, though in a way to do this for all of our long time doctor Enterprise customers that have been running Doctor Enterprise for years and years. Now, if you got existing UCP points existing doctor enterprise deployments, you plug those in to Dr Enterprise Container Cloud, uh, and use darker enterprise between the cloud to manage those pre existing Oh, working clusters. You don't always have to be strapping straight from Dr Enterprises. Plug in external clusters is bad. >>Yep, the the Cube config elements of the UCP environment. The bundling capability actually gives us a very straightforward methodology. And there's instructions on our website for exactly how thio, uh, bring in import and you see p cluster. Um so it it makes very convenient for our existing customers to take advantage of this new release. >>Absolutely cool. More thoughts on this wonders if we jump onto the next video. >>I think we should move press on >>time marches on here. So let's Let's carry on. So just to recap where we are right now, first video, we create a management cluster. That's what we're gonna use to create All our downstream were closed clusters, which is what we did in this video. Let's maybe the simplest architectures, because that's doing everything in one region on AWS pretty common use case because we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and work cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, our regional management cluster that exists per region that we're going to manage those regional managers will be than the ones responsible for spending part clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectural overview, how to set up the management environment, prepare for the deployment deployment overview, and then just to prove it, to play a regional child cluster. So looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? Yeah, they were having workers to the same. It's a process. Just making sure that the worker label host like you are so yes, on Duin wait for the machines to deploy. Let's go through the process of putting the operating system on the notes, validating that operating system. Deploying Docker enterprise on making sure that the cluster is up and running ready to go. Okay, let's review the bold events. We can see the machine info now populated with more information about the specifics of things like storage. Yeah, of course. Details of a cluster, etcetera. Yeah, Yeah. Okay. Well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build, and that brings us to the end of this particular do my as you can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>Here we have a child cluster on bare metal for folks that wanted to play the stuff on Prem. >>It's ah been an interesting journey taken from the mothership as we started out building ah management cluster and then populating it with a child cluster and then finally creating a regional cluster to spread the geographically the management of our clusters and finally to provide a platform for supporting, you know, ai needs and and big Data needs, uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal thio in containers were pretty exciting. >>Yeah, absolutely. So with this Doctor Enterprise container cloud platform. Hopefully this commoditized scooping clusters, doctor enterprise clusters that could be spun up and use quickly taking provisioning times. You know, from however many months to get new clusters spun up for our teams. Two minutes, right. We saw those clusters gets better. Just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Dr Enterprise Container Cloud. Of course, there's many many more things to discuss about this and all of Miranda's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, police see us a training don Miranda's dot com, where we can offer you workshops and a number of different formats on our entire line of products and hands on interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad of that >>thank you all enjoy.

Published Date : Sep 17 2020

SUMMARY :

So for the next couple of hours, I'm the Western regional Solutions architect for Moran At least somebody on the call knows something about your enterprise Computer club. And that's really the key to this thing is to provide some, you know, many training clusters so that by the end of the tutorial content today, I think that's that's pretty much what we had to nail down here. So the management costs was always We have to give this brief little pause of the management cluster in the first regional clusters to support AWS deployments. So in that video are wonderful field CTO Shauna Vera bootstrapped So primarily the foundation for being able to deploy So this cluster isn't yet for workloads. Read the phone book, So and just to make sure I understood The output that when it says I'm pivoting, I'm pivoting from on the bootstrap er go away afterwards. So that there's no dependencies on any of the clouds that get created thereafter. Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, The config file that that's generated the template is fairly straightforward We always insist on high availability for this management cluster the scenes without you having toe worry about it as a developer. Examples of that is the day goes on. either the the regional cluster or a We've got the management cluster, and we're gonna go straight with child cluster. as opposed to having to centralize thumb So just head on in, head on into the docks like the Dale provided here. That's going to be in a very near term I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. No, just that the fact that we're running through these individual So let's go to that video and see just how We can check the status of the machine bulls as individuals so we can check the machine the thing that jumped out to me at first Waas like the inputs that go into defining Yeah, and and And that's really the focus of our effort is to ensure that So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old of the bootstrapping as well that the processes themselves are self healing, And the worst thing you could do is panic at the first warning and start tearing things that don't that then go out to touch slack and say hi, You need to watch your disk But Sean mentioned it on the video. And And the kubernetes, uh, scaling methodology is is he adhered So should we go to the questions. Um, that's kind of the point, right? you know, set up things and deploy your applications and things. that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. to the standards that we would want to set to make sure that we're not overloading On the next video, we're gonna learn how to spin up a Yeah, Do the same to add workers. We got that management cluster that we do strapped in the first video. Yeah, that's the key to this is to be able to have co resident with So we don't have to go back to the mother ship. So it's just one pane of glass to the bootstrapped cluster of the regional services. and another, you know, detail for those that have sharp eyes. Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want all of the other major players in the cloud arena. Let's jump into our last video in the Siri's, So the first thing we had to be in post, Yeah, many of the machine A name. Much the same is how we did for AWS. nodes and and that the management layer is going to have sufficient horsepower to, are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. that's been put out in the chat, um, that you'll be able to give this a go yourself, Uh, take the opportunity to let your colleagues know if they were in another session I e just interest will feel for you. Use A I'm the one with the gray hair and the glasses. And for the providers in the very near future. I can hardly wait. Let's do it all right to share my video So the first thing is, we need those route credentials which we're going to export on the command That is the tool and you're gonna use to start spinning up downstream It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, and all of the necessary parameters that you would fill in have That's the very first thing you're going to Yeah, for the most part. Let's now that we have our management cluster set up, let's create a first We can check the status of the machine balls as individuals so we can check the glitches, resolve themselves and leave you with a functioning workload cluster within exactly the same thing with resource is on Prem or resource is, All the different bits and pieces And I've got to tell you from having, you know, managed kubernetes And the straps those components for you? Yeah, it actually even displays at the top of the screen. I really like the on board Bafana that comes along with this stack. the best of what Morantes had in the open stack environment for monitoring and logging So the doctor enterprise Container Cloud managers were Yep, the the Cube config elements of the UCP environment. More thoughts on this wonders if we jump onto the next video. Let's maybe the simplest architectures, of the regional cluster and how it connects to the management cluster on their components, There we have it. that we provide that same operational support system capability Just because it's kind of easy to make that was on AWS. Just making sure that the worker label host like you are so yes, It's ah been an interesting journey taken from the mothership Enjoy the rest of the launchpad

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MaryPERSON

0.99+

SeanPERSON

0.99+

Sean O'MaraPERSON

0.99+

BrucePERSON

0.99+

FrankfurtLOCATION

0.99+

three machinesQUANTITY

0.99+

Bill MilksPERSON

0.99+

AWSORGANIZATION

0.99+

first videoQUANTITY

0.99+

second phaseQUANTITY

0.99+

ShawnPERSON

0.99+

first phaseQUANTITY

0.99+

ThreeQUANTITY

0.99+

Two minutesQUANTITY

0.99+

three managersQUANTITY

0.99+

fifth phaseQUANTITY

0.99+

ClarkPERSON

0.99+

Bill MillsPERSON

0.99+

DalePERSON

0.99+

Five minutesQUANTITY

0.99+

NanPERSON

0.99+

second sessionQUANTITY

0.99+

Third phaseQUANTITY

0.99+

SeymourPERSON

0.99+

Bruce Basil MatthewsPERSON

0.99+

Moran TousPERSON

0.99+

five minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

Chris Riley, Automation Anywhere | CUBE Conversations, June 2020


 

>> Narrator: From theCUBE's studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hey everybody, this is Dave Vellante and welcome to this episode of "CXO Insights." As you know, we've been grabbing the perspectives of leaders throughout this pandemic and assessing their tips for managing in a crisis and of course, managing in good times as well. But now, as we enter the post-isolation economy, we really want to look at not just how you manage through the crisis but how you manage beyond the crisis. And I'm really excited to have Chris Riley here. He's the newly minted Chief Revenue Officer at Automation Anywhere. Chris, my friend, how you doing? I hope you and the family are well. >> Thank you, David. I wish the same for you. I think getting by as most folks are, it's the new normal, we're all getting used to it but I'm happy to be here and happy to be at Automation Anywhere. >> Yeah, I want to talk about that in detail. Eddie Walsh calls it the new abnormal but so congratulations on the new role. I want to start with your career. I met you in 1987, which ironically was the same year I met Dave Donatelli, the same year I met Michigan I. and Saul Koi, talk about great timing. And then, you came into the industry at a time, really different time. It was, the IBM people don't remember this but IBM was the dominant player and you guys unseated them amazing 12-year career at EMC and then you kind of went to the .com boom. That was amazing. You relive that ride, did a stint at HP and really turned that business around and then came back to Dell, top go to market executive. One of the top in the industry that I know and now, of course at Automation Anywhere we're going to talk about. My first question to you is, a lot of changes have occurred since 1987. What has changed the most? Now we're talking diversity, we're talking all kinds of your different sales models. From your career looking back, what's changed the most? >> I think everything has changed and candidly for the better, Dave. You just led with one of the key areas in an area I'm deeply passionate about and that is diversity and inclusion and I think there's no stronger time, at least in our country's history where the inequalities that exist have been so exposed. So I view this as an opportunity, as I did at Dell to make a difference, lead from the front and make this a destination and a company whose culture really supports and drives diversity and inclusion. So I'd say that's one area, and I know it's very passionate for you as well. The others, it was a time before laptops, desktops. I think Ken Olsen once said, who would ever need a laptop in their home and boy, the world has changed. So I think some of the things though that haven't changed and this is why I'm so excited about Automation Anywhere. At the manual processes we have our workers doing and I think there is a real opportunity. I've lived through explosive growth at EMC, top company performing stock during the 90s, I get to see VMware firsthand. I've seen what's happened with ServiceNow and I believe this RPA space, as to you is in its infancy. It's seeing 30% compounded annual growth and we're just at the beginning and I think it's going to change the way people work and really lead to that digital transformation that so many of us have been talking about for the last decade. >> Yeah and you know kind of my position. Quick aside, I don't know if you saw the Netflix announcement this morning and I've been wondering as a small business, what can we do? What more can we do for inclusion and diversity? Netflix announced they're going to take 2% of their cash and put it into banks, financial institutions that support black causes and I just talked to our CFO. I said, look, why don't we take some of our cash, let's take 2% and stick it into banks, community banks. There's 30 million small businesses in the United States. If just 1% puts 10 grand in each, that's $3 billion that go into black community. So I'm going to start a mission and I just thought I'd share that 'cause I know it's a passion of yours. >> Yeah, and we all need to be in a position to provide equal opportunity for employment and that is reaching out into those communities and starting early on in creating the opportunities for advancement professionally, mentorship and just the path forward. And I'm excited to see what Netflix is doing. I'm sure you'll come up with the right answer for your company and I think all of us are searching, what's the right answer for our respective companies? >> Yeah, so now let's get into it. You're a month in and I want to talk about this project. I've learned a lot about not only RPA but about automation. I've just had a deep dive with your team and it really brought some things into focus. Guys, if you bring up the first slide, I want to get some thoughts on the table here. So this is a chart that sort of came into my focus with a friend of mine, Dave Moschello, who really big thinker on this stuff and he pointed out, this is data from the US Bureau of Labor and Statistics and the EU and it shows the lackluster productivity that's going on in the past decade. So you can see, we had the boost in the 80s and the 90s, we had this sort of productivity uptick from laptops but now, look what's happened since 2007. And the point that Moschello made on the right hand side is we have all these huge issues that we face, whether it's climate change, we have this massive debt, healthcare, an aging population, feeding everyone, et cetera, et cetera, et cetera, and his point was, there's no way we're going to be able to solve all these problems by throwing humans at the problem. So I've really begun to sort of think about this just in terms of machines and the roles that machines will play. I think overnight, Chris, we've gone from, wow, I'm afraid that machines are going to take my job to you can't operate if you're not digital. >> Yeah, well digital transformation is not a new term. I think it's meant something different each year for the last 10 years and I look at this as an opportunity. The World Economic Forum projected that IA and RPA will create 58 million new jobs. It's an astounding number. What COVID-19 has exposed is this work from home phenomenon that really exposes the risk of manual processes within the enterprise. So I think those two things combined is almost a perfect storm and I think what it'll do is accelerate the adoption of RPA and IPA. So something that might've taken years or decades to really be adopted in force, in this new normal, I think is going to be accelerated quite dramatically. >> So, the combination of your go to market execution, you managed complex sales motions before. Automation Anywhere obviously has some great product capabilities. Guys, I want to bring up the next slide and Chris, you might have seen this in some of the stuff that I wrote but this is data from ETR Enterprise technology research. They're a data partner of ours. Now it's clear that Automation Anywhere has the right product market fit and you can see on this chart, this is a methodology that we use. ETR goes out and they ask people, are you adopting a platform new? Are you increasing spending relative to last year? Are you flat, decreasing or replacing? And you can see here, there is zero churn in the Automation Anywhere base. And so obviously you got product market fit. Churn is the silent killer, obviously of SAS companies and so, you've picked a winner and I'm learning more about this. At first I thought the team office is quite large, I sized it. I actually think it's bigger than I originally thought. Chris, I thought this was going to be a winner-take-all type of market. I'm really rethinking that after, especially the deep dive I've had with your team in terms of how you guys go to market with an end-to-end sort of life cycle approach as opposed to sort of putting point products in. So I wonder if that narrative that I just laid out, resonates with you, is it sort of consistent with what you're seeing and then maybe some of the reasons why you joined Automation Anywhere? >> Yeah, I would say the most aggressive software growth that I've seen in the last decade or so, and two companies stand out for me. That's VMware and ServiceNow. They don't have a point product, they have a platform and that's what attracted me to Automation Anywhere is this platform approach. And Dave as you know, I've spent most of my career calling on the enterprise' strong relationships with those types of companies and they aren't looking to develop a point product solution and then cobble together lots of disparate islands of solutions. They're looking for a platform that can grow as they grow. They can extend from the back office to the front office but all centered around workforce, transformation, productivity and just as importantly, resiliency. And as we start to develop more and more capabilities that will be delivered through this platform approach, I think we're going to see explosive growth as the industry goes through its explosive growth. >> Well, I know your approach and your approach is to stay very close to customers. So as you were doing your due diligence on Automation Anywhere and as you enter your sort of first part of your 100-day journey here, I'm sure you've talked to a lot of customers. What are they telling you? What are the big takeaways right now that you're hearing? >> Yeah, so I think some of the data you pointed out with 4,000 customers in essence, zero churn, the opportunity to upsell those customers with more products and solutions clearly is there. New account acquisition has been a tremendous source of growth for the company in a strong professional services organization that actually is able to deliver the outcomes that our customers expect. From an enterprise perspective, I couldn't have come into a better situation with 4,000 customers, 50% of the fortune 500, 2 million bots deployed, those types of strategic relationships are going to be more and more critical as this company continues to accelerate its growth. Most of the RPA solutions really got in through the back office and candidly, really weren't even a component of an IT solution. Now, as you go to the front of the house, where you're looking at customer facing applications and worker productivity, these are CEO, CFO, COO and IT initiatives. So I really believe we're coming into our own, at the front of the house with senior executives that really want to create a better working environment for their employees and de-risk a lot of these manual processes that have existed for years. >> So I know why you chose Automation Anywhere. My question is, why did Automation Anywhere choose Chris Riley? I know your capabilities but obviously when somebody hires a executive like yourself, they say, "Hey, Chris, we want you to help us "get to the next level," but what does that mean? Are we talking about changes in the go to market? Are we talking about your ability to recruit and coach, to manage complex of sales motions? What is it that you want to bring to Automation Anywhere? >> I think it's all those, Dave. Having built a reputation throughout my 30 plus year career around a people-centric focus, a customer-centric focus and those two things kind of aren't interchangeable. When you look at customers, they put their faith and confidence in people and they put their faith and confidence in companies. And what I see here at Automation Anywhere is that the ability to kind of expand upon the great culture that the company already has but do it with someone whose role in a company at scale globally and can put a lot of the best practices and disciplines in place to do that 'cause it is difficult but also, how do we start to do larger, more complex deals and build relationships with the CIO, the CFO, the CEO? And I think a big reason why I'm here is, that experience in doing that, doing large complex multi-year opportunities but also being able to deliver upon the outcomes that we told our customers we could achieve and do that together with our customers and again we have a strong professional services organization and an incredible ecosystem of partners that have demonstrated year over year, the company's ability to actually deliver upon its promise. >> That was my next question to you, was the ecosystem. One of the big changes from when you started in this business, was it used to be just belly to belly, hardcore, direct sales, the importance and leverage that you get from a partner ecosystem. You point out VMware. In fact ServiceNow, it's interesting. When we first started covering ServiceNow, one of the things we said is we want to see as an indicator of success, the partner ecosystem evolve and then sure enough, it exploded with the SIs and all the kinds of developers. So maybe talk about AA's ecosystem, The Partner System. You obviously have a lot of experience there in your career, how do you see that as a leverage point? >> Yeah, it's huge. This market is far larger than we can cover with a direct sales organization and requires substantial services engagements that go well beyond the kind of professional services organization we want to build out organically in the company. So when you look at that, the company today has 1,900 partners. The global systems integrators are key, especially those with VPO type practices, the regional SIs and candidly, the regional VARs who've built out a strong service malpractice, a strong VMware practice and have the professional services capabilities to do some of these complex automation or automation type work that have also built the trust and confidence of their customers. So, in partnership with these types of companies, we believe we can expand our reach. We believe we can offer a more comprehensive outcome and solution to our customers and we, what I'm going to be looking at is, how do we enhance our channel programs to be the kind of company that the channel partners want to engage with, built upon the reputation of the company, the leadership position we have in the technology and also our willingness to go after this space together. >> So I got to go but last question is, what can you share with us about your 100-day plan? Where are you going to focus? >> On the people. There is a strong culture here, there's incredible sales talent and there's talent throughout the organization. I think Dave, you've seen for me over the years, a clarity of our mission, keep things simple and try and drive a repetitive process to deliver results. I'm very accountability focused. So I think what I'm going to look to assess is where the organization is today, how to get more out of the great talent we have, build stronger and deeper relationships with our customers and really scale and grow through our ecosystem of channel partners. >> Well, Chris, I'm super excited for you. A great hire by Automation Anywhere obviously got my attention. I think it'll get the industry's as well. Best of luck, and of course we'll be watching. >> Good, always great to see you, Dave, take care. >> Yeah, ditto, thanks so much for coming on and thank you for watching everybody. Keep it here because this month we're going to be really digging into the ETR data we've been reporting on that horse race between Automation Anywhere and UI Path. The ETR data is in the field and we'll be reporting on that. So look for that. This is Dave Vellante for theCUBE and we'll see you next time. (gentle music)

Published Date : Jul 2 2020

SUMMARY :

leaders all around the world. the perspectives of leaders and happy to be at Automation Anywhere. and then came back to Dell, and I think it's going to and I just talked to our CFO. and just the path forward. and the 90s, we had this that really exposes the and you can see on this chart, and they aren't looking to What are the big takeaways of the data you pointed out changes in the go to market? is that the ability to kind of and all the kinds of developers. and have the professional the great talent we have, I think it'll get the industry's as well. Good, always great to and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

DavidPERSON

0.99+

Dave VellantePERSON

0.99+

Dave DonatelliPERSON

0.99+

Ken OlsenPERSON

0.99+

NetflixORGANIZATION

0.99+

Eddie WalshPERSON

0.99+

Dave MoschelloPERSON

0.99+

DavePERSON

0.99+

Saul KoiPERSON

0.99+

Chris RileyPERSON

0.99+

IBMORGANIZATION

0.99+

1987DATE

0.99+

$3 billionQUANTITY

0.99+

Palo AltoLOCATION

0.99+

30%QUANTITY

0.99+

12-yearQUANTITY

0.99+

30 millionQUANTITY

0.99+

June 2020DATE

0.99+

US Bureau of Labor and StatisticsORGANIZATION

0.99+

1,900 partnersQUANTITY

0.99+

United StatesLOCATION

0.99+

100-dayQUANTITY

0.99+

DellORGANIZATION

0.99+

30 plus yearQUANTITY

0.99+

two companiesQUANTITY

0.99+

4,000 customersQUANTITY

0.99+

10 grandQUANTITY

0.99+

50%QUANTITY

0.99+

HPORGANIZATION

0.99+

2%QUANTITY

0.99+

Automation AnywhereORGANIZATION

0.99+

COVID-19OTHER

0.99+

EMCORGANIZATION

0.99+

first questionQUANTITY

0.99+

2007DATE

0.99+

BostonLOCATION

0.99+

first slideQUANTITY

0.99+

World Economic ForumORGANIZATION

0.99+

each yearQUANTITY

0.99+

EUORGANIZATION

0.99+

todayDATE

0.99+

last yearDATE

0.98+

90sDATE

0.98+

two thingsQUANTITY

0.98+

1%QUANTITY

0.98+

dittoPERSON

0.98+

ServiceNowTITLE

0.97+

this monthDATE

0.96+

OneQUANTITY

0.96+

oneQUANTITY

0.96+

ETRORGANIZATION

0.96+

MoschelloPERSON

0.95+

first partQUANTITY

0.95+

firstQUANTITY

0.95+

2 million botsQUANTITY

0.94+

pandemicEVENT

0.94+

80sDATE

0.94+

CUBEORGANIZATION

0.92+

CXO InsightsTITLE

0.92+

IAORGANIZATION

0.91+

last decadeDATE

0.91+

SASORGANIZATION

0.91+

one areaQUANTITY

0.91+

theCUBEORGANIZATION

0.89+

past decadeDATE

0.89+

yearsQUANTITY

0.87+

ServiceNowORGANIZATION

0.86+

decadesQUANTITY

0.86+

58 million new jobsQUANTITY

0.85+

Dave Malik, Cisco | Cisco Live US 2019


 

>> Narrator: Live from San Diego, California. It's theCUBE. covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome back to San Diego, everybody. You're watching Cisco Live 2019. This is theCUBE, the leader in live tech coverage. This is day three of our wall-to-wall coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante. Stu Miniman is here. Our third host, Lisa Martin is also in the house. Dave Malik is here. He's a fellow and Chief Architect at Cisco. David, good to see you. >> Oh, glad to be here. >> Thanks for coming on. First of all, congratulations on being a fellow. What does that mean, a Cisco Fellow? What do you got to go through to achieve that status? >> It's pretty arduous task. It's one of the most highest technical designations in Cisco, but we work across multiple architectures in technologies, as well as our partners, as well, to drive corporate-wide strategy. >> So you've been talking to customers here, you've been presenting. I think you said you gave three presentations here? Multi-cloud, blockchain, and some stuff on machine intelligence, ML. >> Yes. >> Let's hit those. Kind of summarize the overall themes, and then we'll maybe get into each, and then we got a zillion questions for you. >> Sure, excellent. So multi-cloud, I think one of the customers, we're clearly hearing from them is around, how do we get a universal policy model and connectivity model, and how do you orchestrate workloads seamlessly? And those are some of the challenges that we trying to address at this conference. On blockchain, a lot of buzz out there. We're not talking about Bitcoin or cryptocurrency, it's really about leveraging blockchain from a networking perspective, or an identity and encryption, and providing a uniform ledger that everything is pervasive across infrastructure. And then ML, I think it's the heart of every conversation. How do we take pervasive analytics and bring it into the network so we can drive actionable insights into automation? >> So let's start with the third one. When you talk about ML, was your talk on machine learning? Did it spill into artificial intelligence? What's the difference to you from a technology perspective? >> Machine learning is really getting a lot of the data and looking at repetitive patterns in a very common fashion, and doing a massive correlation across multiple domains. So you may have some things happening in the branch, the data set, or a WAN in cloud, but the whole idea is how do you put them together to drive insight? And through artificial intelligence and algorithms, we can try to take those insights and automate them and push them back into the infrastructure or to the application layer. So now you're driving intelligence for not just consumers or devices, but also humans as well to drive insight. >> All right. So Dave, I wonder if you'd help connect with us what you were talking about there, and we'll get to the multicloud piece because I was at an Amazon show last week from Amazon, talking about how when they look at all the technologies that they use to get packages, their fulfillment centers, everything that they do as a business, ML and AI, they said, is underneath that, and AWS is what's driving that technology from that standpoint. Now, multicloud, AWS is a partner of yours. >> Yes. >> Can you give us how you work in multicloud and does ML and IA, is that a Cisco specific? Are you working with some of the standards out there to connect all those pieces? Help us look at some of the big picture of those items. >> So we believe we're agnostic, whether you connect to Amazon, Azure, Google, et cetera, we believe in a uniform policy model and connectivity model, which is very, very arduous today. So you shouldn't have to have a specific policy model, connectivity model, security model for that matter, for each provider. So we're normalizing that plane completely, which is awesome. Then, at a workload level, regardless of whether your workload is spun up or spun down, it should have the same security posture and visibility. We have certain customers that are running as single applications across multiple clouds, so your data is going to be obviously on-prem, you may be running analytics in TenserFlow, compute in EC2, and connecting to O365, that's one app. And where we're seeing the models go is are you leveraging technology such as this? Do you offer service mesh? How do we tie a lot of these micro-services together and then be able to layer workload orchestration on top? So regardless of where your workload sits, and one key point that we keep hearing from our customers is their ungovernance. How we provide cloud-based governance regardless of where their workload is, and that's something we're doing in a very large fashion with customers that have a multicloud strategy. >> So Stu, I think there's still some confusion around multicloud generally, and maybe Cisco's strategy. I wonder if we could maybe clear it up a little bit. >> Dave, it's that big elephant in the room, and I always feel like everybody describes multicloud from a different angle. >> So let's dig into this a little bit, and let's hear from Cisco's perspective. So you got, to my count, five companies really going after this space. You got Cisco, VMware, IBM Red Hat, Microsoft, and Google with Anthos. Probably all those guys are partners of yours. >> Yes. >> Okay, but you guys want to provide the bromide or the single pane of glass, okay. I'm hearing open and agnostic. That's a differentiator. Security, you're in a good position to make an argument that you're in a good position to make things secure. You got the network and so forth. High-performance network, and cost-effective. Everybody's going to make that argument relative to having multiple stovepipes, but that's part of your story as well. So the question. Why Cisco? What's the key differentiator and what gives you confidence that you can really help win in this marketplace? >> So our core competencies are our networking and security. Whether it's cloud-based security or on-prem security, it's uniform. From a security perspective, we have a universal architecture. Whether it's the endpoint, the edge, the cloud, they're all sharing information and intelligence. That's really important. Instead of having bespoke products, these products and solutions need to communicate with each other, so if someone's sick in one area, we're informing the other one. So threat intelligence and network intelligence is huge. Then more importantly, after working with Google, Microsoft, and Amazon, we have on-prem solutions as well, so as customers are going on their multicloud journey, and eventually the workload will transition, you have the same management experience and security experience. So Anthos was a recent announcement, AWS as well, where you can run on-prem Kubernetes, and you can take the same workload and move it to AWS or GCP, but the management model and the control pane model, they are extremely similar and you don't have to learn anything new from a training perspective. >> Okay, but I used the term agnostic, oh, no. You did agnostic, I said open. But you don't care if it's Anthos or VMware, or OpenShift, you don't care. >> Don't care. >> And, architecturally, how is it that you can successfully not care? >> Because the underlying, fundamental principles is you can load any workload you want with this, bare metal, virtualized, or Kubernetes-based containers, they all need the same. For example, everyone needs bread and water. It's not different. So why should you be able to discriminate against a workload or OpenShare if they're using Pivotal Cloud Foundry, for example? The same model, all applications still need security, visibility, networking, and management, but they should not be different across all clouds, and that's traditionally what you're seeing from the other vendors in the market. They're very unique to their stovepipe, and we want to break down those stovepipes across the board, regardless of what app and what workload you have. >> Dave, talk a little bit about the automation that Cisco's delivering to help enable this because there's skill set challenges, just the scale of these environments are more than humans alone can take care of, so how does that automation, I know you're heavily involved in the CX beast of Cisco. How does that all tie together? >> So we're working on a lot of automation projects with our large enterprises and SPs, I mean, you see Rakuten being fairly prominent in the show, but more importantly, we understand not everyone's building a greenfield environment, not everything is purely public cloud. We have to deal with brownfield, we have to deal with third-party ecosystem partners, so you can't have a vertically tight single-vendor solution. So again, to your point, it's completely open. Then we have frameworks, meaning you have orchestrators that can talk down to the device through programmatic interfaces. That's why we see DevNet surrounding us, but then more importantly, we're looking at services that have workflows that could span on-prem, off-prem, third-party, it doesn't really matter. And we stitch a lot of those workloads southbound, but more importantly, northbound to security at ITSM Systems. So those frameworks are coming into life, whether you're a telecom cloud provider or you're a large enterprise. And they slowly fall into those workflows as they become more multi-domain. You saw David Goeckeler the other day, talking about SD-WAN, ECI, and campus wired and wireless. These domains are coming together and that's where we're driving a lot of the automation work. >> So automation is a linchpin to what business outcome? Ultimately, what are customers trying to achieve through automation? >> There's a couple of things. Mean time to value. So if you're a service provider, to your internal customers or external, time to value and speed and agility are key. The other ones are mean time to repair and mean time to detect. If I can shorten the time to detect and shorten time to react, then I can take proactive and preemptive action in situations that may happen. So time to value is really, really important. Cost is a play, obviously, 'cause when you have more and more machines doing your work, your OPEX will come down, but it's really not purely a cost play. Agility and speed are really driving automation to that scale as we're working with folks like Rakuten and others. >> What do you see, Dave, as the big challenges of achieving automation when customers, first of all, I was talking like, 10, 15 years ago people, they were afraid of automation. Some still are. But they I think understand as part of a digital transformation, they got to automate. So what are the challenges that they're having and how are you helping them solve them? >> So typically, what people have thought about automation has been more network-centric, but as we just discussed multicloud, automation is extending all the way to the public cloud, at the workload or at the functional level, if you're running in Lambda, for example. And then more importantly, traditionally, customers have been leveraging Python scripts and things of that nature, but the days of scripters are there, but they cannot scale. You need a model-driven framework, you need model-driven telemetry to get insight. So I think the learning curve of customers moving to a model-driven mindset is extremely important, and it's not just about the network alone, it's also about the application. So that's why we're driving a lot of our frameworks and education and training. And talent's a big gap that we're helping with with our training programs. >> Okay, so you're talking about insights. There's a lot of data. The saying goes, "data is plentiful, insights aren't." So how do you get from data to insights? Is that where the machine intelligence comes in? Maybe you can explain that. >> There's a combination. Machines can process much faster than humans can, but more importantly, somebody has to drive the 30 or 40 years of experience that Cisco has from our tech, our architects and CX, and our customers and the community that we're developing through DevNet. So taking trusted expertise from humans, from all that knowledge base, combining that with machine learning so we get the best of both worlds. 'Cause you need that experience. And that is driving insight so we can filter the signal from the noise, and then more importantly, how do you take that signal and then, in an automated fashion, push that down to an intent-based architecture across the board. >> Dave, can you take us inside a little bit of your touchpoints into customers? In the old days, it was a CCIE, his job, his title, it was equipment that he would touch, and today, talking about this multicloud and the automation, it's very dispersed as to who owns it, most of what I am managing is not something that's under their purview, so the touchpoints you have into the company and the relationship you have changed a lot in the last three, five years or so. >> Absolutely, 'cause the buying center's also changing, because folks are getting more and more centric around the line of business and want the outcome we want to drive for their clients. So the cloud architecture teams that are being built, they're more horizontal now. You'll have a security person, an application, networking, operations, for example, and what we're actually pioneering, a lot of the enterprises and SPs, is building the site reliability engineering teams, or SRE, which Google has obviously pioneered, and we're bringing those concepts and teams through a CX framework, through telecos, and some of their high-end enterprises initially, and you'll see more around that over the coming months. Our SRE jobs, if you go on LinkedIn, you'll probably see hundreds of them out there now. >> One of the other things we've been watching is Cisco has a very broad portfolio. This whole CX piece has to make sure that, from a customer's standpoint, no matter where the portfolio, whether core, edge, IOT, all these various devices, I should have a simplified experience today, which isn't necessarily, my words, Cisco's legacy. How do you make sure, is software a unifying factor inside the company? Give us a little bit about those dynamics inside. >> Absolutely, so we take a life cycle approach. It's not one and done. From the time there's a concept where you want to build out a blueprint, but there's no transformation journey, we have to make sure we walk the client through preparation, planning, design, architecture optimization, but then making sure they actually adopt, and get the true value. So we're working with our customers to make sure that they go around the entire life cycle, from end to end, from cradle to grave, and be able to constantly optimize. You're hearing the word continuous pretty much everywhere. It's kind of the fundamental of CICD, so we believe in a continuous life cycle approach that we're walking the customers end to end to make sure from the point of purchase to the point of decommissioning, making sure they're getting the most value out of the solutions they're getting from Cisco. >> All right Dave, we'll give you the last word on Cisco Live 2019. Thoughts? Takeaways? >> I think there's just amazing energy here, and there's a lot more to come. Come down to the CX booth and we'll have to show you some more gadgets and solutions where we're taking our forward customers. >> Great. David, thank you very much for coming to The Cube. >> Pleasure, thank you. >> All right, 28,000 people and The Cube bringing it to you live. This is Dave Vellante with Stu Miniman. Lisa Martin is also in the house. We'll be right back from Cisco Live San Diego 2019, Day 3. You're watching The Cube.

Published Date : Jun 12 2019

SUMMARY :

Brought to you by Cisco and its ecosystem partners. We go out to the events, What do you got to go through to achieve that status? It's one of the most highest technical I think you said you gave three presentations here? and then we got a zillion questions for you. and how do you orchestrate workloads seamlessly? What's the difference to you from a technology perspective? So you may have some things happening in the branch, and AWS is what's driving that technology and does ML and IA, is that a Cisco specific? and then be able to layer workload orchestration on top? So Stu, I think there's still some confusion around Dave, it's that big elephant in the room, So you got, to my count, five companies and what gives you confidence that and you don't have to learn anything new or OpenShift, you don't care. So why should you be able to discriminate that Cisco's delivering to help enable this So again, to your point, it's completely open. and shorten time to react, and how are you helping them solve them? and it's not just about the network alone, So how do you get from data to insights? and our customers and the community and the relationship you have and want the outcome we want to drive for their clients. One of the other things we've been watching is and get the true value. All right Dave, we'll give you Come down to the CX booth and we'll have to show you David, thank you very much for coming to The Cube. The Cube bringing it to you live.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave MalikPERSON

0.99+

Lisa MartinPERSON

0.99+

Dave VellantePERSON

0.99+

CiscoORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

DavidPERSON

0.99+

AmazonORGANIZATION

0.99+

30QUANTITY

0.99+

DavePERSON

0.99+

David GoeckelerPERSON

0.99+

Stu MinimanPERSON

0.99+

AWSORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

San DiegoLOCATION

0.99+

San Diego, CaliforniaLOCATION

0.99+

40 yearsQUANTITY

0.99+

LambdaTITLE

0.99+

PythonTITLE

0.99+

last weekDATE

0.99+

28,000 peopleQUANTITY

0.99+

StuPERSON

0.99+

five companiesQUANTITY

0.99+

one appQUANTITY

0.99+

third hostQUANTITY

0.99+

each providerQUANTITY

0.99+

RakutenORGANIZATION

0.99+

third oneQUANTITY

0.98+

AzureORGANIZATION

0.98+

eachQUANTITY

0.98+

EC2TITLE

0.98+

hundredsQUANTITY

0.97+

LinkedInORGANIZATION

0.97+

IBM Red HatORGANIZATION

0.97+

oneQUANTITY

0.97+

FirstQUANTITY

0.96+

both worldsQUANTITY

0.96+

todayDATE

0.96+

five yearsQUANTITY

0.96+

TenserFlowTITLE

0.96+

three presentationsQUANTITY

0.96+

AnthosORGANIZATION

0.95+

single paneQUANTITY

0.94+

Day 3QUANTITY

0.94+

Pivotal Cloud FoundryTITLE

0.93+

OneQUANTITY

0.92+

one areaQUANTITY

0.91+

The CubeTITLE

0.91+

10, 15 years agoDATE

0.89+

one keyQUANTITY

0.88+

single applicationsQUANTITY

0.88+

singleQUANTITY

0.87+

MLTITLE

0.86+

CXTITLE

0.86+

OpenShiftTITLE

0.84+

IATITLE

0.84+

theCUBEORGANIZATION

0.81+

O365TITLE

0.8+

AnthosTITLE

0.8+

2019TITLE

0.77+

a zillion questionsQUANTITY

0.73+

Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set your free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentleman, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design office for state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. State of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about this, but the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconvergence infrastructure magic quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a homerun, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage network compute, the next horizon is around multi-cloud. The next horizon is around whether by accident or on purpose the strong move with different workloads moving into public cloud, some into private cloud moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs. And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regain control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of businesses and then out into the market. It's then about how you standardize across an enterprise cloud environment, that you're not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments. I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. We want to build things not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is just updated a lot of nodes at 38,000 feed on United Wifi, on my way to spend vacation with my family. Freedom to play. This to me is emotionally what brings us all together and what you saw with the Freedom video earlier, and what you see here is this new story because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there in case I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now MJBizCon Next, and by coincidence it's also called next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about. I'm the head of marketing here. I have to show the tagline of this. And I read the tagline from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please role the video, and let's learn more about how you can do so. >> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, the moment it is without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next with a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all expenses paid free trip to .Next 2019. And just to make sure everyone understands Easter egg concept. There's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're targeting in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next. But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise, if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Freedom Nutanix's booths and put yourself in the running, or in the cycling to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shutdown at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated in with Prism and Clam. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper or team bronze, but team Copper. Silver, Not That Special, they're very humble kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for Cookies, they did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moments notice utilizing a lot of their coding skills. Congratulations to all three, first, second, and third all receive $2,500. And then each of them, then were able to choose a charity to deliver another $2,500 including Ronald McDonald House for the winner, we did it all for the McDonald Land cookies, I suppose, to move forward. So look for us to do more of these kinds of events because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear form Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here. It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there's some great NTCs up there I could relate to because they're on Slack as well. How many of you are in Slack Nutanix internal Slack channel? Probably 5%, would love to actually see this community grow from here 'cause this is not the only even we would love to meet you. We would love to actually do this in a real time bite size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning, and renting, and finally it's also about core and edge. How do you really make this big at a core data center whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine-learning and AI and you know figure our anomaly detection and correlations and patter matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well because it makes all of us shine including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem of customer success. We think that customer success, true customer success is possible when we have machines tend towards invisibility. But along the way when we do that, make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. There's only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not over promising and under delivering but under promising starting with small systems and growing the trust of the customers over time. And this is one of the statistics we actually talk about is repeat business. The first dollar that a Global 2000 customer spends in Nutanix, and if we go and increase their trust 15 times by year six, and we hope to actually get 17 1/2 and 19 times more trust in the years seven and eight. It's very similar numbers for non Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non Global 2000 pay $6 1/2 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. Is how the current cohorts which is this audience here and many of them were not here will actually carry the weight of $3 billion, more than 50% of it if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of that billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well. Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, 1/3rd of our customers are actually using AHV. At least every quarter that we look at it, our new deployments, at least 35% of it is actually being used on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago to where we've actually come. Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some of very large deals that we talk about in earnings call, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to X86. You know this operating systems deserves to run on a non X86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM farm, an Power farm, and open Power farm and an X86 farm from the same control plane and have you know the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way like eCommerce is? And that devops movement made us realize that we need to add to our stack. That stack will now have other computing clouds that is AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with multi hypervisor world. Now it's going to be multi-cloud world. You know it's one of those things we had a gut around, and we really come to expect a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and over used. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance because of which you need to be in where the government with the regulations where the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself because you have 200 plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics. You know if there're machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute which is stateless, it's an app. You take the app to where the data is because the network is the enemy. The network has always been the enemy. And when we thought we've made fatter networks, you've just produced more data as well. So this just goes without saying that you take something that's stateless that's without gravity, that's lightweight which is compute and the application and push it close to where the data itself is. And the third one which is related is just latency reasons you know? And it's not just about machine latency and electrons transferring over the speed light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to having to expect everybody to come to a very large computing power itself. So all the ways, the way they are, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for you know how we run our businesses, but there's also the dispersal of the cloud, so ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix and the ROBO environments themselves as one node, two node, three node, five node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule. And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IOT machine fog because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm in a mini server which is a PC like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IOTs that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems. Why? Because we're talking about really blurring the lines with owning and renting where you have a single-tenant environment which is your data center, and a multi-tenant environment which is the service providers data center, and the two must look like the same. And the two must look like the same is that hard a problem not just for burst out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there's some really hard problems in hybrid that you'll hear Sunil talk about and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself. And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must haves of a multi-cloud operating system? We talked about marketplace which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together because now you have a self-service portal which is providing an eCommerce view. It's really about you know getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on the rest of the talk itself here is what governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10 plus years, you know? Mainframe, at least 10 years, probably 20 plus years worth of decisions. These were decisions that were extremely waterfall-ish. Make 10s of millions of dollars worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decision as we used to with mainframes. But still five years, talk about virtualized, three tier, maybe three to five year decisions. You know they're still relatively big decisions that we were making with computer and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because you know we need to make more agile decisions. We need to add machines every week, every month as opposed to adding you know machines every three to five years. And we need to be able to upgrade them, you know any point in time. You can do the upgrades every month if you had to, every week if you had to and so on. So really about more agility. And yet, we were not complete because there's another evolution going on, off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on demand stuff which no the decision was days to weeks. Some of these things that unitive compute was being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours and finally lambda functions. Now you could to function as a service where things could actually be running only for minutes not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to 10s of years on the other itself. And we hope to actually go and blur the lines between where NTNX is today where you see Nutanix right now to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here. What does it mean to really blur the lines between these two because people do want to make decisions that are better than reserved instance in the public cloud. We'll talk about why reserved instances which looks like a proxy for Nutanix it's still very, very wasteful even though you might think it's delightful, it's very, very wasteful. So what does it mean for on-prem and off-prem? You know you talk about cost governance, there's security compliance. These high velocity decisions we're actually making you know where sometimes you could be right with cost but wrong on security, but sometimes you could be right in security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide do we have the right balance between cost, governance, and security compliance itself? And to get it right, we have introduced our first SAS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati who's the general manager of Beam engineering to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah so we spent a lot of time trying to understand the last five years at Minjar you know how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and you know improve compliance of their workloads.? And now with Nutanix what we're trying to do is how can we converge this consumption, right? Because what happens here is most customers start with on demand kind of consumption thinking it's really easy, but the total cost of ownership is so high as the workload elasticity increases, people go towards spot or a scaling, but then you need a lot more automation that something like Calm can help them. But predictability of the workload increases, then you need to move towards reserved instances, right to lower costs. >> And those are some of the things that you go and advise with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances because what happens it while customers make these commitments for a year or three years, what we see across, like we track a billion dollars in public cloud consumption you know as a Beam, and customers use 20%, 25% of utilization of their commitments, right? So how can you really apply, take the data of consumption you know apply intelligence to essentially reduce their you know overall cost of ownership. >> You said something that's very telling. You said reserved instances even though they're supposed to save are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no convergence of scaling that apart from the scaling as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam as just Dheeraj outlined, our first SAS service. And this is my first .Next. And you know glad to be here. So what you see here is a global consumption you know for a business across different clouds. Whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month across your accounts and services and apply intelligence to say you know what is your spent efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and you know you talk about maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider like you know for this business, let's go and take a loot at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of a specific services as well as the utilization of both on demand and RI. You know what can you do to lower your cost and detect your spend efficiency of a dollar to see you know are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize because you know we have all this monitoring data, configuration data that we crunch through to basically detect this? >> You think there's billions of events that you look at everyday. You're already looking at a billon dollars worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts you know under that. Then you can go and take a look at not just at the organization level within it an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're doing spams on Facebook for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have you know consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only just get visibility at you know compute as a service inside a cloud provider, you can go deeper inside compute and say you know what is a service that I'm really consuming inside compute along with the CPUs n'stuff, right? What is my data transfer? You know what is my network? What is my load blancers? So essentially you get a very deeper visibility you know as a service right. Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we deliver, then optimization? >> Well I think one thing that I just want to point out is how this SAS application was an extremely teachable moment for me to learn about the different resources that people could use about the public cloud. So all of you who actually have not gone deep enough into the idea of public cloud. This could be a great app for you to learn about things, the resources, you know things that you could do to save and security and things of that nature. >> Yeah. And we really believe in creating the single pane view you know to mange your optimization of a public cloud. You know as Ben spoke about as a business, you need to have freedom to use any cloud. And that's what Beam delivers. How can you make the right decision for the right workload to use any of the cloud of your choice? >> Dheeraj: How 'about databases? You talked about compute as well but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is they're using inside Facebook ad spending, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of you know what is your spend efficiency and which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings together you know through its intelligence and algorithms to detect you know how can you rightsize resources and how can you eliminate things that you're not using? And we deliver and one click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute databases, load balancers, even past services, right? And this is where the power of it kind of comes for a business whether you're using on-prem and off-prem. You know how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver you know later this year. As you can see here, we're bringing together the consumption for the Nutanix, you know the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here just like in cost governance, a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPA, PCI, CAS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts which is what we call security centers. Essentially you can go and take a deeper look at you know the things. We do a whole full body scan for your cloud infrastructure whether it's AWS Amazon or Azure, and you can go and now, again, click to fix things. You know that had been probably provisioned that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to save, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers you know today. And you know get really excited, and it's available at beam.nutanix.com. >> Our first SAS service, ladies and gentleman. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliances as well. You know within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud which is a hard problem. You know think about working the whole body of it and what about cost governance? What about security compliance? Obviously what about hybrid networks, and security, and storage, you know compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient. You know there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that you know you're people might think it's so awkward to do in Nutanix, but it could've been way simpler if you just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly to make it financially accountable. So the end in all this is, again, one of the things that I think about all the time in building this company because obviously there's a lot of stuff that we want to do to create orphans, you know things above the line and top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know when we're talking about developers who seek delight with public cloud at the same time you're looking at IT folks who're trying to figure out governance. They're like look you know the CFOs office, the CIOs office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste because there's so much waste including folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who I think you know many of you actually have talked about it have delightful hair but probably wasted jokes. But I think has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels to be like a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage. So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our longterm plans since we started the company. And it's become a lot more clearer over the last few years about our plans to essentially make computing invisible as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models. And so today's conference and essentially the theme that you're going to be seeing throughout the breakout sessions is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's just not about the hybrid cloud with Nutanix, it's about actually finishing the job about making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embark on some real meaningful things around invisible clouds, okay? And to start the session, I think you know the part that I wanted to make sure that we are all on the same page because most of us in the room are still probably in this phase of the journey which is about invisible infrastructure. And there the three key products and especially two of them that most of you guys know are Acropolis and Prism. And they're sort of like the bedrock of our company. You know especially Acropolis which is about the web scale architecture. Prism is about consumer grade design. And with Acropolis now being really mature. It's in the seventh year of innovation. We still have more than half of our company in terms of R and D spend still on Acropolis and Prism. So our core product is still sort of where we think we have a significant differentiation on. We're not going to let our foot off the peddle there. You know every time somebody comes to me and says look there's a new HCI render popping out or an existing HCI render out there, I ask a simple question to our customers saying show me 100 customers with 100 node deployments, and it will be very hard to find any other render out there that does the same thing. And that's the power of Acropolis the code platform. And then it's you know the fact that the velocity associated with Acropolis continues to be on a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was the fact to shrink our three node cluster to a one node, two node deployment. Most of you actually had requirements on remote office, branch office, or the edge that actually allowed us to kind of give us you know sort of like the impetus to kind of go design some new capabilities into our core OS to get this out. And associated with Acropolis and expanding into Prism, as you will see, the first couple of years of Prism was all about refactoring the user interface, doing a good job with automation. But more and more of the investments around Prism is going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come on with Prism whether it be formed by you know the management console changing to become much more automated, whether now we give you automatic rightsizing, anomaly detection, or a series of functionality that have gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product. You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000 plus nodes at that time. And since then, obviously we've you know continued to grow. And we would draw this line which was about enterprise class quality. That for the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World class companies do about probably 2% to 3%, number of CFDs per node shipped. And we were just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's now currently at .95%. And so along with velocity, you know this focus on being true to our roots of reliability and stability continues to be, you know it's an internal challenge, but it's also some of the things that we keep a real focus on. And so between Acropolis and Prism, that's sort of like our core focus areas to sort of give us the confidence that look we have this really high bar that we're sort of keeping ourselves accountable to which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads. However over time as we've evolved the product, added additional capabilities and features, that's grown from VDI to business critical applications as well as cloud native apps. So let's go ahead and take a look. >> Sunil: And we'll start with like Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism central user interface, and we can see our Thor cluster obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPs at around 360 microseconds latency. Now obviously Prism central allows you to mange all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some Xen desktop as well as Oracle RAC. Now if we hope over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah so obviously on the hardware front, there's been a lot of evolutions in storage mediums. So with the introduction of NVME, persistent memory technologies like 3D XPoint, that's meant storage media has become a lot faster. Now to allow you to full take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode which allows you to full take advantage of those faster storage mediums at that much lower latency. And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it. So that was Oracle RAC running on a you know Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, we're going to show you is the advent of actually putting file services in the same scale out environment. And you know many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support was for user file shares, VDI deployments, and it took awhile to bake, to get to scale and reliability. And then in the last release, in the recent release that we just shipped, we now added NFS for support so that we can no go after the full scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our four cluster here. Overall cluster-wide latency right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our Next A File Server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here which is exposed to our rack host via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backup using files, or for any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzz word, so is containers in Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration. >> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0 which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there's really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment and switching over to our ACS two interface, we can see this create cluster button. Now this actually pops up a full wizard. This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned, the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS2 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And here are Kubernetes virtual machines which have actually been deployed as part of this ACS2 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment straightforward monitoring and management very straightforward and simple. Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools that they are you know preferring while at that same time allowing this consolidation of containers along with VMs all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And open space has always been look if you just look at a public cloud, you look at blocks, files, containers, the most obvious sort of storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services Stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS, block services with ABS, with OSS or Object Storage Services, we provide native object storage, compatibility and capability within the Nutanix platform. Now this provides a very simply common S3 API. So any integrations you've done with S3 especially Kubernetes, you can actually leverage that out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy however many of these as you wanted to. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we specified the name, we can choose our capacity. So here we'll just specify a large instance or type. Obviously this could be any amount or storage. So if you have a 200 node Nutanix cluster with petabytes worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts. So essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save. Now here we can see it's actually going through doing the deployment of the virtual machines, applying any necessary configuration, and in the matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance which is up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one for Kafka-queue, I'm actually using this for my Kafka cluster where I have right around 62 million objects all storing ProtoBus. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes deployed instance via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning, native object encryption as well as worm compliancy. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you is with upcoming objects as well that the same OS can now support VMs, files, objects, containers, all on the same one click operational fabric. And so that's in some way the real power of Nutanix is to still keep that consistency, scalability in place as we're covering each and every workload inside the enterprise. So before Steve gets off stage though, I wanted to talk to you guys a little bit about something that you know how many of you been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know when you land in San Jose Airport on the way to longterm parking, you'll pass our office. It's that close. And if you come to the fourth floor, you know one of the cubes that's where I sit. In the cube beside me is Steve. Steve sits in the cube beside me. And when I first joined the company, three or four years ago, and Steve's if you go to his cube, it no longer looks like this, but it used to have a lot of this stuff. It was like big containers of this. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or you'll find Steve's Cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping) >> So single OS, any workload. And like Steve who's been with us for awhile, it's my great pleasure to invite one of our favorite customers, CSC Karen who's also been with us for three to four years. And I'll share some fond memories about how she's been with the company for awhile, how as partners we've really done a lot together. So without any further ado, let me bring up Karen. Come on up, Karen. (rock music) >> Thank you for having me. >> Yeah, thank you. So I remember, so how many of you guys were with Nutanix first .Next in Miami? I know there was a question like that asked last time. Not too many. You missed it. We wished we could go back to that. We wouldn't fit 3/4s of this crowd. But Karen was our first customer in the keynote in 2015. And we had just talked about that story at that time where you're just become a customer. Do you want to give us some recap of that? >> Sure. So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on a Wednesday after making the decision, I picked up the phone and said you know what I've got to deploy for my VDI cluster. So four nodes showed up on Thursday. And from the time it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over for the business for use was less than three days. So it was really excellent testament to how simple it is to start, and deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report it used to take so long that I'd got out and get a cup of coffee and come back, and read an article, and do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDIs to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged everyday. And the deployment is smaller than what we had with a three tiered infrastructure. So when you hear people talk about waste and getting that out and getting to an invisible environment where you're just able to run it, that's what we were able to achieve both with everything that we're running from our public facing websites to the back office operations that we're using which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What it does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data everyday. And then that environment, ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases, we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer facing app or a back office application. And what are business is doing is it's handling large portfolios of data for fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure. >> And the plan going forward? >> So the plan going forward, you actually asked me that in Miami, and it's go global. So when we started in Miami and that first deployment, we had four nodes. We now have 283 nodes around the world, and we started with about 50 terabytes of data. We've now got 3.8 petabytes of data. And we're deployed across four data centers and six remote offices. And people ask me often what is the value that we achieved? So simplification. It's all just easier, and it's all less expensive. Being able to scale with the business. So our Cloudera environment ended up with one day where it spiked to 1,000 times more load, 1,000 times, and it just responded. We had rally cries around improved productivity by six times. So 600% improved productivity, and we were able to actually achieve that. The numbers you just saw on the slide that was very, very fast was we calculated a 40% reduction in total cost of ownership. We've exceeded that. And when we talk about waste, that other number on the board there is when I saved the company one hour of maintenance activity or unplanned downtime in a month which we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000 each time I save that one hour. >> Wow. All right, Karen from CSE. Thank you so much. That was great. Thank you. I mean you know some of these data points frankly as I started talking to Karen as well as some other customers are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering is to keep ourselves honest on either velocity or quality even hiring people and so forth. Is to actually the more we touch customers lives, the more we touch our partner's lives, the more it allows us to ensure that we can put ourselves in their shoes to kind of make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true North is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design that brings the power of public cloud this AWS or GCP like architectures into your mainstream enterprises. And so when we take that to the next level which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things. Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products which is the bedrock of our you know opinionated design inside this company which is Prism and Acropolis. And Prism provides, I mentioned, comes with a ton of machine-learning based intelligence built into the product in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because now that PC, Prism Central that you know has been decoupled from our mainstream release strain and will continue to release on its own cadence. And the same thing when you actually flip it to AHV on its own train. Now AHV, two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcome it, stay with VMware or something. If you need, you know Metro, stay with another technology, else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion dollar ELAs in play that have now been switched over. Like I'll give you a simple example here, and there's lots of these that I'm sure many of you who are in the audience that are in this camp, but when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. If you look at the online payment company. I'm pretty sure everybody's used this at one time or the other. They had the world's largest private cloud on open stack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through a rigorous VOC testing scale, hardening, and it's a full blown AHV only stack. And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer grade design? And Calm was acquired, as you guys know, in 2016. We had a choice of taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported Brownfield, It supported AHV. I mean they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was go down the path of dynamic ops or some other products where you took it for revenue or for acceleration, you plopped it into the ecosystem and sold it at this power sucking alien on top of our stack, right? Or we took a step back, re-engineered the product, kept some of the core essence like the workflow engine which was good, the automation, the object model and all, but refactored it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products now that's flying off the shelves. If you saw the number of registrants, I got a notification of this for the breakout sessions, the number one session that has been preregistered with over 500 people, the first two sessions are around Calm. And justifiably so because it just as it lives up to its promise, and it'll take its time to kind of get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product market fit associated with Calm is dead on from what the feedback that we can receive. And so Calm itself is on its own rapid cadence. We had AWS and AHV in the first release. Three or four months later, we now added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is if you can combine Calm and along with private cloud automation but also extend it to multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software defined data center message, we're not complete as a full blown AWS or GCP like IA stack until we do the last horizon of networking. And you probably heard me say this before. You heard Dheeraj and others talk about it before is our problem in networking isn't the same in storage. Because the data plane in networking works. Good L2 switches from Cisco, Arista, and so forth, but the real problem networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's mis-configured, or there've been some packets dropped at the top of the rack. Well that all ends now with Flow. And with Flow, essentially what we've now done is take the work that we've been working on to create built-in visibility, put some network automation so that you can actually provision VLANs when you provision VMs. And then augment it with micro segmentation policies all built in this easy to use, consume fashion. But we didn't stop there because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value because the world of applications especially discovering application topologies is a heady problem. And if we didn't address that, we wouldn't be fulfilling on this ambition of providing one-click network segmentation. And so that's where Netsil comes in. Netsil might seem on the surface yet another next generation application performance management tool. But the innovations that came from Netsil started off at the research project at the University of Pennsylvania. And in fact, most of the team right now that's at Nutanix is from the U Penn research group. And they took a really original, fresh look at how do you sit in a network in a scale out fashion but still reverse engineer the packets, the flow through you, and then recreate this application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage, Rajiv. >> How you doing? >> Okay so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all as you mentioned, Netsil's completely non invasive. So it installs on the network, it does all its magic from there. There're no host agents, non of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing a deep packet inspection on all your application data, and can give you insights into services and APIs which is very important for modern applications and the way they behave. To do all this of course performance is key. So Netsil's built around a completely distributed architecture scaled to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Nestil together, so micro segmentation and Netsil. So to do that, we install Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map essentially of all their interactions, and you can see it's like a Google Maps view. I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here. Things like transactions per second and latencies and so on. But if I wanted to micro segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications. And Netsil integrates with the metadata. So this is that the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what tool operating at layer four just looking at ports and TCP traffic would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So you go service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HA proxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro segment at individual service level. You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro segment fairly easily. So let's switch over to-- >> Before we dot that though, do you guys see how he gave himself the pseudonym called Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think a year ahead. This is next years theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to talk a few words about the power of Flow, and what's available in 5.6? >> Sure so Flow's been around since the 5.6 release. Actually some of the functionality came in before that. So it's got invisibility into the network. It helps you debug problems with WLANs and so on. We had a lot of orchestration with other third party vendors with load balancers, with switches to make publishing much simpler. And then of course with our most recent release, we GA'ed our micro segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have load blancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate. And it is something that we will do in future releases. Right now, it's of course not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet. All internet traffic goes to the load balancer, only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database, I start a hack against the database. And I have my trusty brute force password script over here. It's trying the most common passwords against the database. And if I happen to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow what happens is it actually detects there's now an ongoing a flow, a flow that's outside of policy that's shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me now to make decisions, does this flow should it be part of the policy, should it not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one click segmentation and play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we got back and talk a little bit more about, so that's Flow. It's shipping now in 5.6 obviously. It'll come integrated with Netsil functionality as well as a variety of other enhancements in that next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your host, all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways. It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go focus on. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests are generating errors, and that deserves some attention. And if I scroll down again, and I see the top five status codes I'm getting, almost 10% of my requests are generating 500 errors, HTTP 500 errors which are internal server errors. So there's something going on that's wrong with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly from a broad problem that I was getting a high HTTP error rate. In fact, usually you will discover there's this customer complaining about a lot of errors happening in your application. You can quickly narrow down to exactly what the cause was. >> Got it. This is what we mean by hyperconvergence of the network which is if you can truly isolate network related problems and associate them with the rest of the hyperconvergence infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> So to talk about this evolution from invisible infrastructure to invisible data centers is another customer of ours that has embarked on this journey. And you know it's not just using Nutanix but a variety of other tools to actually fulfill sort of like the ambition of a full blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative twice now hosted by Incisive media a large magazine. It's basically they host a bunch of you know various buy side, sell side, and you can submit projects in various categories. So we've won the best cloud twice now, 2015 and 2017. The 2017 award is when you know as part of our private cloud journey we were laying the foundation for our private cloud which is 100% based on hyperconverged infrastructure. So that was that award. And then 2017, we've kind of built on that foundation and built more developer-centric next gen app services like PAS, CAS, SDN, SDS, CICD, et cetera. So we've built a lot of those services on, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business? I talk about this with my teams, they were very familiar with this. That's the mindset that I instill within the teams. The mission, the challenge is the same which is how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience and while we're doing that not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take 'em through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And you know initiatives change year on year, the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms moving closer to an SRE like concept. >> And then you've built out a full stack now across computer storage, networking, all the way with various use cases in play? >> Yeah, and we're aggressively moving towards PAS, CAS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack you know obviously built on Nutanix, SDS for software fine storage, compute and networking we've got SDN turned on. We've got, again, PAS and CAS built on this platform. And then finally, we've hooked our CICD tooling onto this. And again, the big picture was always frictionless infrastructure which we're very close to now. You know 100% of our code deployments into this environment are automated. >> Got it. And so what's the net, net in terms of obviously the business takeaway here? >> Yeah so at Northern we don't do tech for tech. It has to be some business benefits, client benefits. There has to be some outcomes that we measure ourselves against, and these are some great metrics or great ways to look at if we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. We, for example, there was a build team that was very focus on building servers, deploying applications. That team's gone down from I think 40, 45 people to about 15 people as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity in scale within your operating model. So that's another example. Another example, right here you see on the screen. Faster time to market. We've got various types of applications at any given point that we're deploying. There's a next gen cloud native which go directly on PAS. But then a majority of the applications still need the traditional IS components. The time to market to deploy a complex multi environment, multi data center application, we've taken that down by 60%. So we can deliver server same day, but we can deliver entire environments, you know add it to backup, add it to DNS, and fully compliant within a couple of weeks which is you know something we measure very closely. >> Great job, man. I mean that's a compelling I think results. And in the journey obviously you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. So razzled by his data points there. So you're supposed to wear some shoes, right? >> I know my inner glitch. I was going to wear those sneakers, but I forgot them at the office maybe for the right reasons. But the story behind those florescent sneakers, I see they're focused on my shoes. But I picked those up two years ago at a Next event, and not my style. I took 'em to my office. They've been sitting in my office for the last couple years. >> Who's received shoes like these by the way? I'm sure you guys have received shoes like these. There's some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered it to so many of my engineers. Are you size 11? Do you want these? And they're unclaimed? >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked, other than that things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So as we get to the final phase which is obviously as we embark on this multi-cloud journey and the complexity that comes with it which Dheeraj hinted towards in his session. You know we have to take a cautious, thoughtful approach here because we don't want to over set expectations because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact. And so we've taken a tiered approach to it as we'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session which is about look across new clouds. So it's no longer Nutanix, Dell, Lenova, HP, Cisco as the new quote, unquote platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS both on the data plane side and control plane side. Then what you're seeing with the advent of Calm doing a marketplace and automation as well as Beam doing governance and compliance is the fact that you'll see more and more such capabilities of multi-cloud operations burnt into the platform. And example of that is Calm with the new 5.7 release that they had. Launch supports multiple clouds both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half an hour with Dheeraj and Vijay on stage is something that's even more, if I can call it, you know first order because you get the provisioning and operations second. The first order is to say look whatever my developers have consumed off public cloud, I just need to first get our arm around to make sure that you know what am I spending, am I secure, and then when I get comfortable, then I am able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for awhile, is this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of a cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require and changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight wick on for the last year and a half. Because look this is not about taking our current OS which does a good job of scaling and plopping it into a Equinix or a third party data center and calling it a hybrid cloud. This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time, give those functionality back on premises so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still need new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think about this SDN 2.0 because we have 10 years worth of looking backwards on how GCP has done it, or how Amazon has done it, and now sort of embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and then at the same time, provide new services that have never been delivered before. Everyone obviously does failover and failback in DR it just takes months to do it. Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for you business needs in the middle of the day. And that's the real bar that we've set for Xi that we are working towards in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope and beyond the state of the art as you were saying in the industry. As part of that, there's a whole bunch of things that we have done starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling is identical on both sides. When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity with one-click failover, one-click failback. We're going to show you one click test today. So Melina, why don't we start with showing how you go from a private cloud, seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one-click, I'm going to be able to extend that to my Xi cloud services account. I'm doing this using my my Nutanix credential and a password manager. >> Vinny: So here as you notice all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my my Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi right here. >> Vinny: Yeah as you see, using a log in account that you already knew mynutanix.com and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone that's your own Prism central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experience. With direct connect, you can create a dedicated network connection between both environments, or VPN you can use a public internet and a VPN service. Let's go ahead and enable VPN in this environment. Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we will deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one-click. >> And this is another small sign or feature that we're building net new as part of Xi, but will be burned into our core Acropolis OS so that we can also be delivering this as a stand alone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a quote, unquote multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi but in house. >> Exactly. And on this second step of the wizard, there's a few inputs around how you want the gateway configured, your VLAN information and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, you know what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud such that our customers can use their IP addresses, the subnets, and bring their own IP. And that is another step towards making sure the operation and tooling is kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information right in the same UI. >> Vinny: And networking is just tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure, to talk about how we preserve entities from my on-premises to Xi, it's better to use my production environment. And first thing you might notice is the log in screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD obviously on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And this is the Active Directory credential that our customers would have. They use it on-premises. And we allow the setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it maybe timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially when you log into Xi, you'll be able to see what are the environment capabilities that we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the use logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know when you define these policies on premises, you spend a lot of effort and create them. And now when you're extending to the public cloud, you don't want to do it again, right? So we've done a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this compatibility. >> So one is you know just the basic job of making the environments consistent on two sides, but then it's also now talking about the data part, and that's what DR is about. So if you have a workload running on premises, we can take the data and replicate it using your policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a run book. And the run book essentially it's a recovery plan. And that says okay I already have the backups of my VMs in case of disaster. I can take my recovery plan and hit you know either failover or maybe a test. And then my application comes up. First of all, you'll talk about the boot order for your VMs to come up. You'll talk about networking mapping. Like when I'm running on-prem, you're using a particular subnet. You have an option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working.? >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions West Coat, two data centers East Coast, two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi or the test network to the test network. What's really cool here though is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take a note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app that I have protected with this plan after a failover. So I'll be able to access it from the public internet really easily from my phone or check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience including failover in one-click, failback. But we're going to show you test now. So Melina, let's talk about test because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see how the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences as you know and come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is we're running a series of validation checks because we want to make sure that you have your network configured properly, and there's other configuration details in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at my network policies that I've configured on my test network. Because I want to access the application from the public internet but only port 80. And if we look here under our policies, you can see I have port 80 open to permit. So that's good. And if I needed to create a new one, I could in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine. So this is an example of where we're taking some of power of workflow and automation that Clam has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's a floating IP that I mentioned early that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. And that's essentially anybody in the audience here can go use your laptop or your cell phone and hit that and start to work. >> Yeah so by the way, just to give you guys an idea while you guys maybe use the IP to kind of hit it, is a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a block or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. What's something that you know we've been doing around you know in addition to taking say our own extended enterprise public cloud with Xi. You know we do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And to sort of really assist in the try and call it transformation of enterprises to choose the right cloud for the right workload. If you guys remember, we actually invested in a tool over last year which became actually quite like one of those products that took off based on you know groundswell movement. Most of you guys started using it. It's essentially extract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to really save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework, obviously re-platformed it for the multi-cloud world to kind of solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard you know operation. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability. And what that means is when you have to move back from the cloud, you have an extended period of downtime because there's now way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back. Two, that the downtime that you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running and all the regions that they're running. Target environment, this is my Nutanix environment. I call it Earth. And I can add target environment similarly, IP address and credentials, and we do the rest. Right, okay. Now migration plans. I have Bifrost one as my migration plan, and this is how migration works. First you create a plan and then say start seeding. And what it does is takes a snapshot of what's running in the cloud and starts migrating it to on-prem. Once it is an on-prem and the difference between the two sides is minimal, it says I'm ready to cutover. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL server one and two, go to next. Right now it's looking at the target Nutanix environment and seeing it had enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix service of change block tracking overlaid on top of the cloud. There are two options one is automatic where you'll give us the credentials for your VMs, and we'll inject our capability there. Or manually you could do. You could copy the command either in a windows VM or Linux VM and run it once on the VM. And change block tracking since then in enabled. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem which makes it harder than the other way around is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built in this app mobility product is to provide that overlay capability across multiple clouds. >> Yeah, and the last step here was to select the target network where the VMs will come up on the Nutanix environment, and this is a summary of the migration plan. You can start it or just save it. I'm saving it because it takes time to do the seeding. I have the other plan which I'll actually show the cutover with. Okay so now this is Bifrost 1. It's ready to cutover. We started it four hours ago. And here you can see there's a SQL server 003. Okay, now I would like to show the AWS environment. As you can see, SQL server 003. This VM is actually running in AWS right now. And if you go to the Prism environment, and if my login works, right? So we can go into the virtual machine view, tables, and you see the VM is not there. Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay now it the time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship it to on-prem, and in on-prem now start you know configure the target VM and start bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay so the SQL server is now stopping. So that means it has quiesced and stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and what we have done. (audience clapping) So essentially, this is about making two things possible, making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So last step. So to really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason who's the CTO of Cyxtera. And he'll introduce who Cyxtera is. Most of you guys are probably either using their assets or not without knowing their you know the new name. But is someone that was in the cloud before it was called cloud as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago which I'll let Jason talk about. This journey that he's going to talk about is how a partner, slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right so Cyxtera obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now as well as the software companies owned by Medina capital. So we're like the world's biggest startup now. So we have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo deliver a cloud based-- >> Yeah so, colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, so that's what we're doing with the Cyxtera extensible data center or CXD. And to do that, we're deploying software defined networks in our facilities and developing automations so customers can go and provision data center services and the network connectivity through a portal or through REST APIs. >> Got it, and what's different now? I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. And we looked at players in the space. And as you mentioned, there's actually a lot of them, more than I thought. And we had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know Nutanix has a lot of focus on things like ease of deployment. So it's very simple for us to automate deploying compute for customers. So we can use foundation APIs to go configure the servers, and then we turn those over to the customer which they can then manage through Prism. And something important to keep in mind here is that you know this isn't a manged service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, you know their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like you know lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it. You know they can drive that themselves. >> Got it. Any other final words around like what do you see of the partnership going forward? >> Well you know I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man. (audience clapping) So as we look at the full journey now between obviously from invisible infrastructure to invisible clouds, you know there is one thing though to take away beyond many updates that we've had so far. And the fact is that everything that I've talked about so far is about completing a full blown true IA stack from all the way from compute to storage, to vitualization, containers to network services, and so forth. But every public cloud, a true cloud in that sense, has a full blown layer of services that's set on top either for traditional workloads or for new workloads, whether it be machine-learning, whether it be big data, you know name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But then based on some customer feedback and a lot of attention from what we've seen in the industry go out, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the past stack. It's time for us to kind of move up the stack with our own offering that obviously adds value but provides some of our core competencies in data and takes it to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT and short of saving you from true Oracle licensing, it solves various other Oracle problems which is about truly simplifying databases much like what RDS did on AWS, imagine enterprise RDS on demand where you can provision, lifecycle manage your database with one-click. And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about? We're going to start with say maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream to pass you off, that is what Nutanix is today for IT apps, we want to recreate that magic for devops and get back those weekends and freedom to DBAs. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get in provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and could take days. It doesn't get any easier after that for the longterm maintence with things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite awhile now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click like you said. And Bala and I are so excited to finally show this to the world. We think it's actually Nutanix's best kept secrets. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available. So we'll do a two node rack configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how much resources you need as well as what network type you want and what software revision you want. This is actually controlled by the DBAs. So DBAs, and compute administrators, and network administrators, so they can set their standards without having a DBA. >> Sunil: Got it, okay, let's take a look. >> John: So if we go to the next piece here, it's going to personalize their database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. So we're going to be provisioning this to Nutanix's best practices. And the key thing there is just like these past services you don't have to read dozens of pages of best practice guides, it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that normally one would take I guess hours if not days to provision and Oracle RAC data. >> John: Yeah, across multiple teams too. So if you think about the lifecycle especially if you have onshore and offshore resources, I mean this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machines. We'll leave this at the defaults for now, but the key thing to not here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, two node database and so forth? >> John: Yep, two node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala, maybe around, one of the things that, you know and I know many of you guys have seen this, is the fact that if you look at database especially Oracle but in general even SQL and so forth is the fact that look if you really simplified it to a developer, it should be as simple as I copy my production database, and I paste it to create my own dev instance. And whenever I need it, I need to obviously do it the opposite way, right? So that was the goal that we set ahead for us to actually deliver this new past service around Era for our customers. So you want to talk a little bit more about it? >> Sure Sunil. If you look at most of the data management functionality, they're pretty much like flavors of copy paste operations on database entities. But the trouble is the seemingly simple, innocuous operations of our daily lives becomes the most dreaded, complex, long running, error prone operations in data center. So we actually planned to tame this complexity and bring consumer grade simplicity to these operations, also make these clones extremely efficient without compromising the quality of service. And the best part is, the customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third party systems. >> Got it. So let's take a look at this functionality of I guess snapshoting, clone and recovery that you've now built into the product. >> Right. So now if you see the core feature of this whole product is something we call Time Machine. Time Machine lets the database administrators actually capture the database tape to the granularity of seconds and also lets them create clones, refresh them to any point in time, and also recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship database management database which is about 2.3 terabytes. If you see, the Time Machine has been active about four months, and SLA has been set for continuously code revision of 30 days and then slowly tapers off 30 days of daily backup and weekly backups and so on, so forth. On the right hand side, you will see different colors. The green color is pretty much your continuously code revision, what we call them. That lets you to go back to any point in time to the granularity of seconds within those 30 days. And then the discreet code revision lets you go back to any snapshot of the backup that is maintained there kind of stuff. In a way, you see this Time Machine is pretty much like your modern day car with self driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach up to the goal kind of stuff. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, some of these times you need to create a snapshot for backup purposes, Time Machine has manual controls. All you need to do is give it a snapshot name. And then you have the ability to actually persist this snapshot data into a third party or object store so that your durability and that global data access requirements are met kind of stuff. So we kick off a snapshot operation. Let's look at what it is doing. If you see what is the snapshot operation that this is going through, there is a step called quiescing the databases. Basically, we're using application-centric APIs, and here it's actually RMAN of Oracle. We are using the RMan of Oracle to quiesce the database and performing application consistent storage snapshots with Nutanix technology. Basically we are fusing application-centric and then Nutanix platform and quiescing it. Just for a data point, if you have to use traditional technology and create a backup for this kind of size, it takes over four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like snapshot is done. This is full sensitive backup. You can pretty much use it for database restore kind of stuff. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for clone, again through the simplicity of command Z command, all you need to do is pick the time of your choice maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not? >> Bala: You select the time, all you need to do is click on the clone. And most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have to make two choices that is where do you want this clone to be created with a brand new VM database server, or do you want to place that in your existing server? So we'll go with a brand new server, and then all you need to do is just give the password for you new clone database, and then clone it kind of stuff. >> Sunil: And this is an example of personalizing the database so a developer can do that. >> Bala: Right. So here is the clone kicking in. And what this is trying to do is actually it's creating a database VM and then registering the database, restoring the snapshot, and then recoding the logs up to three o'clock in the morning like what we just saw that, and then actually giving back the database to the requester kind of stuff. >> Maybe one finally thing, John. Do you want to show us the provision database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about here before from creating the virtual infrastructure, and provisioning the database infrastructure, and configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC by the way, right? But that's what you've seen now, and that's what the power of Nutanix Era is. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing around, obviously when we're building this, it's built as a past service. It's not meant just for operational benefits. And so one of the core design principles has been around being API first. You want to show that a little bit? >> Absolutely, Sunil, this whole product is built on API fist architecture. Pretty much what we have seen today and all the functionality that we've been able to show today, everything is built on Rest APIs, and you can pretty much integrate with service now architecture and give you your devops experience for your customers. We do have a plan for full fledged self-service portal eventually, and then make it as a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning, lifecycle management powered by APIs, I think what we're going to see is the fact that a lot of the products that we've talked about so far while you know I've talked about things like Calm, Flow, AHV functionality that have all been released in 5.5, 5.6, a bunch of the other stuff are also coming shortly. So I would strongly encourage you guys to kind of space 'em, you know most of these products that we've talked about, in fact, all of the products that we've talked about are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping but also stuff that's coming out. And so one thing to keep in mind to sort of takeaway is that we're doing this all obviously with freedom as the goal. But from the products side, it has to be driven by choice whether the choice is based on platforms, it's based on hypervisors, whether it's based on consumption models and eventually even though we're starting with the management plane, eventually we'll go with the data plane of how do I actually provide a multi-cloud choice as well. And so when we wrap things up, and we look at the five freedoms that Ben talked about. Don't forget the sixth freedom especially after six to seven p.m. where the whole goal as a Nutanix family and extended family make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date : May 9 2018

SUMMARY :

PA Announcer: Off the plastic tab, would you please welcome state of Louisiana And it's my pleasure to welcome you all to And I'd like to second that warm welcome. the free spirit. the Nutanix Freedom video, enjoy. And I read the tagline from license to launch You have the freedom to go and choose and having to gain the trust with you over time, At the same time, you spent the last seven, eight years and apply intelligence to say how can we lower that you go and advise with some of the software to essentially reduce their you know they're supposed to save are still only 20%, 25% utilized. And the next thing is you can't do So you actually sized it for peak, and bring the control while retaining that agility So you want to show us something? And you know glad to be here. to see you know are there resources that you look at everyday. So billions of events, billing, metering events So what we have here is a very popular are everywhere, the cloud is everywhere actually. So when you bring your master account that you create because you don't want So we have you know consumption of the services. There's a lot of money being made So not only just get visibility at you know compute So all of you who actually have not gone the single pane view you know to mange What you see here is they're using have been active in Russia as well. to detect you know how can you rightsize So one click, you can actually just pick Yeah, and not only remove the resources the consumption for the Nutanix, you know the services And the most powerful thing is you can go to say how can you really remove things. So again, similar to save, you're saying So the idea is how can we give our people It looks like there's going to be a talk here at 10:30. Yes, so you can go and write your own security So the end in all this is, again, one of the things And to start the session, I think you know the part You barely fit in that door, man. that's grown from VDI to business critical So if we hop over here to our explore tab, in recent releases to kind of make this happen? Now to allow you to full take advantage of that, On the same environment though, we're going to show you So one of the shares that you see there is home directories. Do we have the cluster also showing, So if we think about cloud, cloud's obviously a big So just like the market took a left turn on Kubernetes, Now for the developer, the application architect, So the goal of ACS is to ensure So you can deploy however many of these He hasn't seen the movies yet. And this is going to be the number And if you come over to our office, and we welcome you, Thanks so much. And like Steve who's been with us for awhile, So I remember, so how many of you guys And the deployment is smaller than what we had And it covers a lot of use cases as well. So the use cases, we're 90%, 95% deployed on Nutanix, So the plan going forward, you actually asked And the same thing when you actually flip it to AHV And to give you a flavor of that, let me show you And now you can see this is a much simpler picture. Yeah, for those guys, you know that's not the Avengers This is next years theme. So before we cut over from Netsil to Flow, And that of course is the most important So that's like one click segmentation and play right now? You can compare it to other products in the space. in that next few releases. And if I scroll down again, and I see the top five of the network which is if you can truly isolate (audience clapping) And you know it's not just using Nutanix than in a picture by the way. So tell me a little bit about this cloud initiative. and the second award was really related to that. And a lot of this was obviously based on an infrastructure And you know initiatives change year on year, So the stack you know obviously built on Nutanix, of obviously the business takeaway here? There has to be some outcomes that we measure And in the journey obviously you got So you're supposed to wear some shoes, right? for the last couple years. I'm sure you guys have received shoes like these. So again, I'm sure many of you liked them. That's the only thing that hasn't worked, Thanks a lot. is to enable you to choose the right cloud Yeah, we should. of the art as you were saying in the industry. that to my Xi cloud services account. So you don't have to log in somewhere and create an account. But let's go take a look at the Xi side that you already knew mynutanix.com and 30 seconds in, or we will deploy a VPN for you on premises. So that's one of the other things to note the gateway configured, your VLAN information Vinny: So right now, you know what's happening is And just while you guys were talking, of the other things we've done? And first thing you might notice is And we allow the setting to be set on the Xi cloud services There's always going to be some networking problem onstage. This is a good sign that we're running So for example, you just saw that the same user is to also show capabilities to actually do failover And that says okay I already have the backups is essentially coming off the mainstream Xi profile. That's the most interesting piece here. or the test network to the test network. So let's see how the experience looks like details in place for the test to be successful. And to give you guys an idea behind the scenes, And so great, while you were explaining that, And that's essentially anybody in the audience here Yeah so by the way, just to give you guys Yeah, you guys should all go and vote. Let's see where Xi is. I'll scroll down a little bit, but keep the... Thank you so much. What's something that you know we've been doing And what that means is when you have And very quickly you can see these are the VMs So one of the core innovations being built So that means it has quiesced and stopping the VM there. So essentially what Vinny just showed and making it painless so that the downtime you have And you know Jason who's the CTO of Cyxtera. of the CenturyLink data centers. bunch of other assets. So we have over 50 data centers around the world, And to be fair, a lot of what happens in data centers in the traditional kind of on demand is that you know this isn't a manged service. of the partnership going forward? Well you know I think this would be Thanks a lot, Jason. And in the enterprise, if you think about it, We're going to start with say maybe some to pass you off, that is what Nutanix is Got it. And Bala and I are so excited to finally show this And the other key thing here is we've architected And the key thing there is just like these past services if not days to provision and Oracle RAC data. So if you think about the lifecycle And then there are a few steps here, but the key thing to not here is we've got So that's provisioning. that this is going to automate. is the fact that if you look at database And the best part is, the customers So let's take a look at this functionality On the right hand side, you will see different colors. And then you have the ability to actually persist of command Z command, all you need to do Bala: You select the time, all you need the database so a developer can do that. back the database to the requester kind of stuff. Do you want to show us the provision database So you can see all the tasks that we were talking about here What we just showed you is an Oracle two node instance (audience clapping) And so one of the core design principles and all the functionality that we've been able Good stuff, man. But from the products side, it has to be driven by choice PA Announcer: Ladies and gentlemen,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
KarenPERSON

0.99+

JuliePERSON

0.99+

MelinaPERSON

0.99+

StevePERSON

0.99+

MatthewPERSON

0.99+

Julie O'BrienPERSON

0.99+

VinnyPERSON

0.99+

CiscoORGANIZATION

0.99+

DellORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

DheerajPERSON

0.99+

RussiaLOCATION

0.99+

LenovoORGANIZATION

0.99+

MiamiLOCATION

0.99+

AmazonORGANIZATION

0.99+

HPORGANIZATION

0.99+

2012DATE

0.99+

AcropolisORGANIZATION

0.99+

Stacy NighPERSON

0.99+

Vijay RayapatiPERSON

0.99+

StacyPERSON

0.99+

PrismORGANIZATION

0.99+

IBMORGANIZATION

0.99+

RajivPERSON

0.99+

$3 billionQUANTITY

0.99+

2016DATE

0.99+

Matt VincePERSON

0.99+

GenevaLOCATION

0.99+

twoQUANTITY

0.99+

ThursdayDATE

0.99+

VijayPERSON

0.99+

one hourQUANTITY

0.99+

100%QUANTITY

0.99+

$100QUANTITY

0.99+

Steve PoitrasPERSON

0.99+

15 timesQUANTITY

0.99+

CasablancaLOCATION

0.99+

2014DATE

0.99+

Choice Hotels InternationalORGANIZATION

0.99+

Dheeraj PandeyPERSON

0.99+

DenmarkLOCATION

0.99+

4,000QUANTITY

0.99+

2015DATE

0.99+

DecemberDATE

0.99+

threeQUANTITY

0.99+

3.8 petabytesQUANTITY

0.99+

six timesQUANTITY

0.99+

40QUANTITY

0.99+

New OrleansLOCATION

0.99+

LenovaORGANIZATION

0.99+

NetsilORGANIZATION

0.99+

two sidesQUANTITY

0.99+

100 customersQUANTITY

0.99+

20%QUANTITY

0.99+

Pat Gelsinger | VMworld 2013


 

(upbeat music) >> Hey welcome back to VMWorld 2013. This is theCUBE, flagship program. We go out to the events to extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE. I'm joined with David Vilante, my co-host from Wikibon.org and we're kicking off today with an awesome interview. CEO of VMWare, Pat Gelsinger, CUBE Alumni. Been on the theCUBE with Dave and I multiple times. So many times. You are in like the leaderboards. So in terms of overall guest frequency, you've been up there, but also you're also the top dog at VMWare and great to see you again. How are you feeling? >> Thank you, thank you. Good morning, guys. >> Pleasure. >> Good to see you. >> So what's new? I mean obviously you're running the show here. You're running around. Last night you were at the NetApp event. You ran through CIO, R&D. You got to go out and touch all the bases out here. >> Yeah, yeah. >> What does that look like? What have you done and obviously, you did, the key note was awesome. What else is going on? >> You know, everything, you know, VMWorld is just, it's just overwhelming, right? I mean 23,000 people almost. I mean you know the amount of activities around that and it really has become the infrastructure event for the industry and you know, if you're anything related to infrastructure, right, what's going on, right in the enterprise side of IT, you got to be here, right? And there's parties everywhere. Every vendor has their events. Every you know, different particular technology area, a bunch of the things that we're doing, and of course to me, it's just delightful that I can go touch as many people and you know, they get excited to see the CEO. I have no idea why, but hey I get to show up. It's good. >> You've been in the industry for a long time. Obviously you've seen all the movies before and we've talked about the seas of change in the EMC world when you were there, but we had two guests on yesterday that were notable. Steve Herrod who's now a venture capitalist at Generalcatalyst and Jerry Chen who's a VC at Graylock, and we have a 10-year run here at VMWare which is esteemed by convention, but the first five years were a lot different than the last five years, and certainly, the last year you were at the helm. So what's changed in the past 24 months? A lot of stuff has certainly evolved, right? So the Nicira acquisition certainly changed up, changed everything, right? You saw software-defined data center now come into focus this year, but really, just about less than 24 months, a massive kind of change. What, how do you view all that? How do you talk to your employees and the customers about that change? >> Well you know, as we think about the software-defined data center vision, right, it is a broad comprehensive powerful vision for rearchitecting how the data center is operated, how customers take advantage of it. You know and the results and the agility and efficiency that comes from that. And obviously the Nicira acquisition is sort of the shot heard 'round the world as the really, "Okay, these guys are really serious "about making that happen." And it changes every aspect of the data center in that regard. You know and this year's VMWorld is really, I'll say, putting the beef on the bones, right? We talked about the vision, we talked about each of the four legs of it: compute, networking, storage and management of automation. So this year it's really putting the beef on the bones and the NSX announcement, putting substance behind it. The vSAN announcement, putting substance behind it. The continuing progress of management and automation. And I think everything that we've seen here in the customer conversations, the ecosystem of partner conversations are SDDC is real. Now get started. >> Can you, I think you've had some fundamental assumptions in that scenario, particularly around x86 in the service business. Essentially if I understand it, you've said that x86 will dominate that space. You're expecting status quo in the sense that it will continue to go in the cadence of you know, cores and Moore's Law curve even though we know that's changing. But that essentially will stay as is and it's the other parts, the networking and the storage piece that you're really, where you define conventions. Is that right? >> Yeah certainly we expect a continuing momentum by the x86 by Intel in that space, but as you go think about software-defined everything in the data center really is taking the power of that same core engine and applying it to these other areas because when we say software-defined networking, right, you need a very high packet flow capability and that's running a software on x86. We need to talk about data services running in software, right? You need high performance. It's snapshots, file systems, etc. running on software, no longer bound to you know physical array. So it really is taking that same power, that same formula right, and applying it to the rest of the elements of the data center and yeah, we're betting big right, that that engine will continue and that we'll be successful in being able to deliver that value in this software layer running on that core powerful Silicon engine. >> So Pat, so obviously when you came on board, the first thing you did was say, "Hey, the pricing. "I want to change some things." Hyper-Visor's always been kind of this debate. Everyone always debates about what to do with Hyper-Visor. But still, virtualization's still the enabling technology so you know, you kind of had this point where the ball's moving down the field and all of a sudden, in 2012, it changed significantly, and that was a lot in part with your vision with infrastructure. As infrastructure gets commoditized, what is going to change in the IT infrastructure and for service providers, and the value chains that's going to be disrupted? Obviously economics are changing. What specifically is virtualization going to do next with software defined that's going to be enabling that technology? >> Yeah, you know and I, you know, we're not out to commoditize. We're out to enable innovation. We're out to enable agility, right, and then the course of that, it changes what you expect and what the underlying hardware does. But you know, it's enabling that ecosystem of innovation is what we're about and customers to get value from that and as you go look at these new areas, "Hey, you know, we're changing how you do networking." Right, all of a sudden, we're going to create a virtual network overlay that has all of these services associated with it that are proficient just like VMs in seconds. We're creating a new layer of how storage is going to be enabled. You know, this policy-driven capability. Taking those capabilities that before were tightly bound to hardware, delivering it through the software layer, enabling this new magnificent level of automation and yesterday's demo with Carl. I mean Carl does a great CTO impersonation, doesn't he? And he's getting some celebrity action. He's like, "I got the bottle." >> Oh yeah. >> Steve Herrod gave him a thumbs up too. >> Yes, yeah Steve gave him a good job. But you know, so all of those pieces coming together, right, is you know, really, and you know, just the customer and the ecosystem response here at the show has been, "Oh, you know, right, "SDDC, it's not some crazy thing out there in the future. "This is something I can start realizing value for now." >> Well it's coming into focus. It's not 100% clear for a lot of the customers because they're still getting into the cloud and the hybrid cloud, I call it the halfway house to kind of a fully evolved IT environment, but you know. How do you define? >> No it is the endgame. Hyper cloud is not a halfway house. What are you talking about? What are you talking about? >> To to full all-utility computing. That is ultimately what we're saying. >> Halfway house? >> I don't mean it that way. (group laughs) >> Help me. >> Okay next question. >> (chuckles) When you're in a hole, stop digging, buddy. >> So how do you define the total adjusted mark at 50 billion that Carl talked about? >> Yeah you know, as we looked at that, we said across the three things, right that we said, software-defined data center, 28 billion dollars; hyper cloud, 14 billion; eight billion for the end-user computing; that's just 50 billion opportunity. But even there, I think that dramatically understates the market opportunity. IT overall is $1.7 trillion, right? The communications, the services, outsourcing, etc. And actually the piece that we're talking about is really the underpinnings for a much larger set of impact in the part of what applications are going to be developed, how services are delivered, how consumers and businesses are able to take advantage of IT. So yes, that's the $50 billion. We'll give you the math, we'll show you all the details of Gartner's and IDC's to support it. But to us, the vision and the impact that we're out for is far more dramatic than that would even imply. >> Well that's good news because we said to Carl, "It's good that your market cap is bigger than--" (Pat laughs) >> Oh yeah your TAM is bigger than your market cap. Well okay now we-- >> Yeah, that's nice, yeah. Yeah, we're out to fix the market cap. >> Yeah he said, "Now we got to get the 50 billion. So I'm glad to hear there's upside to the TAM. But I wanted to ask you about the ecosystem conversation. When you talk about getting things like you know, software defined network and software defined source, what's the discourse like in ecosystem? For guys like, let's take the storage side. EMC, NetApp last night, they say, "Hey you know, software defined storage. "We really like that, but we want to be in that business." so what, talk about that discussion. >> Yeah, clearly every piece of software defined, whether it's software defined storage, software defined data services, software defined security services or networking, every piece of that has ecosystem implications along the way. But if you go talk to a NetApp or a EMC, they'd say, "You're an appliance vendor." And they would quickly respond and say, "No, our value's in software, "and we happen to deliver it as an appliance." And we'd say, "Great, let's start delivering "the software value as a software appliance "through virtualization and through the software delivery "mechanisms that we're talking about for this new platform." Now each one of them has to adjust their product strategies, their, you know, business strategies to enable those software components, right, independent of their hardware elements for full execution and embodiment into the software-defined data center feature. But for the most part, every one of them is saying, "Yes, now how do we figure out how to get there, "and how do we decompose our value, embody it it in new ways "and how can we enable that in "this new software-defined data center vision?" >> And they've always done that with software companies. I mean certainly Microsoft and Oracle have always grabbed a piece of the storage stack and put it into their own, but it's been very narrow, within their own spaces, and of course, VMWare is running any application anywhere. So it's more of a general purpose platform. >> Absolutely. >> Is it a tricker fit for the ecosystem to figure out where that white space is? >> Absolutely. Every one of them has to figure out their strategy. If you're F5, you know, I was with John McAdam this morning. "Okay, how do I take my value?" And you would very quickly say, "Hey, our value's in software. "We deliver it as mostly as appliances, "but how do we shift, you know, your checkpoint?" Okay, you know, they're already, right, you know, our largest software value or Riverbed, you know, the various software vendors and security as well. Each one of them are having to rethink their strategies and the context of software define. Our customers are saying, "Wow, this is powerful. "The agility and the benefits that I get from it, "they're driving them to go there." >> So what's the key to giving them confidence? Is it transparency? You're sharing roadmaps during integration? >> Yes, yes, yes. >> Anything else? Am I missing anything there? >> You know, also how we work with them and go to market as well. You know, they're expecting from us that, okay, "you know, if this is one of our accounts, "come in and work with us on those accounts as well." So we do have to be transparent. We have to the APIs and enable them to do integration. We have to work with them in terms of enabling their innovation and the context of this platform that we're building. But as we work along the way, we're getting good responses to that. >> Pat, how do you look at the application market? Now with end-user computing, you guys are picking that up. You got Sanjay Poonen coming in and obviously mobile and cloud, we talked about this before on theCUBE, but core IT has always been enabling kind of the infrastructure and then you get what you get from what you have in IT. Now the shift is, application is coming from outside IT. Business units and outside from partners, whether they're resellers. How do you view that tsunami of apps coming in that need infrastructure on demand or horizontally scalable at will? >> Yeah so first point is, yes, right, we do see that, you know, as infrastructure becomes more agile and more self provisioned, right, more aligned to the requirements of applications, we do see that it becomes a tsunami of new applications. We're also working very hard to enable IT to be the friend of the line of business. No longer seen as a barrier, but really seen as a friend, partner enabler of what they're trying to do because many of the, you know, line of businesses have been finding way. You know, how do I get around the slow-moving IT? Well we want to make IT fast-moving and enabling to meet their security, governance, SLA requirements while they're also enabling these powerful new applications to emerge and that to us is what infrastructure is all about for the future is enabling, you know, businesses to move at the speed of business and not have infrastructure being a limiter and as we're doing things, you know, like the big data announcements that we did, enabling infrastructure that's more agility, you see us do more things in the AppDev area over time, and enabling the management tools to integrate more effectively to those environments. Self-service portals that are enabling that and obviously with guys like Sanjay in our mobile initiative, yeah that's a big step up. Don't you like Sanjay? He's a great addition to the team. >> Yeah Sanjay's awesome. He's been great and he has done a lot on the mobile side. Obviously that is something that the end users want. >> That's an interesting way that I put him into that business group first. (group chuckles) >> Well on the Flash side, so under the hood, right? So we look under the hood. You got big data on the dashboard. Everyone's driving this car to the new future of IT. Under the hood, you got Flash. That's changing storage a bit and certainly reconfiguring what a DaaS is and NaaS and SaaS and obviously you talked about vSAN in your key note. What is happening, in your vision, with compute? I mean obviously as you have more and more apps hitting IT, coming in outside core IT but having to be managed by core IT, does that change the computing paradigm? Does it make it more distributed, more software? I mean how do you look at that 'cause that's changing the configuration of say the compute architecture. >> Sure and I mean a couple of things, if you think about the show here that we've done, two of them in particular in this space, one is vSAN, right? A vSAN is creating converged infrastructure that includes storage. Why do you do that? Well now you have storage, you know, apps are about data, right? Apps need data to operate on so now we've created an integrated storage tier that essentially presents an integrated application environment in converged infrastructure. That changes the game. We talked about the Hadoop extension. It changes how you think about these big data applications. Also the Cloud Foundry announcement. Right on/off premise of PaaS layer to uniquely enable applications and as they've done that on the PaaS layer, boy, you don't have to think about the infrastructure requirements to deploy that on or off premise or increasingly as I forecast for the future, hybrid applications, born in the hybrid, not born in the cloud, but born in the hybrid cloud applications that truly put the stuff that belongs on premise on premise, puts the stuff that belongs on the cloud in the cloud, right and enables them to fundamentally work together in a secure operational manner. >> So the apps are dictating through the infrastructure basically on demand resources, and essentially combine all that. >> Absolutely. Right. The infrastructure says, "Here's the services "that I have already, right, in catalogs "that you can immediately take advantage of, "and if this, you fit inside "of these catalogs, you're done." It's self-provisions from that point on and we've automated the operations and everything to go against that. >> So that concept of "born in the hybrid" is a good one. So obviously that's your sweet spot. You're going from a position. >> Yeah and this stupid halfway house hybrid comment. I mean I've never heard something so idiotic before. >> One person, yeah. (group chuckles) >> I don't know, it was probably an Andreessen comment or something, I don't know. (group chuckles) >> He's done good for himself, Marc Andreessen. >> Google and Amazon are obviously going to have a harder time with that, you know, born in the hybrid. What about Microsoft? They got a good shot at born in the hybrid, don't they? >> Yeah, you know and I think I've said the four companies that I think have a real shot to be you know, very large significant players for public cloud infrastructure services. You know, clearly Amazon, you know Google, they have a large, substantive very creative company. Yeah Microsoft, they have a large position. Azure, what they've done with Hyper-V and ourselves, and I think that those, you know the two that sort of have the natural assets to participate in the hybrid space are us and Microsoft at that level, and obviously you know we think we have lots of advantages versus Microsoft. We think we're miles ahead of them and SDDC, right, we think the seamlessness and the compatibility that we're building with one software stack, not two. It's not Azure and Hyper-V. It is SDDC in the cloud and on premise that that gives us significant advantages and then we're going to build these value rate of services on top of it, you know, as we announced with Desktop as a Service, Cloud Foundry as a Service, DR as a service. We're going to quickly build that stack of capabilities. That just gives substantial value to enterprise customers. >> So I got to ask you, talk about hybrid since you brought it up again. So software defined data center software. So what happens to the data center, the actual physical data center? You mentioned about the museum. I mean what is it going to look like? I mean right now there's still power and cooling. You're going to have utility competing with cloud resources on demand. People are still going to run data centers. >> You're talking about the facility? >> Yeah, the actual facility. I'm still going to have servers. This will be an on premise. Do you see that, how do you see that phasing out to hybrid? What does that look like physically for someone to manage? Just to get power, facility management, all that stuff. >> Yeah and in many ways, I think here, the you know, the cloud guys, Googles and Amazons and Yahoos and Facebooks have actually led the way in doing some pretty creative work. These things become you know, highly standardized, highly modularized, highly scalable, you know, very few number of admins per server ratio. As we go forward, these become very automated factories, right, of cloud execution. Some of those will be on premise. Some of those will be off premise. But for the most part, they'll look the same, right, in how they operate and our vision for software defined data center is that software layer is taking away the complexity, right, of what operates underneath it. You know, they'll be standardized, they'll be modularized. You plug in power, you plug in cooling, you plug in network, right, and these things will operate. >> Basically efficient down to the bone. >> Yeah. >> Fully operated software. >> Yeah and you know, people will decide what they put in their private cloud, you know, based on business requirements. SLAs, you know, privacy requirements, data governance requirements, right? I mean in Europe, got to be on premise in these locations and then they'll say, "Put stuff in the public cloud "that allows me to burst effectively. "Maybe a DR because I don't do that real well. Or these applications that belongs in the cloud, right because it's distributive in nature, but keep the data on premise. You know, and really treat it as a menu of options to optimize the business requirements between capex to opex, regulatory requirements, scale requirements, expertise, mission critical and all of those things then are delivered by a sustainable position. Not some stupid hybrid halfway house. A sustainable position that optimizes against the business requirements that they have. >> Let me take one of those points, SLA. Everybody likes to attack Amazon and its SLAs, but in many regards. >> Yeah, I'm glad I got your attention. >> Yeah, that's good, we're going to come back to that John. (group chuckles) >> In my head right now. >> I don't think we're done with that talk track. (laughs) So it's easy to attack Amazon and SLAs, but in essence, the SLA is, to the degree of risk that you're willing to take and put on paper at scale. So how transparent will you be with your SLAs with the hybrid cloud and you know, will they exceed what Amazon and Google have been willing and HP for that matter have been willing to promise at scale? >> Oh yeah, absolutely. I mean we're going to be transparent. The SLAs will have real teeth associated with them, you know, real business consequences for lack of execution against them. You know, they will be highly transparent. You know, we're going to have true, we're going to measure these things and you know, provide uptime commitments, etc. against them. That's what an enterprise service is expected, right? At the end of the day, that's what enterprises demand, right? When you pick up the phone and need support, you get it, right. And in our, the VMWare support is legendary. I'm just delighted by the support services that we offer and the customer response to those is, "Hey you fixed my problem even when "it wasn't your problem and make it work." And that's what enterprise customers want because that's what they have to turn around and commit back to their businesses against all of the other things as well. You know, regulatory requirements, audit requirements, all of those types of things. That's what being an enterprise provider is all about. >> John wants to get that. Talk about public cloud. (Pat laughs) >> I want to talk about OpenStack because you guys are big behind OpenStack. You talk about it as a market expansion. Internally what are some of the development conversations and sales conversations with customers around OpenStack instead of status, what's it doing, how you guys are looking at that and getting involved? >> Yeah, you know, we've clearly said you know, that you have to think about OpenStack in the proper way. OpenStack is a framework for building clouds, and you know, for people who are wanting to build their own cloud as opposed to get the free package cloud, right, you know, this is our strategy to enable those APIs, to give our components to those customers to help them go build it, right and those customers, largely are service providers, internet providers who have unique scale, integration and other requirements and we're finding that it's a good market expansion opportunity for us to put our components in those areas, contribute to the open source projects where we truly have IP and can differentiate for it like at the Hyper-Visor level, like at the right networking layer and it's actually going pretty well. You know, in our Q2 earnings call, you might recall, you know, I talked about that our business with the public OpenStack customers was growing faster than the rest of our business. That's pretty significant, right, to say, "Wow, if it's growing faster, "that says the strategy is working." Right, and we are seeing a good response there and clearly we want to communicate. We're going to continue that strategy going forward. >> And the installed base of virtualization is obviously impressive and the question I want to ask you is how do you see the evolution of the IT worker? I mean they have the old model, DBA, system admins, and then now you have data science on the big data side so with software defined data center, the virtualization team seems to be the center point for that. What roles do you see changing with hybrid cloud and software defined data center and user computing? >> Well I think sort of the theme of our conference is defy convention. Right and why do we do that? Because we really see that the, you know, the virtual admin and the virtual infrastructure that they have really become the center of IT. Now we need the competence of networking, the security guys, the database guys, but that now has to happen in the context, right, of a virtualized environment. DBA doesn't get to control his unique infrastructure. The Hadoop guy doesn't get his own unique infrastructure. They're all just workloads that run on this virtualized infrastructure that is increasingly adept and adaptable, right, to these different workload areas and that's what we see going forward as we reach into these new areas and the virtual admin, he has to go make best buddies with the networking guy and say, "Let me talk to you about virtual networking "and how we're going to cross between the virtual overlay "domain and the physical domain and how these things "are going to stitch together for making your job better "right, and delivering a better solution "for our line of business and for our customers." >> One thing you did to defy convention is get on stage with Marc Andreessen. So I want to talk about that a little bit. You guys had I would call it, you know, slight disagreements and, into the future. >> Just a little. >> But I thought you were kind to him. And he said, you know, "No startup that I work with "is going to buy any servers." And I thought you were going to add, no never mind. I won't even go there. (group laughs) I won't even go there, I want to be friends. No so talk about that a little bit, that discussion that you had. Your view of the world and Marc's. How do you respond to that statement? Do they grow up into VMWare customers? Is that the obvious answer? >> I mean I have a lot of regard. You know, Marc and I have known each other for probably close to two decades now and you know, we partnered and sparred together for a long time and he's a smart, successful guy and I appreciate his opinions. You know, but he takes a very narrow view, right, of a venture seed fund, right, who is optimizing cashflow, and why would they spend capital on cashflow when they can go get it as a service? That's exactly the right thing for a very early stage startup company to do in most cases, right? Marc driving his customers to do that makes a lot of sense, but at the end of the day, right, if you want to reach into enterprise customers, you got to deliver enterprise services, right? You got to be able to scale these things. You got to be cost-effective at these things and then all the other aspects of governance, SLAs, etc. that we already talked about. So in that view, I think Marc's view is very perspective. >> Also Zynga and those guys, when they grew up on Amazon, they went right to bare metals as soon as they started scale. >> They had to bring it back in right 'cause they needed the SLAs, they needed the cost structures. They wanted to have the controls of some of those applications. >> And rental is more expensive at the end of the day. >> There you go. Somebody's got to pay the margins, right, you know, on top of that, to the providers so you know, I appreciate the perspective, but to me it is very narrow and periconchal to that point of view and I think the industry is much broader and things like policy and regulation are going to take decades, right? Not years, you know, multiple decades for these things to change and roll out to enable us a mostly public cloud world ever, right, and that's why I say I think the hybrid is not a waystation, right? It is the right balance point that gives customers flexibility to meet their business demands across the range of things and Marc and I obviously, we're quite in disagreement over that particular point. >> And John once again, Nick Carr missed the mark. We made a lot of money. >> I think Marc Andreessen wants to put a lot of money into that book. Everyone could be the next Facebook where you you know, you build your own and I think that's not a reality in enterprise. They kind of want to be like Facebook-like applications, but I wanted to ask you about automation. So we talked to a lot of customers here in theCUBE and we all asked them a question. Automation orchestration's at the top of the stack. They all want it, but they all say they have different processes and you really can't have a general purpose software approach. So Dave and I were commenting last night when we got back after the NetApp event was you know, you and Paul Murray were talking in 2010 around this hardened top when you introduced that stack and with infrastructure as a service, is there a hardened top where functionality is more important than which hardware you buy so you can enable some of those service catalogs, some of those agility features in automation because every customer will have a different process to be automated. >> Yeah. >> And how do you do that without human intervention? So where is that hardened top now? I mean is it platform as a service or is it still at the infrastructure as a service model? >> Yeah, I think clearly the line between infrastructure as a service and platform as a service will blur, right, and you know, it's not really clear where you can quite draw that line. Also as we make infrastructure more application aware, right, and have more application development services associated with it, that line will blur even more. So I think it's going to be hard to call, you know, "Here's that simple line associated with it." We'd also argue that in this world that customers, they have heterogeneous tools that they need to work with. Some will have bought in a big way into some of the legacy tools and as much as we're going to try help them move past some of those brittle environments, well that takes a long time as well. I'd also say that you know, it's the age of APIS, not UIs, and for us it's very much to expose our value through programmatic interfaces so customers truly can have the flexibility to integrate those and give them more choice even as we're trying to build a more deeply integrated and automated stack that meets a general set of needs for customers. >> So that begs the question, at the top of the stack where end user computing's going to sit and you're going to advance that piece, what's, what's the to do item for you? What needs to happen there? Is it, on a scale of one to 10, 10 being fully baked out, where is it, what are the white spaces that need to be tweaked either by partners or by VMWare? >> Yeah and I think we're pretty quickly finishing the stack with regard to the traditional PC environments and I think the amount of work to do for the mobile environment is still quite enormous as we go forward and in that, you know, we're excited about Horizon getting some good uptake, a number of partner announcements this week, but there's a lot to be done in that space because people want to be able to secure apps, provision apps, deprovision apps, have secure work spaces, social experiences, a rich range of integrations to the authentication devices associated with it to be able to have applications that are developed in that environment that access this hybrid infrastructure effectively over time, be able to self-compose those applications, put them into enterprise, right, stores and operations, be able to access this big data infrastructure. There's a whole lot of work to be done in that space and I think that'll keep us busy for quite a number of years. >> This is great. We're here with Pat Gelsinger inside theCUBE. We could keep rolling until we get to the hook, but a couple more final questions is the analogy of cloud has always been like the grid, electricity. You kind of hinted to this earlier. I mean is that a fair comparison? The electricity's kind of clean and stable. We have an actual national grid. It doesn't have bad data and hackers coming through it so is that a fair view of cloud to kind of look, talk about plugging electricity in the wall for IT. >> I think that is so trite, right? It came up in the panel we had with Andreessen, Bechtolsheim, Graeme, and myself because you know, it's so standardized. 120 volts AC right and hey you know, maybe it gets distributed as four, 440, three phase, but you know, it is so standardized. It hasn't moved. Sockets standards, right, you're done. Think how fast this cloud world is evolving. Right the line between IA as in PaaS as we just touched upon, the services that are being offered on top of it. >> Security, security. >> Yeah, yeah, all these different things. To me, it is such a trite, simple analogy that has become so used and abused in the process that I think it leads people to such wrong conclusions right, about what we're doing and the innovation that's going on here and the potential that we're going to offer. So I hope that every one of our competitors takes that and says, "That's the right model." Because I think it leads them to exactly the wrong conclusion. >> I couldn't agree more. The big switch is a big myth. I wanted to get tactical for a minute. I listened to your conference calls. I can't wait to read the transcript. I just go, I got to listen to the calls, but just observing those and the conversations around here, I just wanted to ask you. I always ask CEOs, "What keeps you up at night?" They always say execution so let's focus on execution in the next 12 to 18 months. I came up with the following. "To maintain dominance in vSphere, "get revenue beyond vSphere, "broaden end user license agreements, "increase end user computing adoption "and proof points around hybrid cloud." Are those the big ones? Did I miss anything? >> That's a good list. >> Yeah? >> That's a good list. >> So those are the things an observer should watch in let's say 12 to 18 months of indicators of success and of what you're doing and what you're driving. >> Yeah and you know, clearly inside of that, with SDDC, obviously we think this environment for networking, right, and what we've really, I'll say delivered that. That would be one in particular inside of that category that we would call out you know, with regard to our hybrid cloud strategy. It's clearly globalizing that platform. Right, we announced Savvis here, but we need to make this available on a global basis. You go to an enterprise customer and they're going to say, "I need services in Japan, I need services in Singapore. "I need to be able to operate in a global basis." So clearly having a platform, building out the services on top of it is another key aspect of building those hybrid user cases and more of the value on top of it and then in the EUC space, we touched a bit on the mobile thing already. >> So we'll have Martin on later, but his PowerPoint demonstration. >> What a rockstar, what a rockstar. >> He is a rockstar and we've had him on before. He's fantastic, but his PowerPoint demonstration is very simple, made it seem so simple. It's not going to be that easy to virtualize the network. Can you talk about the headwinds there and the challenges that you have and the things that you have to do to actually make progress there and really move the needle? >> Yeah it really sort of boils down in two aspects. One is we are suggesting that there will be a software layer for networking that is far more scalable, agile and robust than you can do in a physical networking layer. That's a pretty tall order, right? I need to be able to scale to tens, hundreds, millions of VMs, right? I need to be able to scale to terabytes of cross-sectional packet flow through this. I need to be able to deliver services on top of this, right, that truly allow firewalls, load balancers, right, IDSes, all of those things to be agile, scale. Yeah, it is ambitious. >> Ambitious. >> This is, right, the most radical, architectural statements in networking in the last 20 or 30 years and that's what gets Martin passionate. So there's a lot of technical scale and we really feel good about what we've done, right, but being able to prove that with robust scalability, right, for which like the Hyper-Visor, it is more reliable than hardware today, in being able to make that same statement about NSX that just like ESX, it is better than hardware, right, in terms of its reliability, its resilience. That's an important thing for us to accomplish technically in that space, but then the other pieces, showing customer value, right? Getting those early customers and what a powerful picture. GE, Citigroup and eBay, right? It's like wow, right? These are massive customers, right, and being able to prove the value and the use cases in the customer settings, right, and if we do those two things, you know, we think that truly we all have accomplished something very very special in the networking domain. >> Pat, talk about the innovation strategy. You've been now a year under your belt at VMWare and you were obviously with EMC and Intel and we mentioned on theCUBE many times, cadence of Moore's Law was kind of the culture of Intel. Why don't you tell us about the innovation strategy of VMWare going forward, your vision, but also talk about the culture and talk about the one thing that VMWare has from a culture that makes it unique and what is that unique feature of the VMWare culture? >> We spent time as a team talking about what is it that drives our innovation, that drives our passion, and clearly as we've talked about our values as a team, it is very much about this passion for technology and passion for customers and how those two coming together, right, with fundamental disruptive "wow" kind of technologies where people just say, like they did when they first used ESX and they say, "Wow, I just didn't ever envision "that you could possibly do that." And that's the experience that we want to deliver over and over again, right, so you know, hugely disruptive powerful software driven virtualization technologies for these domains, but doing it in a way that customers just fall in love with our technologies and you know as, I got a note from Sanjay and I just asked him, "You know, what do you think of VMWorld?" And he said, right, "It is like a cult geek fest." Right, because there's just this deep passion around what people do with our technology, right, and they're not even at that point, they're not customers, they're not partners. They are deeply aligned passionate zealots around what we are doing to make their lives so much more powerful, so much more enabled, right, and ultimately, a lot more fun. >> People say it's like being a car buff. You know, you got to know the engine, you want to know the speeds and feeds. It is a tech culture. >> Yeah, it is absolutely great. >> Pat, thanks for coming on theCUBE. We scan spend a lot of time with you. I know we went a little over. I appreciate your time. Always great to see you. >> Great to see you too. >> Looking good. >> Thank you for that. >> Tech Athlete Pat Gelsinger touching all the bases here. We saw him last night at AT&T Park. Great event here, VMWare World 2013. This is theCUBE. We'll be right back with our next guest after this short break. Pat Gelsinger, CEO on theCUBE.

Published Date : Aug 28 2013

SUMMARY :

at VMWare and great to see you again. Thank you, thank you. running the show here. What have you done and obviously, for the industry and you know, in the EMC world when you were there, and the NSX announcement, in the cadence of you know, no longer bound to you the first thing you did and as you go look at these new areas, and the ecosystem and the hybrid cloud, I No it is the endgame. To to full all-utility computing. I don't mean it that way. a hole, stop digging, buddy. in the part of what applications bigger than your market cap. Yeah, we're out to fix the market cap. things like you know, and embodiment into the software-defined a piece of the storage stack and the context of software define. and go to market as well. from what you have in IT. and enabling the management that the end users want. into that business group first. Under the hood, you got Flash. on the PaaS layer, boy, you So the apps are dictating and everything to go against that. in the hybrid" is a good one. Yeah and this stupid (group chuckles) I don't know, it was He's done good for with that, you know, born in the hybrid. shot to be you know, You mentioned about the museum. see that phasing out to hybrid? the you know, the cloud Yeah and you know, people will decide Everybody likes to attack going to come back to that John. but in essence, the SLA and the customer response to those is, Talk about public cloud. the development conversations and you know, for people and the question I want to ask you is and the virtual admin, he You guys had I would call it, you know, Is that the obvious answer? but at the end of the day, right, Also Zynga and those guys, They had to bring it back in right at the end of the day. and periconchal to that point of view Nick Carr missed the mark. after the NetApp event was you know, be hard to call, you know, as we go forward and in that, you know, You kind of hinted to this earlier. but you know, it is so standardized. and abused in the process in the next 12 to 18 months. and of what you're doing and more of the value on top of it So we'll have Martin on later, and the things that you have to do I need to be able to scale and if we do those two things, you know, and you were obviously with EMC and Intel so you know, hugely disruptive You know, you got to know the engine, Always great to see you. right back with our next guest

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MarcPERSON

0.99+

Marc AndreessenPERSON

0.99+

DavePERSON

0.99+

David VilantePERSON

0.99+

Steve HerrodPERSON

0.99+

StevePERSON

0.99+

CarlPERSON

0.99+

AmazonORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

JapanLOCATION

0.99+

Pat GelsingerPERSON

0.99+

CitigroupORGANIZATION

0.99+

twoQUANTITY

0.99+

2010DATE

0.99+

OracleORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

SingaporeLOCATION

0.99+

John FurrierPERSON

0.99+

EuropeLOCATION

0.99+

2012DATE

0.99+

GEORGANIZATION

0.99+

NiciraORGANIZATION

0.99+

Paul MurrayPERSON

0.99+

AndreessenPERSON

0.99+

JohnPERSON

0.99+

Pat GelsingerPERSON

0.99+

tensQUANTITY

0.99+

Nick CarrPERSON

0.99+

EMCORGANIZATION

0.99+

50 billionQUANTITY

0.99+

ZyngaORGANIZATION

0.99+

eBayORGANIZATION

0.99+

GartnerORGANIZATION

0.99+

Sanjay PoonenPERSON

0.99+

AmazonsORGANIZATION

0.99+

10-yearQUANTITY

0.99+

12QUANTITY

0.99+

YahoosORGANIZATION

0.99+

14 billionQUANTITY

0.99+

$50 billionQUANTITY

0.99+

$1.7 trillionQUANTITY

0.99+

MartinPERSON

0.99+

120 voltsQUANTITY

0.99+

GooglesORGANIZATION

0.99+

two guestsQUANTITY

0.99+

eight billionQUANTITY

0.99+

GeneralcatalystORGANIZATION

0.99+

FacebooksORGANIZATION

0.99+

IntelORGANIZATION

0.99+

John McAdamPERSON

0.99+

FacebookORGANIZATION

0.99+

100%QUANTITY

0.99+

Jerry ChenPERSON

0.99+

PowerPointTITLE

0.99+

CUBEORGANIZATION

0.99+

NSXORGANIZATION

0.99+

ESXTITLE

0.99+

18 monthsQUANTITY

0.99+

SanjayPERSON

0.99+

two thingsQUANTITY

0.99+

yesterdayDATE

0.99+

four legsQUANTITY

0.99+

440QUANTITY

0.99+

oneQUANTITY

0.99+

last yearDATE

0.99+

23,000 peopleQUANTITY

0.99+

28 billion dollarsQUANTITY

0.99+

PatPERSON

0.99+