Kubernetes on Any Infrastructure: Top-to-Bottom Tutorials for Docker Enterprise Container Cloud
>>All right, we're five minutes after the hour, so all aboard, who's coming aboard? Welcome, everyone, to the tutorial track for our launchpad event. For the next couple of hours, we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my co-emcee for the session. I'm Bill Mills; I run curriculum development for Mirantis. And
>>I'm Bruce Basil Matthews. I'm the Western regional solutions architect for Mirantis, and welcome, everyone, to this lovely launchpad event.
>>We're lucky to have you with us, Bruce; at least somebody on the call knows something about Docker Enterprise Container Cloud. Speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us. Docker Enterprise Container Cloud: this is Mirantis's brand-new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce?
>>No, just that I think we're trying to give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some, you know, mini training and education in a very condensed period. So,
>>yeah, that's exactly what you're going to see. In the series of videos we have today, we're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today, you're gonna be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So just a little bit of logistics for the session.
We're going to run through these tutorials twice. We're gonna do one run-through starting seven minutes ago up until, I guess, ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, at ten fifteen Pacific time we're gonna do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues that you wanna pull in for a second chance to see this stuff, we're gonna do it all twice. Any logistics I should add, Bruce?
>>No, I think that's pretty much what we had to nail down here. But let's zoom into those, uh, feature films.
>>Let's do it. And like I said, don't be shy: feel free to ask questions in the chat. Our engineers, and Bruce and myself, are standing by to answer your questions. So let me just tee up the first video here. Our first video is gonna be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as, like, your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use as, like, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there now, after a brief little pause.
>>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster. The child cluster is the cluster or clusters being deployed and managed.
The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. Third phase: prepare the AWS credentials and set up the AWS environment. The fourth: configuring the deployment, defining things like the machine types. And the fifth phase: run the bootstrap script and wait for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go; there are no credentials already set up on that particular node. Now we're just checking through AWS to make sure that the account we want to use has the correct credentials and the correct roles set up, and validating that there are no instances currently set up in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right, so next step, we're just going to check that we can, from the bootstrap node, reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors here. Right, now we're going to start setting up the bootstrap node itself. So we're downloading the Container Cloud release bootstrap script, and next we're going to run it and deploy it. Changing into that bootstrap folder, just checking what's there. Right now we have no license file, so we're gonna get the license file through the Mirantis downloads site: signing up here, downloading that license file, and putting it into the bootstrap folder. Okay, once we've done that, we can now go ahead with the rest of the deployment. See that the file is there. That's again checking that we can now reach EC2, which is extremely important for the deployment. Just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials.
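The pre-flight checks narrated above can be sketched from a terminal. The exact commands used on screen aren't shown in the demo, so the following uses standard AWS CLI equivalents rather than the product's own tooling; the region and repository hostname are assumptions, and the live calls are guarded so the sketch is harmless to run anywhere.

```shell
# Hedged sketch of the bootstrap-node pre-flight checks described in the demo.
REGION="us-west-1"   # assumption: the region used later in the walkthrough

# Only attempt the live AWS checks if the AWS CLI is actually installed.
if command -v aws >/dev/null 2>&1; then
  # Confirm which account and role the configured credentials resolve to.
  aws sts get-caller-identity --region "$REGION"
  # Sanity-check that no stray EC2 instances are already running in the account.
  aws ec2 describe-instances --region "$REGION" \
    --query 'Reservations[].Instances[].InstanceId'
else
  echo "aws CLI not installed; skipping live checks"
fi

# Confirm the bootstrap node can reach the Mirantis repositories
# (hostname is an assumption; check the product docs for the real mirror list).
MIRROR="https://binary.mirantis.com"
if command -v curl >/dev/null 2>&1 && curl -sSfI "$MIRROR" >/dev/null 2>&1; then
  echo "repository reachable"
else
  echo "repository check skipped or failed"
fi
```

None of this is required by the product itself; it is the "clean and tidy" validation habit the narrator recommends before kicking off the bootstrap.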
So the first thing is, we need those root credentials, which we're going to export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running an AWS policy create, so part of that is creating our bootstrap user and creating the necessary policy files on top of AWS, just generally preparing the environment using a CloudFormation script, which you'll see in a second. Policy confirmations: just waiting for it to complete. And there it's done. If we go and have a look at the AWS console, you can see that the create has completed. Now we can go and get the credentials that we created. In the IAM console, go to that new user that's been created, go to the section on security credentials, and create new keys. Download that information: the access key ID and the secret access key. These are then exported on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put the correct AMI for that region. You'll see that come together in a second. Okay, that's the access key and secret access key. Right, let's kick it off. So this process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down: essentially moving itself over. Okay, local cluster's built; just waiting for the various objects to get ready.
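The credential export and kickoff described above boil down to a few lines of shell. The environment variable names are the standard AWS ones; the bootstrap script name follows the product documentation, but treat the paths and the placeholder values as illustrative, not as the exact session from the demo.

```shell
# Hedged sketch of the credential export and bootstrap kickoff.
# Replace the placeholder values with the keys downloaded from the IAM console.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"        # access key ID of the new user
export AWS_SECRET_ACCESS_KEY="example-secret-key"  # matching secret access key
export AWS_DEFAULT_REGION="us-west-1"              # must match the AMI's region in the config

# Kick off the deployment from inside the extracted bootstrap folder,
# next to the license file. Guarded so this sketch is safe to run anywhere.
if [ -x ./bootstrap.sh ]; then
  ./bootstrap.sh all   # runs roughly 30-45 minutes end to end
else
  echo "bootstrap.sh not found: run this from the extracted bootstrap folder"
fi
```

The region/AMI pairing is the one thing the narrator calls out twice: a mismatch between the exported region and the AMI in the config file is the most common way for the machine builds to fail.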
Standard Kubernetes objects here. Okay, so we speed up this process a little bit just for demonstration purposes. There we go. So the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds, we'll see those instances here in the AWS console on the right. The failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay, and there we go: the bastion host has been built, and three instances for the management cluster have now been created. We're going through the process of preparing those nodes, and we're now copying everything over. See that scaling up of controllers in the bootstrap cluster? It's indicating that we're starting all of the controllers in the new cluster. Almost there; just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight, the monitoring tool set, into the new cluster. There we go: the StackLight deployment has started. Coming to the end of the deployment now. Final phase of the deployment, and we are done. Okay, you'll see at the end they're providing us the details of the UI login, so there's a Keycloak login. You can modify that initial default password as part of the configuration setup; it's in the documentation. The console's up, we can log in. Thank you very much for watching.
>>Excellent. So in that video, our wonderful field CTO Shaun O'Meara bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? So now we've got this management cluster installed; like, what's next?
>>So primarily it's the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Where this comes into play, the next piece of what we're going to show, I think, with Shaun O'Meara doing this, is the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the management cluster we just created with the bootstrap.
>>Right. So this cluster isn't yet for workloads; this is just for bootstrapping up the downstream clusters. Those are what we're gonna use for workloads.
>>Exactly. Yeah. And I just wanted to point out, since Shaun O'Meara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting, but anyway, you can tell him I said that.
>>He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Shaun was describing there: that bootstrapper node that you, like, ran Docker Enterprise Container Cloud from to begin with, that's actually creating a kind, a Kubernetes-in-Docker, deployment locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes, like, a three-manager Kubernetes cluster there, and then it, like, copies itself over to those Kubernetes managers.
>>Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says "I'm pivoting": I'm pivoting from my local kind deployment of the cluster API to the cluster that's being created inside of AWS, or, quite frankly, inside of OpenStack, or inside of bare metal. The targeting is abstracted.
>>Yeah, and those are the three environments that we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need that afterwards. Yeah, that is just temporary.
To get things bootstrapped; then you manage things from the management cluster on AWS, in this example?
>>Yeah. The seed cloud that hosted the bootstrap is not required anymore, and there's no interplay between them after that. So there are no dependencies on any of the clouds that get created thereafter.
>>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's, uh, sort of a similar temporary, transient bootstrapping model. Cool. Excellent. What about the config there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters, like credentials and region and stuff like that, but other than that, it looked heavily scriptable; like, there wasn't a ton of point-and-click there.
>>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted towards a small, medium, or large deployment. And by editing that single file, and then gathering the license file and all of the things that Shaun went through, that makes it fairly easy to script.
>>And if I understood correctly as well, that three-manager footprint for your management cluster, that's the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to see that go down.
>>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer.
>>I think that's a theme that we'll come back to throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you. Like, the defaults are just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on. Any interesting questions you want to call out from the chat, Bruce?
>>Well, there was, yeah, there was one that we had responded to earlier, about the fact that it's a management cluster that can then deploy either the regional cluster or a local child cluster. The child clusters, in each case, host the application services.
>>Right. So at this point, we've got, in some sense, like, the simplest architecture for our Docker Enterprise Container Cloud. We've got the management cluster, and we're gonna go straight to a child cluster in the next video. There's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, if you need to manage regions, like across AWS regions, availability zones, anything like that.
>>Yeah, that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves, and to take advantage of our observability support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them.
>>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That's all in the docs, which, thank you, Dale helpfully provided links for; that's all publicly available right now. So just head on into the docs, like the links Dale provided here. You can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials.
There was a question from an attendee about deploying this to Azure. Not at GA, not at this time.
>>Yeah, although that is coming. That's going to be in a very near-term release.
>>I didn't want to make promises for product, but I'm not too surprised that Azure's gonna be targeted very soon. Cool. Okay. Any other thoughts on this one, Bruce?
>>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that the gentleman had put into the chat, giving you the step-by-step, it makes it fairly straightforward to try this yourselves.
>>I strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in their mind of, like, where we are in the life cycle here, creating this management cluster. Just stop me if I'm wrong, Bruce: creating this management cluster is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to start seeing next is creating child clusters, and this is what you're gonna be doing over and over and over again: when you need to create a cluster for this dev team or, you know, this other team, whoever it is that needs commodity Docker Enterprise clusters, you create these easily and at will. So: that was once, to set up Docker Enterprise Container Cloud. Child clusters, which we're going to see next, we're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster on Docker Enterprise Container Cloud.
>>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary.
Let's go through the navigation of the UI. You can switch projects; Mary only has access to development. You get a list of the available projects that you have access to, what clusters have been deployed at the moment (there are none yet), the SSH keys associated with Mary and her team, the cloud credentials that allow you to create and access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply: you go to the clusters tab, we hit the create cluster button, give the cluster a name, and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1; release version five point seven is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information. We can change this should we wish to; we'll leave it default for now. And then: what components of StackLight would I like to deploy into my cluster? For this, I'm enabling StackLight and logging, and I can set up the retention sizes and retention times, and even at this stage add any custom alerts for the watchdogs. I can set up email alerting, for which I will need my smarthost details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined; I now need to add machines to that cluster.
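The SSH key Mary pastes into the upload block has to exist first. A minimal sketch of generating a throwaway key pair locally; the key path and comment here are illustrative, not anything the product requires.

```shell
# Generate a demo SSH key pair whose public half can be pasted into the UI.
KEY_PATH="./mary_demo_key"
rm -f "$KEY_PATH" "${KEY_PATH}.pub"   # start clean so ssh-keygen doesn't prompt

if command -v ssh-keygen >/dev/null 2>&1; then
  # -N '' means no passphrase, which is fine for a throwaway demo key.
  ssh-keygen -t ed25519 -N '' -C "mary@example.com" -f "$KEY_PATH" -q
  echo "public key to paste into the upload key block:"
  cat "${KEY_PATH}.pub"
else
  echo "ssh-keygen not available; generate the key pair on your workstation"
fi
```

Only the `.pub` half goes into the UI; the private half stays on your workstation and is what lets you SSH into the nodes later.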
I'll begin by clicking the create machine button within the cluster definition. I select manager, and select the number of machines: three is the minimum. I select the instance size that I'd like to use from AWS, and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go: my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMI is extremely important; the build will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. On the basic cluster info you'll see "pending" there: the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary NAT gateways, and we have no warnings at this stage. This will then run for a while. One minute in, we can click through, and we can check the status of the machine builds individually: we can check the machine info, details of the machines that we've assigned, and see any events pertaining to each machine, like this one, all normal. Just watching as the Kubernetes components wait for the machines to start. Go back to clusters. Okay, right, we're moving ahead now. We can see we have it in progress: five minutes in, new NAT gateway at this stage, and the machines have been built, assigned, and are picking up their IPs. There we go: the machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit.
This whole process, end to end, takes about fifteen minutes. As we run the clock forward, you'll notice, as the machines continue to build, they go from in progress to ready. As soon as we've got ready on all three managers and both workers, we can go on, and we can see that now we've reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. So once the cluster is deployed, we can now navigate around our environment. Okay, we can click into configure cluster, we can modify the cluster, we can get the endpoints for Alertmanager. See here, the Grafana pod and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it; it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary so that I can access that cluster. All right, now that the build is fully completed, we can check out cluster info, and we can see that all of the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. We click the SSO sign-in button, and we give Mary's username and password once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard. You can see that it has been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has just been automatically pre-configured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster: so, for example, Kubernetes cluster information, the namespaces, deployments, nodes. So we look at nodes.
Here we get a view of the resource utilization of this cluster; there's very little running in it. A general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs, or add your own dashboards, and these are scoped to the cluster, so they are available to all users who have access to this specific cluster. All right, to scale the cluster and add a node is as simple as the process of adding a node to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and we select create machine. Once again, we need to ensure that we put the correct AMI in, and any other options we like. You can create different-sized machines, so it could be a larger node, could be bigger disks. And you'll see that the worker has been added, moving from the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again, we go to the cluster, we select the node we'd like to remove, and I just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method, to ensure that your workloads are not affected. Updating a cluster: when an update is available, in the menu for that particular cluster, the update button will become available, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the next available release is five point seven point one. Here I'm kicking off the update. In the background, we will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; two, in fact, in this case, one of which has completed already. And in a few minutes we'll see that the upgrade has been completed. There we go: upgrade done.
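The kubeconfig download and the cordon-and-drain node removal shown above map onto completely standard kubectl usage, which is worth seeing because it is what the product is doing on your behalf. A hedged sketch: the node name is hypothetical, the kubeconfig is the file downloaded from the three-dots menu, and the live calls are guarded so the sketch is safe to run without a cluster.

```shell
# Standard kubectl equivalents of what the demo shows; illustrative only.
KUBECONFIG_FILE="./kubeconfig"                    # the file downloaded from the UI
NODE="ip-10-0-1-23.us-west-1.compute.internal"    # hypothetical worker node name

if command -v kubectl >/dev/null 2>&1 && [ -f "$KUBECONFIG_FILE" ]; then
  # Point kubectl at the child cluster and list its nodes.
  kubectl --kubeconfig "$KUBECONFIG_FILE" get nodes

  # What happens for you on scale-down or update: stop new scheduling on the
  # node, then evict its pods gracefully before the machine is touched.
  kubectl --kubeconfig "$KUBECONFIG_FILE" cordon "$NODE"
  kubectl --kubeconfig "$KUBECONFIG_FILE" drain "$NODE" \
    --ignore-daemonsets --delete-emptydir-data
else
  echo "kubectl or kubeconfig not present; sketch is illustrative only"
fi
```

Drain respects pod disruption budgets and replica counts, which is exactly why, as the video says next, properly built cloud-native workloads see no impact.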
If your workloads are built using proper cloud-native community standards, there will be no impact.
>>Excellent. So at this point, we've now got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to that cluster. So, watching that video, the thing that jumped out to me at first was, like, the inputs that go into defining this workload cluster. All right, so we have to make sure we're using an appropriate AMI; that kind of defines the substrate that we're gonna be deploying our cluster on top of. But there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is gonna bootstrap all the components that you need. All we have is kind of a really simple base box that we were deploying these things on top of. So, one thing that didn't get dug into too much in the video, but it's sort of implied, and Bruce, maybe you can comment on this, is that release that Shaun had to choose for his cluster in creating it. And that release was also the thing we had to touch when we wanted to upgrade the cluster. If you had really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components: Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters that we deploy. And so, as far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that, you know, we've tested out and made sure work well in production environments.
>>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that there are fixes, that they're documented and upstreamed to the open source community, and that, you know, we then test for the scaling ability and the reliability in high-availability configurations for the clusters themselves, the hosts of your containers. And I think one of the key, you know, benefits that we provide is that ability to let you know, online: "Hi, we've got an update for you, and it fixes something that maybe you had asked us to fix." That all comes to you online as you're managing your clusters, so you don't have to think about it. It just comes as part of the product.
>>You just have to click on "Yes, please give me that update." And it's not just the individual components, but again, it's that validated stack, right? Not just, you know, components X, Y, and Z work, but they all work together: effective, scalable, secure, reliable. Cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old Universal Control Plane, Docker Enterprise, on top of that. Shaun had the classic comment there, you know: yeah, you'll see a few little warnings and errors or whatever when you're setting up UCP. Don't panic, right? Just let it do its job, and it will converge all its components, you know, after just a minute or two. We saw in that video that we sped things up a little bit, just so we didn't wait for, you know, progress spinners to complete. But really, in real life, that whole process to spin up one of those clusters is quite quick.
>>Yeah, and I think the thoroughness with which it goes through its process, and retries and retries, as, you know, was evident when we went through the initial video of the bootstrapping as well: the processes themselves are self-healing as they are going through. So they will try and retry and wait for the event to complete properly, and once it's completed properly, then it will go to the next step.
>>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down; don't do that. Just let it heal, let it take care of itself. And that's the beauty of these managed solutions: they bake in a lot of subject-matter expertise, right? The decisions that are getting made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools over years and years of working with Kubernetes. One cool thing there that I really appreciated, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. So Docker Enterprise, I think everyone knows, has had, like, some very high-level statistics baked into its dashboard for years and years now. But you know, our customers always wanted to double-click on that, right, to be able to go a little bit deeper. And Grafana really addresses that with its built-in dashboards. That's what's really nice to see.
>>Yeah, and all of the alerts and data are actually captured in a Prometheus database underlying that, which you have access to, so that you're allowed to add new alerts that then go out to, say, Slack and say, "Hi, you need to watch your disk space on this machine," or those kinds of things. And this is especially helpful for folks who, you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house.
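The disk-space alert Bruce describes is, underneath, a standard Prometheus alerting rule. A hedged sketch of what such a rule looks like; the metric names are standard node-exporter ones, but the threshold, labels, and the mechanism by which StackLight picks up custom rules are assumptions to check against the product docs.

```shell
# Write a sample Prometheus alerting rule of the kind Bruce describes.
# Illustrative only: how StackLight ingests custom rules is in the docs.
cat > disk-alert-rule.yaml <<'EOF'
groups:
  - name: custom.disk
    rules:
      - alert: NodeDiskAlmostFull
        # Standard node-exporter metrics; fires when root FS drops below 10% free.
        expr: |
          node_filesystem_avail_bytes{mountpoint="/"}
            / node_filesystem_size_bytes{mountpoint="/"} < 0.10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Disk on {{ $labels.instance }} is below 10% free"
EOF

echo "wrote sample rule: $(wc -l < disk-alert-rule.yaml) lines"
```

Once a rule like this fires, routing it to email or Slack is Alertmanager configuration, which is what the smarthost and Slack fields in the cluster-creation form were feeding.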
So it gives them a tool set where they can easily say, "Here, can you watch these for us?" And Mirantis can actually help do that with you. So,
>>yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without a long runway of learning how to do that sort of thing; you get it out of the box right away. There was another thing, actually, that you could sleep through really quickly if you weren't paying close attention, but Shaun mentioned it in the video, and that was how, when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just, like, tear that worker down and forget about it, right? It's using good Kubernetes best practices to cordon and drain the node. So you aren't gonna disrupt your workloads; you're not going to just have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; it's baked right into how Docker Enterprise Container Cloud handles cluster scaling.
>>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques that ensure that it will tell you: wait, you've got a container that actually needs three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two. And we can't do that; we can't allow that.
>>Okay, very cool. Further thoughts on this video, or should we go to the questions?
>>Let's go to the questions
>>that people have. There's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos we're clicking through this web UI. Yes, this is all API-driven. You could do all of this, you know, automate all of this away as part of your CI/CD chain. Absolutely. Um, that's kind of the point, right? We want you to be able to spin up...
that people have. There's a good one here, down near the bottom, regarding whether an API is available to do all this. So in all these demos we're clicking through the web UI, but yes, this is all API-driven. You could do all of this programmatically, automate it all away as part of your CI/CD chain. Absolutely — that's kind of the point, right? We want you to be able to spin up, I keep calling them commodity clusters: what I mean by that is clusters that you can create and throw away easily and automatically. So everything you see in these demos is exposed via the API. >>Yeah, and in addition, through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to set things up and deploy your applications, you can use the standard toolsets that are available to accomplish that. >>There's a good question on scale here: just how many clusters, and what sort of scale of deployments, can this kind of thing support? Our engineers report back that we've done, in practice, up to as many as two hundred clusters, and we've deployed this with two hundred fifty nodes in a cluster. So, like I said, hundreds of nodes, hundreds of clusters managed by Docker Enterprise Container Cloud. And then those downstream clusters are, of course, subject to the usual constraints for Kubernetes, right? Like the default constraint of around one hundred pods per node, or something like that. There are a few limitations on how many pods you can run on a given cluster that come to us not from Docker Enterprise Container Cloud but from the underlying Kubernetes distribution. >>Yeah, I mean, I don't think that we constrain any of the capabilities that are available in the infrastructure delivery service within the Kubernetes framework. But we are adhering to the standards that we'd want to set, to make sure that we're not overloading a node, those kinds of things. >>Right. Absolutely. Cool. Alright, so at this point we've got kind of a two-layered architecture: we have our management cluster that we deployed in the first video.
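The cordon-and-drain behavior discussed a moment ago can be sketched as the equivalent manual kubectl steps. Normally Container Cloud does this for you when you scale a worker out; the node name below is hypothetical, and this is a sketch of the standard Kubernetes procedure, not the product's internal code:

```shell
# Sketch of the cordon-and-drain sequence performed when a worker is
# removed; node name is a hypothetical example. The equivalent manual
# kubectl steps (shown commented) would be:
NODE="child-cluster-worker-3"
# 1. Cordon: mark the node unschedulable so no new pods land on it
#      kubectl cordon "$NODE"
# 2. Drain: evict pods gracefully, respecting PodDisruptionBudgets
#      kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
# 3. Only after the drain completes is the machine deleted, so workloads
#    migrate to other nodes instead of crashing.
echo "cordon+drain would run against: $NODE"
```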
Then we used that to deploy one child cluster to run workloads on. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. So the idea is, you have the single management cluster that we started with in the first video, and in the next video we're going to learn how to spin up regional clusters, each one of which could manage, for example, a different AWS region. So let me pull up the video for that, Bill, and we'll check it out. >>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural review, how to set up the management environment, prepare for the deployment, a deployment overview, and then, just to prove it, deploy a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The UCP cluster, or child cluster, is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster? For different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; but also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster and its components, including items like the LCM Cluster Manager and Machine Manager, the Helm bundles that are managed, as well as the actual provider logic. Okay, we'll begin by logging in as the default administrative user, writer.
Okay, once we're in there, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller, and you see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. But as a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster; it's just a small instance on AWS. A few things we have to do to make sure the environment is ready. First we're going to switch to root, then go into our releases folder, where we have the kaas-bootstrap directory. This was the original bootstrap used to build the original management cluster. We're going to double-check that our kubeconfig is there — once again, the one created when the original cluster was created — and double-check that the kubeconfig is the correct one and does point to the management cluster. We're also just checking that we can reach the images, that everything is working, and that we can access the Docker images as well. Next we're going to edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates/aws directory. We don't need to edit anything else here, but we could change items like the size of the machines, the instance types we want to use. The key item to ensure you change is the AMI reference, so that the Ubuntu image is the one for the region, in this case the AWS region we're deploying to.
If this were an OpenStack deployment, we'd have to make sure we're pointing at the correct OpenStack images. Okay: set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is KAAS_AWS_ENABLED=true. Now we're setting the region for the new regional cluster — in this case it's Frankfurt — and exporting the kubeconfig that we want to use for the management cluster, the one we looked at earlier. Now we're exporting what we want to call the cluster region; since it's Frankfurt, we'll call it frankfurt. Try to use something descriptive that's easy to identify. And then, after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster's — there are fewer components to be deployed — but to make it watchable, we've sped it up. So: we're preparing our bootstrap cluster on the local bootstrap node. Almost ready, and we've started preparing the instances at AWS, waiting for the bastion node to get started. There's the bastion node. We're also starting to build the actual management machines; they're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise. This is probably the longest phase. You'll see in a second that all the nodes will go from deploying to deployed: prepare, prepare... you'll see their status change and update. There's the first node ready; the second one just applying; second one ready. Now we're waiting for the control plane to become ready, then moving the management cluster from the bootstrap instance into the new cluster running at AWS. Almost there. Now we're deploying StackLight. The switchover is done, and... done.
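Pulling those steps together, the shell setup shown in the demo looks roughly like this. Treat it as a sketch: the variable names and the bootstrap subcommand are approximations of what's narrated on screen, not verbatim from the product documentation, and the credential values are placeholders:

```shell
# Approximate sketch of the regional-bootstrap environment setup;
# variable names and values are taken loosely from the demo, not verbatim.
export AWS_ACCESS_KEY_ID="AKIA...EXAMPLE"          # placeholder credentials
export AWS_SECRET_ACCESS_KEY="EXAMPLE-SECRET"
export KAAS_AWS_ENABLED=true                        # enable the AWS provider
export REGION="eu-central-1"                        # Frankfurt
export KUBECONFIG="$HOME/releases/kaas-bootstrap/kubeconfig"  # management cluster
export REGIONAL_CLUSTER_NAME="frankfurt"            # descriptive, easy to identify
# Then, from the kaas-bootstrap folder, kick off the regional deployment,
# something along the lines of:
#   ./bootstrap.sh deploy_regional
echo "bootstrapping regional cluster '$REGIONAL_CLUSTER_NAME' in $REGION"
```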
Now I will build a child cluster in the new region, very quickly. To define the cluster, we'll pick our new credential, which has shown up — we'll just call it frankfurt for simplicity — add a key, and the cluster is defined. Next, the machines: the cluster starts with three managers; set the correct AMI for the region; do the same to add workers; and there we go, the cluster is building. Total build time should be about fifteen minutes. You can see it's in progress; we're going to speed this up a little bit. Check the events: we've created all the dependencies and machine instances; machines will be up shortly. We should have a working cluster in the Frankfurt region soon. Almost there: one node is ready, two in progress... and we're done. Cluster's up and running. >>Excellent. So at this point, we've now got that three-tier structure that we talked about before the video. We've got the management cluster that we bootstrapped in the first video; now we have, in this example, two different regional clusters — one in Frankfurt, one where the management cluster lives — in two different AWS regions; and sitting under those, you can bootstrap all the Docker Enterprise clusters that we want for our workloads. >>Yeah, that's the key to this: to be able to have, co-resident with your actual application-service-enabled clusters, the management components, so that you can quickly access the observability and dashboard services, like Grafana and that sort of thing, for your particular region, as opposed to having to log back into the home... what did you call it when we started? >>The mothership. >>The mothership, right. So we don't have to go back to the mothership; we can get it locally. >>Yeah. And to that point of aggregating things under a single pane of glass: that's one thing that again kind of sailed by in the demo really quickly, but you'll notice all your different clusters were on that same screen in your Docker Enterprise Container Cloud management
console, right. So both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place there. So it's just one pane of glass to go look at for all of your clusters. >>Right. And this is kind of an important point I was realizing as we were going through this: all of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer in both cases, so you only have managers, you don't have workers; and it's at the child-cluster layer, below the regional or management cluster, that you have the worker nodes. Those are the ones that host the application services in that three-tiered architecture that we've now defined. >>And another detail for those with sharp eyes: in that video you'll notice, when deploying a child cluster, there's not only a minimum of three managers for a high-availability management plane — you must have at least two workers as well. That's just required for workload failover: if one of those goes down and is out of commission, the other can potentially step in. So your minimum footprint for one of these child clusters is five nodes, and it's scalable, obviously, from there. >>That's right. >>Let's take a quick peek at the questions here, see if there's anything we want to call out, and then we'll move on to our last video. There's a question here about where these clusters can live. So again, I know these examples are very AWS-heavy; honestly, it's just easy to set up demos in AWS. We can do things on bare metal and with OpenStack deployments on-prem, and all of this still works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the providers, right?
You establish an AWS provider, or a bare metal provider, or an OpenStack provider — and eventually that list will include all of the other major players in the cloud arena. By selecting the provider within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't add the complexity of another hypervisor layer in between. So, continuing the theme: why Kubernetes on bare metal? Again, no hypervisor overhead, no virtualization overhead. Direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization and scheduling better; and we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI IP address, and the MAC address, and then provide a machine-type label to determine what type of machine it is for later use. Okay, let's get started. So we'll log in again as the operator.
We'll go and create a project for our machines to be members of; that helps with scoping for later on, for security. Then I begin the process of adding machines to that project. So the first thing: we add the host. We give the machine a name — anything you want — provide the IPMI username, type the IPMI password, then the MAC address for the boot interface, and then the IPMI IP address. These machines will be, variously, storage, worker, or manager nodes; this one's a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back: there we have it, our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. You can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as for any other child cluster: we'll create the cluster, give it a name, but this time we're selecting bare metal as the provider, along with the region. We're going to select the version we want to apply, and we're going to add the SSH keys. Then we're going to give the load balancer host IP that we'd like to use, out of the address range, and update the address range that we want to use for the cluster. Check that the CIDR blocks for the Kubernetes services and tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we're going to put in the count of machines we want as managers.
We're going to pick the manager label type and create three machines as the managers for the Kubernetes cluster. Then we add workers through the same process, just making sure that the worker label is applied at the host level. Then we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating the operating system, and deploying Docker Enterprise, making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info is now populated with more information about the specifics of things like storage, and of course details of the cluster, etcetera. We can now watch the machines go through the various stages from prepared to deployed as the cluster builds. And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster, and our deployment is complete. >>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess maybe the biggest difference, step-wise, is that there's that registration phase first, right? So rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud up front. And they really come in, I guess, three profiles, right? You've got your manager profile, your worker profile, and your storage profile, which have been labeled so they can be allocated across the cluster as appropriate. >>Right. And I think that the key differentiator here is that you have more physical control over the attributes — I love your cat, by the way — over the different attributes of a physical server.
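For reference, the host-registration step from the demo — name, IPMI credentials, MAC address, and a machine-type label — could be expressed declaratively along these lines. This is a loose sketch modeled on a Metal3-style BareMetalHost resource; the field names, label key, and every value here are illustrative, not the exact product schema:

```shell
# Illustrative only — modeled on a Metal3-style BareMetalHost resource;
# check the product documentation for the exact schema. All values are
# hypothetical placeholders.
cat > bm-host-01.yaml <<'EOF'
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: bm-host-01
  labels:
    hostlabel: manager                       # machine type: manager/worker/storage
spec:
  bmc:
    address: ipmi://10.0.0.11                # IPMI IP address
    credentialsName: bm-host-01-bmc-secret   # secret holding IPMI user/password
  bootMACAddress: "52:54:00:ab:cd:01"        # MAC of the PXE boot interface
  online: true
EOF
echo "defined host resource in bm-host-01.yaml"
```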
So you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, and the GPUs on the worker nodes, and that the management layer is going to have sufficient horsepower to spin up and scale up the environments as required. One of the things I wanted to mention, though — if I can get this out without choking — is that Sean mentioned the load balancer, and I wanted to make sure, in defining the load balancer and the load balancer ranges: that is for the top of the cluster itself. That's for the operations of the management layer, integrating with your systems internally, to be able to access the kubeconfigs and the API IP address in a centralized way. It's not the load balancer that's working within the Kubernetes cluster you're deploying — that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting step: it's your initial step in building this, and we typically use things like MetalLB or NGINX or that kind of thing to establish that before we deploy this bare metal cluster, so that it can ride on top of that for the VIPs and things. >>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management to our regional to our clusters on AWS and bare metal — and of course the OpenStack path is still available. Closing thoughts before we take just a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you. I'm really looking forward to doing it the second time, because we've got a good rhythm going with this kind of thing. So I'm looking forward to doing that.
But I think that the key element of what we're trying to convey to the folks out there in the audience — and what I hope you've gotten out of it — is that this is an easy enough process that, if you follow the step-by-steps in the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it: you can do it in AWS, as we've shown you today. And if you've got some fancy use cases — like you need Hadoop, or, you know, cloud-oriented AI stuff — then providing a bare metal service helps you get there very fast. So, right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So like I said, we're going to take a very short, like three-minute, break here. Take the opportunity to let your colleagues know, if they were in another session or they didn't quite make it to the beginning of this session — or if you just want to see these demos again — that we're going to kick off this demo series again in just three minutes, at ten twenty-five a.m. Pacific time, where we will see all this great stuff again. Let's take a three-minute break; I'll see you all back here in just a few minutes. Okay, folks, that's the end of our extremely short break. We'll give people just maybe one more minute to trickle in, if folks are interested in coming on in and jumping into our demo series again. So for those of you that are just joining us now: I'm Bill Mills, I head up curriculum development for the training team here at Mirantis. Joining me for this session of demos is Bruce — go ahead and introduce yourself... Bruce is still on break. That's cool, we'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? >>Okay, very well. So let's kick off our second session here. I'll just get the first video queued up for you.
Let it run over here. >>Alright. Hi, Bruce Matthews here. I'm the Western Regional Solutions Architect for Mirantis USA. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, Bill, take it away. >>Excellent. So over the next hour or so, we've got a series of demos that's going to walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis's brand new offering for bootstrapping Kubernetes clusters on AWS, bare metal, and OpenStack, with more providers in the very near future. So we've got just over an hour left together in this session. If you joined us at the top of the hour, back at nine a.m. Pacific, we went through these demos once already; let's do them again for everyone else that was only able to jump in right now. Let's go to our first video, where we're going to install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership that's going to spin up all the other Kubernetes clusters — the Docker Enterprise clusters — that we're going to run our workloads on. So let's do it. >>I'm so excited, I can hardly wait. >>Let's do it. Alright, let me share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The UCP cluster, or child cluster, is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools.
The second phase is obtaining a Mirantis license file. The third phase: preparing the AWS credentials and setting up the AWS environment. The fourth: configuring the deployment, defining things like the machine types. And the fifth phase: running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go: no credentials already set up on that particular node. Now we're just checking through AWS to make sure that the account we want to use has the correct credentials and the correct roles set up, and validating that there are no instances currently running in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right, so next step: we're just going to check that we can, from the bootstrap node, reach Mirantis, to get to the repositories where the various components of the system are available. There we go, no errors here. Now we're going to start setting up the bootstrap node itself. So we're downloading the KaaS release script, and then next we're going to run it. Once that's done, we change into the bootstrap folder, just to see what's there. Right now we have no license file, so we're going to get the license file. We get it through the Mirantis downloads site: signing up here, downloading that license file, and putting it into the kaas-bootstrap folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Let's see what the folder contains. Once again, we're checking that we can now reach EC2, which is extremely important for the deployment — just validation steps as we move through the process. Alright, the next big step is validating all of our AWS credentials. So the first thing is, we need those root credentials, which we're going to export on the command line.
This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running the AWS policy creation: part of that is our bootstrap script creating the policy files on the AWS side, generally preparing the environment using a CloudFormation script, as you'll see in a second. It creates the new policy confirmations; we're just waiting for it to complete, and there — it's done. If we go and have a look at the AWS console, you can see that the creation has completed. Now we can go and get the credentials that we created. Good. In the IAM console, go to the new user that's been created, go to the section on security credentials, and create new keys. Download that information — the access key ID and the secret access key — which we then export on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region. So we have it together in a second: access key ID, secret access key, and let's kick it off. So this process takes between thirty and forty-five minutes and handles all the AWS dependencies for you. As we go through, we'll show you how you can track it, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down — essentially moving itself over. Okay, the local cluster is built; we're just waiting for the various objects to get ready, standard Kubernetes objects here. We've sped up this process a little bit, just for demonstration purposes.
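While the recording is sped up, the overall flow of those five phases can be summarized as shell steps. This is a sketch: the directory layout, file names, and bootstrap subcommands below are approximations from the demo narration, so verify them against the current documentation before use, and the credential values are placeholders:

```shell
# Rough sketch of the five bootstrap phases as shell steps; paths, file
# names, and subcommands are approximations, and credentials are fake.
export AWS_ACCESS_KEY_ID="AKIA...EXAMPLE"
export AWS_SECRET_ACCESS_KEY="EXAMPLE-SECRET"
export KAAS_AWS_ENABLED=true
BOOTSTRAP_DIR="$HOME/releases/kaas-bootstrap"
# 1. Download and unpack the release tooling into $BOOTSTRAP_DIR
# 2. Place the license file obtained from the Mirantis downloads site:
#      cp ~/Downloads/mirantis.lic "$BOOTSTRAP_DIR/"
# 3. Create the bootstrap IAM user and policies, e.g.:
#      ./bootstrap.sh aws_policy
# 4. Review the machine templates (AMI, instance sizes) under templates/aws
# 5. Run the deployment and wait roughly 30-45 minutes:
#      ./bootstrap.sh all
echo "bootstrap dir: $BOOTSTRAP_DIR (AWS enabled: $KAAS_AWS_ENABLED)"
```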
Okay, there we go. So the first node is being built: the bastion host, the jump box that will allow us access to the entire environment. In a few seconds we'll see those instances here in the AWS console on the right. The failures you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay, there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. You can see the scaling up of controllers in the bootstrap cluster; that's indicating that we're starting all of the controllers in the new cluster. Almost there. Just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. Once this is completed, the last phase will be to deploy StackLight into the new cluster — the logging and monitoring toolset. There we go, the StackLight deployment has started. We're coming to the end of the deployment now; that was the final phase, and we are done. You'll see, at the end, it provides us the details for the UI login. So there's a Keycloak login; you can modify that initial default password as part of the configuration setup, as shown in the documentation. The console's up, and we can log in. Thank you very much for watching. >>All right, so at this point, what do we have? We've got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there, to make sure everyone caught that: as advertised, that's the Docker Enterprise Container Cloud management cluster. That's not where your workloads are going to go, right?
That is the tool you're going to use to start spinning up downstream, commodity Docker Enterprise clusters for bootstrapping your workloads, too. >>And the seed host that we're talking about — the kind cluster — actually doesn't have to exist after the bootstrap succeeds. So it sort of copies itself from the seed host to the targets in AWS, spins that up, boots the actual cluster, and then it goes away, because it's no longer necessary. >>So for that bootstrapping node there aren't really any requirements, hardly, right? It just has to be able to reach AWS, hit that API to spin up those EC2 instances — because, as you just said, it's just a Kubernetes-in-Docker cluster, and that bootstrap node is just going to get torn down after the setup finishes, and you no longer need it. Everything you're going to do, you're going to drive from the single pane of glass provided to you by your management cluster in Docker Enterprise Container Cloud. Another thing that I think is sort of interesting there is that the config is fairly minimal, really: you just need to provide things like the AWS region and the AMI, and that's what it's going to use to spin up that management cluster. >>Right. There is a YAML file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have defaults set, but you then have the option of going in and defining a different AMI for a different region, for example, or a different size of instance from AWS. >>One thing that people often ask about is the cluster footprint. And in that example you saw, they were spinning up a three-manager management cluster — that's mandatory, right? No single-manager setup at all; we want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage that we're at right now:
that's the very first thing you're going to do to set up Docker Enterprise Container Cloud, and you're going to do it, hopefully, exactly once. Now you've got your management cluster running, and you're going to use that to spin up all your other work clusters day to day, as needed. How about we have a quick look at the questions, and then let's take a look at spinning up some of those child clusters. >>Okay, I think they've actually all been answered. >>Yeah, for the most part. One thing I'll point out — it came up in the chat, helpfully pointed out earlier and pointed out again — is that if you want to try any of this stuff yourself, it's all in the docs. So have a look at the chat: there are links to step-by-step instructions to do each and every thing we're doing here today, yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalize these ideas, so after launchpad today, please give this stuff a try on your machines. Okay, so at this point, like I said, we've got our management cluster. We're not going to run workloads there; we're going to start creating child clusters. That's where all of our workloads are going to go, and that's what we're going to learn how to do in our next video. Cue that up for us. >>I so love Sean's voice. >>Don't we all. >>Yeah, I'd watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging into the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects — Mary only has access to "development" — and get a list of the available projects that you have access to.
What clusters have been deployed at the moment? There are none. Next, the SSH keys associated with Mary and her team, and the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name. We copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our machine. A very simple process. So, to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply, we go to the clusters tab and hit the create cluster button. Give the cluster a name, and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR or IP address information. We can change this should we wish to; we'll leave it default for now. And then, what components of StackLight would I like to deploy into my cluster? For this I'm enabling StackLight and logging, and I can set the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs, configure email alerting, for which I will need my smarthost details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined; I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. I select manager, and select the number of machines; three is the minimum.
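If you don't already have a keypair to paste into that upload-key form, generating one with stock OpenSSH looks like this (the file name and comment are arbitrary choices for the demo):

```shell
# Generate an Ed25519 keypair with no passphrase (fine for a demo;
# use a passphrase in production). Both commands are standard OpenSSH.
ssh-keygen -t ed25519 -N '' -f ./mary-demo-key -C "mary@example.com"
# The .pub half is what gets pasted into the UI's upload-key block:
cat ./mary-demo-key.pub
```

The private half (`mary-demo-key`) stays on Mary's machine and is what she'd later use to SSH into the cluster nodes.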
I select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go: my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMI is extremely important; the build will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see it listed as pending; the cluster is still in the process of being built. If we click on the events, we'll get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary metadata, and we have no warnings at this stage. Okay, this will then run for a while. One minute has passed. We can click through, and we can check the status of the machine builds individually, so we can check the machine info, details of the machines that we've assigned, and see any events pertaining to each machine, like this one, all normal. Just lastly, the Kubernetes components are waiting for the machines to start. Go back to clusters. Okay, right, we're moving ahead now; we can see we have it in progress: five minutes in, new NAT gateway, and at this stage the machines have been built and assigned, and they pick up their IPs. There we go, a machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process end to end takes about fifteen minutes. Run the clock forward and you'll notice the machines continue to build, in progress.
We'll go from in progress to ready. As soon as we're ready on all three machines, the managers and both workers, we can go on, and we can see that we've now reached the point where the cluster itself is being configured, and then, there we go: the cluster has been deployed. So once the cluster is deployed, we can navigate around our environment and look into the configured cluster. We can modify the cluster, and we can get the endpoints for Alertmanager. See here, the Grafana UI and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it at this stage. To download the kubeconfig so that I can put workloads on it, it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use the SSO; we give Mary's password and user name once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard; you can see that it has been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has been automatically pre-configured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster: for example, Kubernetes cluster information, the namespaces, deployments, nodes. So if we look at nodes, we can get a view of the resource utilization; as this is a new cluster, there is very little running in it. And there's a general dashboard of the Kubernetes cluster.
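Using the downloaded kubeconfig from a terminal is a one-liner with stock kubectl. The file name below is an assumption (use whatever the download saved); the commented commands are standard kubectl calls that would run against the child cluster once the real file is in place:

```shell
# Point kubectl at the kubeconfig downloaded from the UI.
# The file name here is a placeholder.
export KUBECONFIG="$PWD/kubeconfig-child-cluster.yaml"
# With the real file in place, standard kubectl calls now target
# the child cluster, for example:
#   kubectl get nodes
#   kubectl create deployment web --image=nginx
echo "using $KUBECONFIG"
```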
All of this is configurable: you can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they are available to all users who have access to this specific cluster. All right, scaling the cluster to add a node is as simple as the process of adding a node to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and we select create machine. Once again, we need to ensure that we put the correct AMI in, plus any other options we'd like. You can create different-sized machines, so it could be a larger node, or it could have bigger root disks, and you'll see that the worker has been added in the provisioning state; shortly, we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, we select the node we would like to remove, and I just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button will become available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the available release is 5.7.1. I kick off the update, and in the background we will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; two, in fact, in this case, and one has completed already. And in a few minutes we'll see that the upgrade has been completed. There we go. Great. Done. If your workloads are built using proper cloud-native Kubernetes standards, there will be no impact. >>All right, there we have it.
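The cordon-and-drain behavior Sean mentions is the same discipline you would apply by hand with stock kubectl; the product just automates it node by node. A reference sketch of the manual equivalent, with a hypothetical node name:

```shell
# The product runs steps like these for you on scale-down and updates.
# "worker-2" is a placeholder node name; kubectl cordon/drain/uncordon
# are standard kubectl subcommands.
#
#   kubectl cordon worker-2                     # stop scheduling new pods
#   kubectl drain worker-2 --ignore-daemonsets  # evict pods gracefully
#   kubectl uncordon worker-2                   # re-admit after an update
#
STEPS="cordon -> drain -> update-or-delete -> uncordon"
echo "$STEPS"
```

Because pods are evicted gracefully before a node is touched, properly replicated workloads keep serving throughout, which is exactly why Sean can say updates have no impact on cloud-native workloads.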
We got our first workload cluster spun up and managed by Docker Enterprise Container Cloud. So I loved Sean's classic warning there: when you're spinning up an actual Docker Enterprise deployment, you'll see little errors and warnings popping up. Just don't touch it. Just leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches; they resolve themselves and leave you with a functioning workload cluster within minutes. >>And now, if you think about it, that video was not very long at all. And that's how long it would take you if someone came to you and said, "Hey, can you spin up a Kubernetes cluster for development team A over here?" Um, it literally would take you a few minutes to, uh, accomplish that. And that was with AWS, obviously, which is sort of, ah, a transient resource in the cloud. But you could do exactly the same thing with resources on-prem, or, um, physical resources, and we'll be going through that later in the process. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of, ah, a cluster release. So when Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. What he didn't really explain is what that means. Well, in Docker Enterprise Container Cloud, we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes and Docker Enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale.
We test it for CVEs, test it for all of the various things that would, you know, result in issues with you running your application services. And I've got to tell you, from having, you know, managed Kubernetes deployments and things like that, that if you're the one doing it yourself, it can get rather messy. So this makes it easy. >>Bruce, you were saying a second ago that it'll take at least fifteen minutes to install your cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decisions about what components work well, or are best tested to be successful working together, as a stack. Absolutely. This release mechanism in Docker Enterprise Container Cloud lets us just kind of package up that expert knowledge and make it available in a really straightforward fashion, as these, uh, pre-configured release numbers, and, as Bruce was pointing out earlier, they get delivered to us as updates in a transparent way. When Sean wanted to update that cluster, a little update cluster button appeared when an update was available. All you've got to do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a little header at the top of the screen that says you've got an update available; do you want me to apply it? So >>Absolutely. Another couple of cool things I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. So we've had Prometheus metrics in Docker Enterprise for years and years now, but they're very high level, maybe, in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think, is a great value out there.
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides them out of the box for us. Yeah, >>that was, ah, really, uh, you know, the joining of the Mirantis and Docker teams together actually spurred us to be able to take the best of what Mirantis had in the OpenStack environment for monitoring and logging and alerting, and to do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >>Mhm. One other thing I wanna point out about that demo, which I think there were some questions about our last go-around, is that the demo was all about creating a managed workload cluster. So the Docker Enterprise Container Cloud managers were using those AWS credentials we provisioned to actually create new EC2 instances, install Docker Engine, install Docker Enterprise. Remember, all that stuff went on top of those fresh new VMs created and managed by Docker Enterprise Container Cloud. Nothing unique about AWS there; you can do that on OpenStack and on bare metal as well. Um, there's another flavor here, though, and a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years and years now. If you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud, uh, and use Docker Enterprise Container Cloud to manage those pre-existing working clusters. You don't always have to be bootstrapping straight from Docker Enterprise Container Cloud; plugging in external clusters works instead. >>Yep, the kubeconfig elements of the UCP environment, the bundling capability, actually give us a very straightforward methodology, and there are instructions on our website for exactly how to, uh, bring in and import a UCP cluster.
So it makes it very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. Any more thoughts on this one before we jump on to the next video? >>I think we should press on. >>Time marches on here, so let's carry on. Just to recap where we are right now: in the first video, we created a management cluster. That's what we're gonna use to create all our downstream workload clusters, which is what we did in this video. That's maybe the simplest architecture, because it's doing everything in one region on AWS. A pretty common use case, but we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and workload cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, a regional management cluster that exists per region, and those regional managers will be the ones responsible for spinning up child clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, how to prepare for the deployment, a deployment overview, and then, just to prove it, we'll deploy a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The child cluster is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster?
For different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; but also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, and its components, including items like the LCM cluster manager, the machine manager, and the Helm-managed components, as well as the actual provider logic. Okay, we'll begin by logging on as the default administrative user, writer. Once we're in there, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller. You can see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again with no workers. But as a comparison, here is a child cluster; this one has three managers, but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster. In this case it's on AWS, but it's still just a VM. All right, a few things we have to do to make sure the environment is ready. First, we're gonna sudo into root. Then we'll go into our releases folder, where we have the KaaS bootstrap; this was the original bootstrap used to build the original management cluster. We're going to double-check to make sure our kubeconfig is there; it's the one created after the original cluster was created. Just double-check that the kubeconfig is the correct one and does point to the management cluster. We're also just checking to make sure that we can reach the images and that everything's working; we can download our images and access them as well.
Next, we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates/aws directory. We don't need to edit anything else here, though we could change items like the size and types of the machines we want to use; the key item to change is the AMI reference, ensuring the Ubuntu image is the one for the region, in this case the AWS region we're utilizing. If this were an OpenStack deployment, we'd have to make sure we're pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. We need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is KAAS_AWS_ENABLED=true. Now we're setting the region for the new regional cluster; in this case it's Frankfurt. And we're exporting the kubeconfig that we want to use for the management cluster we looked at earlier. Now we're exporting what we want to call the cluster. The region is Frankfurt, so it's kaas-frankfurt; try to use something descriptive that's easy to identify. And then, after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster's; there are fewer components to be deployed, but to make it watchable, we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready, and we've started preparing the instances at AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise.
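The exports the narrator runs before re-invoking the bootstrap script look roughly like the following. `KAAS_AWS_ENABLED=true` is stated in the demo; the remaining variable names follow common AWS and kubectl conventions and are assumptions, and the exact bootstrap invocation should be taken from the product documentation:

```shell
# Credentials and target settings for the regional bootstrap (sketch).
export AWS_ACCESS_KEY_ID="AKIA...redacted"    # your AWS key ID
export AWS_SECRET_ACCESS_KEY="redacted"       # your AWS secret
export KAAS_AWS_ENABLED=true                  # named in the demo
export AWS_DEFAULT_REGION="eu-central-1"      # Frankfurt
export KUBECONFIG="$PWD/kubeconfig"           # management cluster's kubeconfig
export CLUSTER_NAME="kaas-frankfurt"          # descriptive, easy to identify
# ./bootstrap.sh ...   # then run the bootstrap script per the docs
echo "$KAAS_AWS_ENABLED $AWS_DEFAULT_REGION $CLUSTER_NAME"
```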
That's probably the longest phase. We'll see in a second that all the nodes go from deploy to prepare, and we'll see their statuses change as they update: the first one ready, the second still applying, then ready. Once all three managers become ready, we move the management of the cluster from the bootstrap instance into the new cluster, which then runs it for us. Almost there, and now we're deploying StackLight. And that's done. Done. Now we'll build a child cluster in the new region, very, very quickly. Define the cluster; our new credential will have shown up, and we'll just call it Frankfurt for simplicity. Add the key, and the cluster is defined. Next, the machines for that cluster: start with three managers, set the correct AMI for the region, and do the same to add workers. There we go, that's building. Total build time should be about fifteen minutes. You can see it's in progress; we can speed this up a little bit. Check the events: we've created all the dependencies and machine instances, and the machines are built. Shortly, we should have a working cluster in the Frankfurt region. Almost there: one node is ready, two in progress. And we're done. The cluster is up and running. >>Excellent. There we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster with which we bootstrap everything else, our regional clusters which manage individual AWS regions, and child clusters sitting underneath them. >>Yeah, you know, you can actually see in the hierarchy the advantages that presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them more readily, co-resident with your development teams. Um, and, uh, one of the other things I think is really unique about it is that we provide that same operational support system capability throughout.
So you've got StackLight monitoring the management cluster, StackLight monitoring the regional clusters, down to the actual child clusters that they have, >>all through that single pane of glass that shows you all your different clusters, whether they're workload clusters like the child clusters, or regional clusters for managing different regions. Cool. All right, well, time marches on, folks. We've only got a few minutes left, and I've got one more video, our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So far, everything we've seen has been AWS-focused, just because it's easy to demo on AWS, but we don't want to leave you with the impression that that's all we do; we're covering AWS, bare metal, and OpenStack deployments with Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based Docker Enterprise cluster. So, why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things and ensuring we don't need to create the complexity of adding another hypervisor layer in between. So, continuing on the theme: why Kubernetes on bare metal? Again, no hypervisor overhead, no virtualization overhead. Direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead, so we can handle utilization in the scheduling better.
And we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, IPMI credentials, IPMI address, and MAC address, and then provide a machine-type label to determine what type of machine it is and its related use. Okay, let's get started. We'll log in as the operator user. We'll go and create a project for our machines to be a member of; this helps with scoping later on, for security. Then I begin the process of adding machines to that project. So, the first thing we have to do is give the machine a name, anything you want; in this case, bare metal zero one. Provide the IPMI user name and password, the MAC address for the PXE boot interface, and then the IPMI IP address. These machines we label by role: storage, worker, or manager; this one's a manager. We're gonna add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back there: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, time to create the cluster. So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as for any other child cluster. So: create cluster, give it a name, select bare metal as the provider and select the region, select the version we want to apply, and add the SSH keys we want. We're going to give the load balancer host IP that we'd like to use out of the address range, update the address range that we want to use for the cluster, check that the CIDR blocks for Kubernetes and the tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers: we pick the label type manager and create three machines as managers for the Kubernetes cluster. Then we add workers the same way; it's the same process, just making sure that the worker label and host type are correct, and then we wait for the machines to deploy. We go through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about the specifics of things like storage, and of course details of the cluster, etcetera. Okay, we can now watch the machines go through the various stages from prepared to deployed, and watch the cluster build. And that brings us to the end of this particular demo; as you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>Here we have a child cluster on bare metal, for folks that wanted to deploy this stuff on-prem.
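The per-host fields entered in that UI (name, IPMI credentials and address, boot MAC, machine-type label) amount to a small record per machine. A hedged sketch follows; the key names are illustrative, not the product's actual resource schema, and the credentials and addresses are placeholders:

```shell
# Write an illustrative bare metal host record mirroring the UI fields.
cat > bm-host-example.yaml <<'EOF'
# illustrative field names only
name: baremetal-01
ipmi:
  address: 10.0.0.11                 # IPMI/BMC IP address
  username: admin
  password: example-password         # kept in a secret in practice
bootMACAddress: "52:54:00:ab:cd:01"  # MAC of the PXE boot interface
label: manager                       # manager | worker | storage
EOF
echo "host record written"
```

One such record per physical machine, with the label steering whether it becomes a manager, worker, or storage node, is what lets the inspection and provisioning steps in the demo run unattended.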
>>It's, ah, been an interesting journey taken from the mothership: we started out building a management cluster, then populated it with a child cluster, then created a regional cluster to spread out geographically the management of our clusters, and finally provided a platform for supporting, you know, AI needs and big data needs. Uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal, in containers; it's pretty exciting. >>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up clusters: Docker Enterprise clusters that can be spun up and used quickly, taking provisioning times, you know, from however many months to get new clusters spun up for your teams, down to minutes. Right, we saw those clusters get built in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis' products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we can offer you workshops in a number of different formats, on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad event. >>Thank you all; enjoy.
Why Multi-Cloud?
>>Hello, everyone. My name is Rick Pew. I'm a senior product manager at Mirantis, and I have been working on Docker Enterprise Container Cloud for the last eight months. Today we're going to be talking about multi-cloud Kubernetes. So the first thing to look at is: is multi-cloud real? The term gets thrown around a lot, and I should mention that in this presentation we use the term multi-cloud to mean both multi-cloud, which in the technical sense really means multiple public clouds, and hybrid cloud, which means public clouds and on-prem. We'll use the term multi-cloud to refer to all the different types of multiple clouds, whether it's all public cloud, a mixture of on-prem and public cloud, or, for that matter, multiple on-prem clouds, as Docker Enterprise Container Cloud supports all of those scenarios. So is it real? Let's look at some research that came out of Flexera in their 2020 State of the Cloud report. You'll notice that 33% state that they've got multiple public and one private cloud, and 53% say they've got multiple public and multiple private clouds. If you add those two up, you get 86% of the people saying that they're in multiple public clouds and at least one private cloud. So I think at this stage we can say that multi-cloud is a reality. According to 451 Research, a number of CIOs stated that a strong driver was the desire to optimize cost savings across their private and public clouds. They also wanted to avoid vendor lock-in by operating in multiple clouds, and to dissuade their teams from taking too much advantage of a given provider's proprietary infrastructure. But they also indicated that the complexity of using multiple clouds hindered the rate of adoption. It doesn't mean they're not doing it; it just means that in many cases they don't go as fast as they would like to, because of the complexity. And here at Mirantis:
We surveyed our customers as well, and they're telling us similar things. Risk management through the diversification of providers is key on their list, along with cost optimization and democratization: allowing their development teams to create Kubernetes clusters without having to file an IT ticket, giving them instead a self-service, cloud-like environment, even if it's on-prem or multi-cloud, with the ability to create, resize, and delete their own clusters without needing to have IT or their operations teams involved at all. But there are some challenges with this. The different clouds require different automation to provision the underlying infrastructure, or deploy an operating system, or deploy Kubernetes, for that matter, in a given cloud. You could say that they're not that complicated; they all have very powerful consoles and APIs to do that. But to get across three or four or five different clouds, you have to learn three or four or five different APIs and web consoles to make that happen, and in that scenario it's difficult to provide self-service for developers across all the cloud options, which is what you want in order to really accelerate your application innovation. So what's in it for me? We've got a number of roles in the enterprise: developers, operators, and business leaders, and they have somewhat different needs. On the developer side, the need is flexibility to meet their development schedules. They're under constant pressure to produce, and to do that they need flexibility, in this case the flexibility to create Kubernetes clusters and use them across multiple clouds. They also have CI/CD tools, and they want those to be normalized and automated across all of the on-prem and public clouds that they're using.
In many cases they'll have a test-and-deployment scenario where they'll want to create a cluster, deploy their software, run their tests, score the tests, and then delete that cluster, because the only point of that cluster, perhaps, was to test a delivery pipeline. So they need that kind of flexibility. From the operator's perspective, they always want to be able to customize the control of their infrastructure and deployment. They certainly have the desire to optimize their opex and capex spend. They also want to support their DevOps teams, who many times are their customers, through API access for on-prem and public clouds. Burst scaling is something operators are interested in, and something public clouds can provide: the ability to scale out into public clouds, perhaps from their on-prem infrastructure, in a seamless manner. And many times they need to support geographic distribution of applications, either for compliance or performance reasons, so having data centers all across the world and being able to specifically target a given region is high on their list. Business leaders want flexibility and the confidence to know that their on-prem and public cloud deployments are fully supported. Like the operator, they want to optimize their cloud spend. Business leaders think about disaster recovery, so having the applications running and living in different data centers gives them that option. And they really want the flexibility of keeping private data under their control on-prem: certain applications may access that data on-prem, while other applications may be able to fully run in the cloud. So what should I look for in a container cloud? You really want something that fully automates these cluster deployments: the virtual machines or bare metal, the operating system, and Kubernetes. It's not just deploying Kubernetes.
It's: how do I create my underlying infrastructure of a VM or bare metal? How do I deploy the operating system? And then, on top of all that, I want to be able to deploy Kubernetes. You also want one that gives you unified cluster lifecycle management across all the clouds. These clusters are running software that gets updated; Kubernetes has its own release cycle, and when something new comes out, how do you get that across all of your clusters running in multiple clouds? We also need a container cloud that can provide visibility through logging, monitoring, and alerting, again across all the clouds. Many offerings have these for a particular cloud, but getting that across multiple clouds becomes a little more difficult. Docker Enterprise Container Cloud is a very strong solution and really meets many of these dimensions, the dimensions we went through in the last slide. We've got on-prem and public clouds as of GA today: we're supporting OpenStack and bare metal for the on-prem solutions, and AWS in the public cloud. We'll be adding VMware very soon as another on-prem solution, as well as Azure and GCP. So thank you very much. I look forward to answering any questions you might have, and we'll call that a wrap. Thank you. >>Hi, Rick. Thanks very much for that talk. I am John James. You've probably seen me in other sessions; I do marketing here at Mirantis, and I wanted to take this opportunity, while we have Rick, to ask some more questions about multi-cloud. It's potentially a pretty big topic, isn't it, Rick? >>Yeah. The devil's in the details, and there are lots of details we could go through. I'd be happy to answer any questions that you have. >>Well, we've been talking about hybrid cloud for literally years.
This is something that several generations of folks in the IaaS space, doing on-premise IaaS, for example with OpenStack the way Mirantis does, thought had a lot of potential. A lot of enterprises believed that, but there were things stopping people from making it real. In many cases it required a very high degree of willingness to create homogeneous platforms in the cloud and on premise, and that was often very challenging. But it seems like with things like Kubernetes, and with the isolation provided by containers, this is beginning to shift: people are actually looking for some degree of application portability between their on-prem and their cloud environments, and this is opening up investment and interest in pursuing this. Is that the right perception? >>Yeah. So let's break that down a little bit. What's nice about Kubernetes is that the APIs are the same, regardless of whether it's something that Google or AWS is offering as a platform-as-a-service, or whether you've taken the upstream open source project and deployed it yourself on premise, in a public cloud, or whatever the scenario might be, even a competitor's product. The Kubernetes API is the same, which is the thing that really gives you that application portability. The container itself is containerizing your application and minimizing any dependency issues you might have, and then the ability to deploy that to any of the Kubernetes clusters is the same regardless of where it's running. The complexity comes in how do I actually spin up a cluster in AWS and OpenStack and VMware and GCP and Azure.
How do I build that infrastructure and spin it up, and then use the ubiquitous Kubernetes API to actually deploy my application and get it to run? So what we've done is we've unified, and I use the word normalized, but a lot of times people think that normalization means you're going to a lowest common denominator, which really isn't the case in how we've attacked the enabling of multi-cloud. What we've done is look at each one of the providers, and we're basically providing an API that allows you to utilize whatever the best of that particular breed of provider has, not going to a least common denominator, but still giving you a single API by which you can create the infrastructure. And the infrastructure could be an on-prem bare metal infrastructure, an on-prem OpenStack or VMware infrastructure, or any of the public clouds. You have an API that works for all of them, and we've implemented that API as an extension to Kubernetes itself. So for all of the developers, DevOps folks, and operators that are already familiar with operating within the Kubernetes API, it's a very, very natural extension to be able to spin up these clusters and deploy them. >>Now that's interesting. Without giving away what may be special sauce: are you actually using operators to do this, in the Kubernetes sense of the word? >>Yes. We've extended it with CRDs and operators and controllers, in the way that it was meant to be extended. Kubernetes has a recipe for how you extend its API, and that's what we used as our model. >>That, at least to me, makes enormous sense. Nick Chase, my colleague, and I were digging into operators a couple of weeks ago, and that's a very elegant technology.
Obviously it's evolving very fast, but it's remarkably unintimidating once you start trying to write them. We were able to compose operators around cron and other simple processes, and in a couple of minutes they worked, which I found pretty astonishing. >>Yeah. Kubernetes does a lot of things, and knowing that their API was going to be ubiquitous and that people would want to extend it, they spent a lot of effort in the early development days defining that API: defining what an operator is, what a controller is, how they interact, and how a third party who doesn't know anything about the internals of Kubernetes can add whatever it is they want and follow the model that makes it work exactly as the native Kubernetes APIs do. >>What's also fascinating to me, and I've had a little perspective on this over the past several weeks or a month or so, working with various stakeholders inside the company on sessions related to this event, is that the understanding of how things work is by no means evenly distributed, even in a company as tightly knit as Mirantis. Some people, who shall remain nameless, have represented to me that Docker Enterprise Container Cloud basically works like this: you hand it some VMs and it makes things for you. And this is clearly not what's going on. What's going on is a lot more nuanced: it is using optimal resources from each provider to provide really coherent, architected solutions. The load balancing, the DNS, the storage, right? All of which would ultimately be, and you've probably tried this, I certainly have, hard to script by yourself in Ansible or CloudFormation or whatever. This is not easy work.
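The operator pattern Rick describes boils down to a reconcile loop: compare the declared desired state against the observed state and compute the actions that close the gap. Here is a minimal, library-free sketch of that idea; the state shapes are invented for illustration, and a real controller would of course watch live Kubernetes API objects rather than plain dictionaries:

```python
# Minimal sketch of the reconcile loop at the heart of a Kubernetes
# operator: diff desired state against observed state and emit actions.
# The dictionaries here are illustrative, not any real controller's types.

def reconcile(desired, observed):
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    # Create anything declared but not yet present; update anything drifted.
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    # Delete anything present but no longer declared.
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"worker-1": {"size": "m5.large"}, "worker-2": {"size": "m5.large"}}
observed = {"worker-1": {"size": "m5.large"}, "worker-3": {"size": "t3.small"}}
print(reconcile(desired, observed))
# [('create', 'worker-2', {'size': 'm5.large'}), ('delete', 'worker-3', None)]
```

A real operator runs this loop continuously against the cluster, which is what makes the self-healing behavior mentioned earlier possible: any drift from the declared state is detected and corrected on the next pass.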
About the middle of last year, for my prior employer, I wrote a deployer in Node.js against the raw AWS APIs for deployment and configuration of virtual networks and servers, and that was not a trivial project. It took a long time to get to a dependable result, and to do it in parallel along with the other things you need to do in order to maintain speed. One of the things, in fact, that I've noticed in working with Docker Enterprise Container Cloud recently is how much parallelism it's capable of within single platforms. It's pretty powerful. If you want two clusters deployed simultaneously, that's not hard for Docker Enterprise Container Cloud to do, and I found that pretty remarkable, because I have sat in front of a single laptop trying to churn out a cluster under Ansible, for example, and >>you get into that serial nature, your >>poor little devil, it's going out and SSHing into terminals, pretending it's a person and doing all that stuff. This is much more magical. So that's all built into the system too, isn't it? >>Yeah, really interesting point on that. The complexity isn't necessarily in just creating a virtual machine, because all of these companies have spent a lot of effort to make that as easy as possible. But when you get into networking, load balancing, routing, and storage, and hooking those up to containers, automating that in Terraform or Ansible or something like that is many, many lines of code. People have to experiment; you never get it right the first or second or third time. And then you have to maintain it.
So one of the things we've heard from customers that have looked at Container Cloud is that they just can't wait to throw away the Ansible or Terraform they've been maintaining for a couple of years, which Container Cloud enables them to do. That kind of code is very brittle. If the cloud changes something, say on the network side, that's really buried, and it's not top of mind; your automation fails, or maybe worse, you think it works, and it's not until you actually go to use it that you notice you can't reach any of your containers. So it's really great the way we've simplified that for users and, again, democratized it, so that developers and DevOps people can create these clusters with ease and not worry about all the complexities of networking and storage.
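The declarative alternative Rick is contrasting with imperative Terraform or Ansible might look something like the following sketch of a cluster request expressed as a Kubernetes-style custom resource. To be clear, the API group, kind, and field names here are invented for illustration, not Docker Enterprise Container Cloud's actual schema:

```python
# Hypothetical sketch of declaring a cluster as a Kubernetes-style custom
# resource: the user states the desired end result, and controllers do the
# provisioning. All names below are illustrative, not a real product schema.

REQUIRED_SPEC_FIELDS = {"provider", "release", "workerCount"}

def make_cluster_manifest(name, provider, release, worker_count):
    """Build a manifest in the shape of a Kubernetes custom resource."""
    return {
        "apiVersion": "example.kaas.dev/v1alpha1",  # hypothetical group/version
        "kind": "ContainerCloudCluster",            # hypothetical kind
        "metadata": {"name": name},
        "spec": {
            "provider": provider,        # e.g. "aws", "openstack", "baremetal"
            "release": release,          # Kubernetes release to deploy
            "workerCount": worker_count, # desired worker machines
        },
    }

def validate(manifest):
    """Reject manifests missing fields a controller would require."""
    missing = REQUIRED_SPEC_FIELDS - manifest.get("spec", {}).keys()
    if missing:
        raise ValueError(f"missing spec fields: {sorted(missing)}")
    return True

manifest = make_cluster_manifest("demo", "aws", "1.18.5", 3)
print(validate(manifest))  # True
```

The point of the shape, rather than the specific fields, is what matters: because the request is data describing an end state instead of a script of steps, the platform can retry, parallelize, and self-heal without the user maintaining any of that logic.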
But they don't get the keys to the kingdom. They don't have access to anything they're not supposed to have access to, but within their own scope they're safe; they can do anything they want. So it's a really neat, elegant way of protecting organizations against, for example, resource overuse. Give people the power to deploy clusters and you're basically giving them the power to make sure a big bill hits your corporate accounting office at the end of the billing cycle, so there have to be controls, and those controls exist here. >>Yeah, and there are kind of two flavors of that. One is day one, when you're doing the deployment: you mentioned the seed server, and then it creates a bastion server, and then the management cluster and so forth, and how all those permissions are handled. And then once the system is running, you have full access to go into Keycloak, which is a very powerful open source identity management tool, and you have dozens of granular permissions you can give to an individual user, allowing them to do certain things and not others within the context of Kubernetes. It's really well thought out, and the defaults are 80% right; very few people are going to have to go in and change those defaults. You mentioned the corporate directory: it hooks right up to LDAP or Active Directory and can pull everybody down. So there's no day-one work of having to go add everybody you can think of across different teams and groupings of people; that's all handled through the integration with the corporate directory. And so it just makes managing the users, and controlling who can do what, really easy.
And day one, day two, it's really almost like hour one, hour two, because all the defaults are really well thought out. You can deploy a very powerful Docker Enterprise Container Cloud within an hour, and then you can just start using it. You can create users if you want, or use the default users that are set up. As time goes on, you can fine-tune that, and it's a really nice model, again, for the whole frictionless democratization of giving developers the ability to go in, get IT out of their way, and do what they want to do. And IT is happy with that, because they don't like dozens of tickets saying create a cluster for this team, create a cluster for that team, here are the sizes, these folks want to resize. Let's move all that into a self-service model and really fulfill the promise of speeding up application development.
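The scoped self-service model described above can be sketched as a toy permission check: each user acts freely within their own projects and nowhere else. The role names and rules here are invented for illustration; the product's actual permissions, as Rick notes, live in Keycloak and are far more granular:

```python
# Toy sketch of scope-limited self-service: devs get full power inside
# their own projects, and nothing outside them. Roles and rules are
# invented for illustration, not a real RBAC model.

ROLE_RULES = {
    "admin":  {"any_project": True,  "actions": {"create", "resize", "delete", "grant"}},
    "dev":    {"any_project": False, "actions": {"create", "resize", "delete"}},
    "viewer": {"any_project": False, "actions": {"view"}},
}

def is_allowed(user, action, project):
    """Allow an action only if it is in the role's action set and in scope."""
    rules = ROLE_RULES[user["role"]]
    in_scope = rules["any_project"] or project in user["projects"]
    return in_scope and action in rules["actions"]

alice = {"role": "dev", "projects": {"team-a"}}
print(is_allowed(alice, "create", "team-a"))  # True: own scope
print(is_allowed(alice, "create", "team-b"))  # False: outside scope
print(is_allowed(alice, "grant", "team-a"))   # False: no keys to the kingdom
```

The design point is the pairing of two independent checks, scope and action: a dev can do anything inside team-a, including running up the bill there, but the blast radius is bounded by the scope, which is exactly the control the conversation describes.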
By letting people skip that, and letting them skip it potentially on multiple providers, I would think products like this are actually doing the public cloud industry a real service. Hide as much of that as you can without taking the power away, because ultimately people want to control their destiny. They want choice for a reason, and they want access to the infinite services and innovation that AWS and Azure and Google are all doing on their platforms. >>Yeah, and they're solving very broad problems in the public clouds. Here we're saying: this is a world of containers, a world of orchestration of those containers, so why should I have to worry about the underlying infrastructure, whether it's a virtual machine or bare metal? I shouldn't care if I'm an application developer working on some database application. The last thing I want to worry about is how to go in and create a virtual machine. Oh, this one's running in Google, and it's totally different from the one I was creating in AWS; I can't find where I get the IP address in Google, it's not like it was in AWS, and I have to relearn the whole thing. That's really not what your job is anyway. Your job is to write database code, for example, and what you really want to do is just push a button, deploy an orchestrator, get your app on it, and start debugging it and getting it to work. >>Yep. It's powerful. I've been really excited to work with the product the past week or so, and I hope that folks will look at the links at the bottoms of our thank-you slides and avail themselves of the free trial downloads of both Docker Enterprise Container Cloud and Lens. Thank you very much for spending this extra time with me, Rick. I think we've produced some added value here for attendees. >>Well, thank you, John.
I appreciate your help. >>Have a great rest of your session. Bye-bye. >>Okay, thanks. Bye.
Securing Your Cloud, Everywhere
>>Welcome to our session on security, titled Securing Your Cloud, Everywhere. With me is Brian Langston, senior solutions engineer from Mirantis, who leads security initiatives for Mirantis' most security-conscious customers. Our topic today is security, and we're setting the bar high by talking in some depth about the requirements of the most highly regulated industries. So, Brian, for regulated industries, what do you perceive as the benefits of the evolution from classic infrastructure-as-a-service to container orchestration? >>Yeah, the adoption of container orchestration has given rise to five key benefits. The first is accountability. Think about the evolution of DevOps and the security-focused version of that team, DevSecOps. These two competencies have emerged to provide, among other things, accountability for the processes they oversee and the outputs they enable. The second benefit is auditability. Logging has always been around, but the pervasiveness of logging data within container environments allows for the definition of audit trails in new and interesting ways. The third area is transparency. Organizations that have well-developed container orchestration pipelines are much more likely to have a high degree of transparency in their processes. This helps development teams move faster, helps operations teams identify and resolve issues more easily, and helps simplify the observation and certification of security operations by security organizations. Next is quality. Several decades ago, Toyota revolutionized the manufacturing industry when they implemented the philosophy of continuous improvement. Included within that philosophy was a dependency on, and trust in, the process: as the process was improved, so was the quality of the output. Similarly, the refinement of the process of container orchestration yields a higher-quality output.
The four things I've mentioned ultimately point to a natural outcome, which is speed. When you don't have to spend so much time wondering who does what or who did what, when you have clear visibility into your processes, and because you can continuously improve the quality of your work, you aren't wasting time in a process that produces defects or spending time in wasteful rework phases. You can move much faster, and we've seen this to be the case with our customers. >>So what is it specifically about container orchestration that gives these benefits? I guess I'm really asking: why are these benefits emerging now, around these technologies? What's enabling them? >>Right. So I think it boils down to four things related to the orchestration pipelines that are also critical components of successful security programs for our customers in regulated industries. The first one is policy. One of the core concepts in container orchestration is the idea of declaring what you want to happen, or declaring the way you want things done. One place where declarations are made is policies. So as long as we can define what we want to happen, it's much easier to do complementary activities like enforcement, which is our second enabler. Tools that allow you to define a policy typically have a way to enforce that policy; where this isn't the case, you need another way of enforcing and validating the policy's objectives. Mirantis tools allow custom policies to be written and also enforce those policies. The third enabler is the idea of a baseline. Having a well-documented set of policies and processes allows you to establish a baseline; it allows you to know what's normal. Having a baseline allows you to measure against it as a way of evaluating whether or not you're achieving your objectives with container orchestration. The fourth enabler is continuous assessment, which is about measuring constantly, back to what I said a few minutes ago.
In the Toyota way, measuring constantly helps you see whether your processes and your target end state are being delivered. As your output deviates from that baseline, your adjustments can be made more quickly. So these four concepts, I think, can really make or break your compliance status. >>It's a really interesting way of thinking about compliance. I had previously thought about compliance mostly as a matter of legally declaring something and then trying to do it. But at this point, we have methods beyond legal boilerplate for asserting what we want to happen, as you say, and this is actually opening up new ways to detect deviation and to enforce when something fails to comply. That's really exciting. So you've touched on the benefits of container orchestration here, and you've provided some thoughts on what the drivers and enablers are. Where does Mirantis fit in all this? How are we helping enable these benefits? >>Right. Well, our goal at Mirantis is ultimately to make the world's most compliant distribution. We understand what our customers need, and we have developed our product around those needs, so let me describe a few key security aspects of the product. Mirantis promotes the idea of building and enabling a secure software supply chain. The simplified version of that, as it pertains directly to our product, follows a build-ship-run model. At the build stage is Docker Trusted Registry. This is where images are stored, following numerous security best practices. Image scanning is an optional but highly recommended feature to enable within DTR, and image tags can be regularly pruned so that you have the most current validated images available to your developers. The second, or middle, stage is the ship stage, where Mirantis enforces policies that follow industry best practices, as well as custom image promotion policies that our customers can write and align to their own internal security requirements.
The third and final stage is the run stage. And at this stage, we're talking about the engine itself. Docker Engine - Enterprise is the only container runtime with FIPS 140-2 validated cryptography and has many other security features built in. Communications across the cluster, across the container platform, are all secure by default. So this build-ship-run model is one way our products help support this idea of a secure supply chain. There are other aspects of the secure supply chain that are more customer-specific that I won't go into, but that's broadly how our product can help. So I just touched on the secure supply chain; the second big area is STIG certification. A STIG is basically an implementation or configuration guide, but it's published by the U.S. government for products used by the U.S. government. It's not exclusive to them, but customers that value security highly, especially in a regulated industry, will understand the significance and value that the STIG certification brings. In achieving the certification, we've demonstrated compliance or alignment with a very rigid set of guidelines. The FIPS validation of the cryptography and the STIG certification are third-party attestations that our product is secure, whether you're using our product as a government customer, whether you're a customer in a regulated industry, or something else. >>I did not understand what the STIG really was. That's helpful, because this is not something that I think people in the industry by and large talk about. I suspect because these things are hard to get and time consuming to get, so they don't tend to bubble up to the top of marketing speak the way glitzy new features do that may or may not >>be secure. >>So then, moving on: how has container orchestration changed how your customers approach compliance assessment and reporting?
>>Yeah. This has been an interesting experience and observation as we've worked with some of our customers in these areas. I'll call out three areas. One is the integration of assessment tooling into the overall development process. The second is assessment frequency, and the third is how results are being reported, which includes what data is needed to go into the reporting. There are very likely others that could be addressed, but those are three things that I have noticed personally in working with customers. >>What do you mean exactly by integration of assessment tooling? >>Yeah. So our customers all generally have some form of a development pipeline and process, with various third-party and open source tools that can be inserted at various phases of the pipeline to do things like static source code analysis or host scanning or image scanning and other activities. What's not very well established in some cases is how everything fits within the overall pipeline framework. So for too many customers, it ends up being a conversation with us about what commands should be run, with what permissions, where in the environment things should run, how the code that does this scanning gets there, where the data goes once the scan is done, and how it will be consumed. These are real things where we can help our customers understand what integration of assessment tooling really means. >>It is fascinating to hear this, and maybe we can come back to it at the end. But what I'm picking out of the way you speak about this is a kind of re-emergence of those Japanese innovations in factory-floor productivity: just-in-time delivery, the Toyota miracle, and that kind of thing. Yesterday, Anders Wahlgren from CloudBees, of course the CI/CD expert, told me that one of the things he likes to tell his consultees and customers is to put a GoPro on the head of your code and figure out where it's going and how it's spending its time, which is very reminiscent of those 1950s time-and-motion studies, isn't it, that pioneered accelerating the factory floor in the industrial America of the mid-century. The idea that we should be coming back around to this and doing it at light speed with code now is quite fascinating. >>Yeah, it's funny how many of those same principles are really transferrable from 50, 60, 70 years ago to today. Quite fascinating. >>So getting back to what you were just talking about, integrating assessment tooling: it sounds like that's very challenging. And you mentioned assessment frequency and reporting. What is it about those areas that's required adaptation? >>So, assessment frequency. If we think about what legacy environments looked like not too long ago, compliance assessment used to be a relatively infrequent activity in the form of some kind of an audit, whether a friendly peer review, an intercompany audit, or a formal third-party assessment. In many cases, these were big, lengthy reviews full of interview questions, requests for information, periods of data collection, and then the actual review itself. One of the big drawbacks to this lengthy and infrequent engagement is that vulnerabilities would sometimes go unnoticed or unmitigated until these reviews. But in this era of container orchestration, with the decomposition of everything in the software supply chain and with clearer visibility of the various inputs to the build life cycle, our customers can now focus on what tooling and processes can be assembled together in the form of a pipeline that allows constant inspection of a continuous flow of code from start to finish.
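The "assembled together in the form of a pipeline" idea can be made concrete with a toy pipeline runner: assessment tools execute at fixed stages and their findings flow into one report. The tools, stage names, and finding shapes below are stand-ins, not any specific SAST or scanning product.

```python
# Toy pipeline: assessment tools run at fixed stages and their findings
# are aggregated into a single report. The functions are stand-ins for
# whatever real tools a pipeline would invoke at each phase.

def static_analysis(source_dir):
    # stand-in for a language-specific static source code analyzer
    return [{"stage": "build", "id": "SA-1", "severity": "high"}]

def image_scan(image_ref):
    # stand-in for a registry-integrated image vulnerability scanner
    return [{"stage": "ship", "id": "VULN-1", "severity": "critical"}]

def run_pipeline(source_dir, image_ref):
    findings = []
    findings.extend(static_analysis(source_dir))  # inspect at build...
    findings.extend(image_scan(image_ref))        # ...and again at ship
    return findings

report = run_pipeline("app/", "registry.example.com/app:1.0")
print([f["id"] for f in report])  # ['SA-1', 'VULN-1']
```

Answering "what runs, where, and where the data goes" amounts to filling in these stand-ins for your own environment.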
And they're asking how our product can integrate into their pipelines, into their QA frameworks, to help simplify this continuous assessment framework. So that addresses the frequency challenge. Now, regarding reporting: our customers have had to reevaluate how results are being reported and the data that's needed in the reporting. The root of this change is in the fact that security has multiple stakeholder groups, and I'll just focus on two of them. One is development, and their primary focus, if you think about it, is really about finding and fixing defects. That's really all they're focused on; they're pushing code. The other group, though, is the Security Project Management Office, or PMO. This group is interested in what security controls are at risk due to those defects. So the data that you need for these two stakeholder groups is very different. But because it's also related, it requires a different approach to how the data is expressed, formatted, and ultimately integrated with sometimes different data sources, to be able to serve both use cases. >>So how does Mirantis help improve the rate of compliance assessment, as well as this question of the need for differential data presentation? >>Right. So we've developed and exposed APIs that help report the compliance status of our product as it's implemented in our customers' environments. Through these APIs, we express the data in industry-standard formats using OSCAL. OSCAL is a relatively new project out of the NIST organization. It's really all about standardizing a set of formats that express control information. So in this way our customers can get machine- and human-readable information related to compliance, and that data can then be massaged into other tools or downstream processes that our customers might have.
And what I mean by downstream processes is: if you're a development team and you have the inspection tools and the processes to gather findings, defects related to your code, a downstream process might be a ticketing system like Jira that logs a formal defect for that finding. But it all starts with having a common, standard way of expressing the scan output and the findings, such that both development teams and the security PMO groups can benefit from the data. So essentially we've been following this philosophy of transparency in security. What we mean by that is that security isn't, or should not be, a black box of information only accessible and consumable by security professionals. Assessment is happening proactively in our product, and it's happening automatically. We're bringing security out of obscurity by exposing the aspects of our product that ultimately have a bearing on your compliance status, and then making that information available to you in very user-friendly ways. >>It's fascinating. I have been excited about OSCAL since first hearing about it. It seems extraordinarily important to have what is, in effect, a query capability that lets different people, for different reasons, formalize and ask questions of a system that is constantly in flux. Very, very powerful. So regarding security, what do you see as the basic requirements for container infrastructure and tools for use in production by the industries that you are working with? >>Right. So obviously the tools and infrastructure are going to vary widely across customers. But to generalize, I would refer back to the concept I mentioned earlier of a secure software supply chain. There are several guiding principles behind this that are worth mentioning. The first is to have a strategy for ensuring code quality.
What this means is being able to do static source code analysis. Static source code analysis tools are largely language-specific, so there may be a few different tools that you'll need to have to be able to manage that. The second point is to have a framework for doing regular testing, or even slightly more formal security assessments. There are plenty of tools that can help get a company started doing this. Some of these tools are scanning engines like OpenSCAP, which is also a product of NIST. OpenSCAP can use CIS benchmarks as inputs, and these tools do a very good job of summarizing and visualizing output. Along the same lines as CIS benchmarks: there are many, many benchmarks published, and if you look at your own container environment, there are very likely to be benchmarks covering the core platform, the building blocks, of your container environment. There are benchmarks for Ubuntu, for Kubernetes, for Docker, and the list is always growing. In fact, Mirantis is editing the benchmark for containerd, so that will be a formal CIS benchmark coming up very shortly. The next item would be defining security policies that align with your organization's requirements. There are a lot of things that come out of the box, that come standard or default in various products, including ours, but we also give you, through our product, the ability to write your own policies that align with your own organization's requirements. Minimizing your attack surface is another key area. What that means is only deploying what's necessary. Pretty common sense, but sometimes it's overlooked. It means only enabling required ports and services and nothing more. And it's related to this concept of least privilege, which is the next thing I would suggest focusing on. Least privilege is related to minimizing your attack surface; it's about only allowing permissions to those people or groups that are absolutely necessary.
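The least-privilege idea reduces to an explicit allow list: nothing is permitted unless it appears there. A minimal sketch, with made-up service names and ports:

```python
# Deny-by-default: a request is permitted only if it appears on an
# explicit allow list; everything else is refused. The (service, port)
# pairs here are made up for illustration.

ALLOWED = {("web", 443), ("api", 8443)}  # the only pairs we actually need

def is_allowed(service: str, port: int) -> bool:
    return (service, port) in ALLOWED  # anything not listed is denied

print(is_allowed("web", 443))   # True: explicitly allowed
print(is_allowed("db", 5432))   # False: denied by default
```

The same shape applies whether the allow list is ports and services, RBAC role bindings, or network policy rules.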
Within the container environment, you'll likely have heard of the deny-all approach. This deny-all approach is recommended here, which means deny everything first and then explicitly allow only what you need. That's a common thing that's sometimes overlooked in some of our customer environments. And finally, the idea of defense in depth, which is about minimizing your blast radius by implementing multiple layers of defense that are also in line with your own risk management strategy. Following these basic principles and adapting them to your own use cases and requirements can, in our experience with our customers, go a long way toward having a secure software supply chain. >>Thank you very much, Brian. That was pretty eye-opening. And I had the privilege of listening to it from the perspective of someone who has been working behind the scenes on the Launchpad 2020 event. So I'd like to use that privilege to recommend to our listeners, if you're interested in this stuff, and certainly if you work within one of these regulated industries in a development role, that you may want to check out Matt Bentley's live presentation on the secure supply chain, which will be easy for you to do today, since everything is available once it's been presented. He demonstrates one possible example of a secure supply chain that permits image signing, scanning, and content trust. You may also want to check out the session that I conducted with Anders Wahlgren at CloudBees, who talks about these industrial-efficiency, factory-floor, time-and-motion models for assessing where software is, in order to understand what policies can and should be applied to it. And you will probably want to frequent the tutorial sessions in that track to see how Docker Enterprise Container Cloud implements many of these concentric security policies in order to provide, as you say, defense in depth.
There's a lot going on in there, and it's fascinating to see it all expressed. Brian, thanks again. This has been really educational. >>My pleasure. Thank you. >>Have a good afternoon. >>Thank you too. Bye.
DOCKER CLI
>>Hello, my name is John Sheikh, from Mirantis. Welcome to our session on new extensions for Docker's CLI. As we all know, containers are everywhere, Kubernetes is coming on strong, and the CNCF cloud-native landscape slide has become a marvel to behold: its complexity is about to surpass that of the photolithography masks used to fabricate the old Intel 286, and future generations of the diagram will be built out and up into multiple dimensions using extreme ultraviolet lithography. Meanwhile, complexity is exploding, and uncertainty about tools, platform details, processes, and the economic viability of our companies in changing and challenging times is also increasing. Mirantis, as you've already heard today, believes that achieving speed is critical and that speed results from balancing choice with simplicity and security. You've heard about Docker Enterprise Container Cloud, a new framework built on Kubernetes that lets you deploy compliant, secure-by-default Kubernetes clusters on any infrastructure, providing a seamless, self-service-capable cloud experience to developers. Get clusters fast, just as you need them; update them seamlessly; scale them as needed, all while keeping workloads running smoothly. And you've heard how Docker Enterprise Container Cloud also provides all the Day One and Day Two observability tools, the integration APIs, and top-down security, identity, and secrets management to run operations efficiently. You've also heard about Lens, an open source IDE for Kubernetes, aimed at speeding up the most demanding, tightest inner loop of Kubernetes application development. Lens beautifully meets the needs of a new class of developers who need to deal with multiple Kubernetes clusters and multiple apps and projects efficiently; developers who find themselves getting bogged down in CLI-only kubectl workflows and the context switches into and out of them. But what about Docker developers? They're working with the same core technologies all the time.
They're accessing many of the same amenities, including Docker Engine - Enterprise, Docker Trusted Registry, and so on. Sure, their outer loop might be different; for example, they might be orchestrating on Swarm. Many companies are. Our Future of Swarm session talks about the ongoing appeal of Swarm and Mirantis' commitment to maintaining and extending the capabilities of Swarm going forward. Docker Enterprise Container Cloud can, of course, deploy Docker Enterprise clusters with 100% Swarm orchestration on compute nodes just as easily; it can provide Kubernetes orchestration or mixed Swarm and Kubernetes clusters. The problem for Docker devs is that nobody's given them an easy way to use Kubernetes without a learning curve and without getting familiar with new tools and workflows, many of which involve GUIs and are somewhat tedious for people who live on the command line and like it that way. Until now. In a few moments you'll meet my colleagues Chris Price and Laura Powell, who enact a little skit to introduce and demonstrate our new extended Docker CLI plug-in for Kubernetes. That plug-in offers seamless new functionality, enabling easy context management between the Docker command line and Docker Enterprise clusters deployed by Docker Enterprise Container Cloud. We hope it will help devs work faster and help them adopt Kubernetes as they and their organizations manage platform coexistence or transition. Here's Chris and Laura, or, as we like to call them, Developer A and Developer B. >>Have you seen the new release of Docker Enterprise Container Cloud? I'm already finding it easier to manage my collection of UCP clusters. >>I'm glad it's helping you. It's great that we can manage multiple clusters, but the user interface is a little bit cumbersome. >>Why is that? >>Well, if I want to use the Docker CLI with a cluster, I need to download a client bundle from UCP and use it to create a context. I like that I can see what's going on, but it takes a lot of steps. >>Let me guess. Are these the steps?
First you have to navigate to the web UI for Docker Enterprise Container Cloud. You need to enter your user name and password. And since the cluster you want to access is part of the demo project, you need to change projects. Then you have to choose a cluster, so you choose the first demo cluster here. Now you need to visit the UCP UI for that cluster; you can use the link in the top right corner of the page. Is that about right? >>Uh, yep. >>And this takes you to the UCP UI login page. Now you can enter your user name and password again, but since you've already signed in with Keycloak, you can use that instead. So that's good. Finally, you've made it to the landing page. Now you want to download a client bundle, which you can do by visiting your user profile; you'll generate a new bundle called demo and download it. Now that you have the bundle on your local machine, you can import it to create a Docker context. First, let's take a look at the contexts already on your machine. I can see you have the default context here. Let's import the bundle and call it demo. If we look at our contexts again, you can see that the demo context has been created. Now you can use the context and you'll be able to interact with your UCP cluster. Let's take a look to see if any stacks are running in the cluster. I can see you have a stack called my-stack in the default namespace, running on Kubernetes. We can verify that by checking the UCP UI, and there it is: my-stack, in the default namespace, running on Kubernetes. Let's try removing the stack, just so we can be sure we're dealing with the right cluster, and it disappears, as you can see. It's easy to use the Docker CLI once you've created a context, but it takes quite a bit of effort to create one in the first place. Imagine... >>Yes. Imagine if you had 10 or 20 or 50 clusters to work with. It's a management nightmare. >>Haven't you heard of the Docker Enterprise Container Cloud CLI plug-in?
>>No. >>I think you're going to like it. Let me show you how it works. It's already integrated with the Docker CLI. You start off by setting it up with your Container Cloud instance. All you need to get started is the base URL of your Container Cloud instance and your user name and password. I'll set mine up right now. I have to enter my user name and password this one time only, and now I'm all set up. >>But what does it actually do? >>Well, we can list all of our clusters. As you can see, I've got the cluster demo-one in the demo project and the cluster demo-two in the demo project. Taking a look at the web UI, these are the same clusters we're seeing there. >>Let me check. Looks good to me. >>Now we can select one of these clusters, but let's take a look at our contexts before and after, so we can understand how the plug-in manages a context for us. As you can see, I just have my default context stored right now, but I can easily get a context for one of our clusters. Let's try demo-two. The plug-in says it's created a context called container cloud for me, and it's pointing at the demo-two cluster. Let's see what our contexts look like now, and there's the container cloud context, ready to go. >>That's great. But are you saying that once you've run the plug-in, the Docker CLI just works with that cluster? >>Sure. Let me show you. I've got a Docker stack right here, and it deploys WordPress. We'll deploy it to Kubernetes. Head over to the UCP UI for the cluster so you can verify for yourself. Are you ready? >>Yes. >>First I need to make sure I'm using the context, and then I can deploy. And now we just have to wait for the deployment to complete. It's as easy as ever. >>You weren't lying. Can you deploy the same stack to Swarm on my other clusters? >>Of course. And that should also show you how easy it is to switch between clusters. First, let's just confirm that our stack is reported as running.
I've got a stack called wordpress-demo in the default namespace, running on Kubernetes. To deploy to the other cluster, first I need to select it. That updates the container cloud context, so I don't even need to switch contexts, since I'm already using that one. If I check again for running stacks, you can see that our WordPress stack is gone. Bring up the UCP UI on your other cluster so you can verify the deployment. >>I'm ready. >>I'll start the deployment now. It should be appearing any moment. >>I see the services starting up. That's great. It seems a lot easier than managing contexts manually. But how do I know which cluster I'm currently using? >>Well, you can just list your clusters, like so. Do you see how this one has an asterisk next to its name? That means it's the currently selected cluster. >>I'm sold. Where can I get the plug-in? >>Just go to github.com/mirantis/container-cloud-cli and follow the instructions.
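The demo illustrates the plug-in's key design choice: instead of importing one context per cluster, it maintains a single managed context and repoints it whenever you select a cluster, so you never switch contexts by hand. A rough model of that behavior, with illustrative names only (this is not the plug-in's actual implementation):

```python
class ContainerCloudCLI:
    """Toy model of the plug-in's design: one managed Docker context,
    repointed whenever a cluster is selected."""

    CONTEXT = "container-cloud"

    def __init__(self, clusters):
        self.clusters = clusters   # cluster name -> UCP endpoint
        self.selected = None

    def select(self, name):
        # Rather than creating a new context per cluster, repoint the
        # single managed context at the chosen cluster's endpoint.
        self.selected = name
        return (self.CONTEXT, self.clusters[name])

    def list(self):
        # The asterisk marks the currently selected cluster.
        return ["%s %s" % ("*" if n == self.selected else " ", n)
                for n in self.clusters]

cli = ContainerCloudCLI({"demo-one": "https://ucp-1", "demo-two": "https://ucp-2"})
print(cli.select("demo-two"))  # ('container-cloud', 'https://ucp-2')
print(cli.list())              # ['  demo-one', '* demo-two']
```

This is why, in the skit, selecting demo-two made subsequent `docker` commands target the other cluster with no explicit context switch: the one context moved, not the user.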
Adrian and Adam Keynote
>>Welcome everyone. Good morning and good evening to all of you around the world. I am so excited to welcome you to Launchpad, our annual conference for customers, for partners, and for our own colleagues here at Mirantis. This is meant to be a forum for learning, for sharing, for discovery; one of openness. We're incredibly excited to have you here with us. I want to take a few minutes this morning to open the conference and share with you, first and foremost, where we're going as a company: what is our vision? Then I also want to share with you an update on what we have been up to for the past year, especially with two important acquisitions, Docker Enterprise and then Kontena and Lens, and what some of the latest developments at Mirantis are. And then I'll close with an exciting announcement that we have today, which we hope is going to be interesting and valuable for all of you. But let me start with our mission. What are we here to do? It's very simple: we want to help you ship code faster. This is something that we're very excited about, something that we have achieved for many of you around the world, and we just want to double down on it. We feel this is a mission that's very much worthwhile, relevant, and important to you. Now, how do we do that? How do we help you ship code faster? There are three things we believe in. We believe that in this world of cloud, choice is incredibly important. We all know that developers want to use the latest tools. We all know that cloud technology is evolving very quickly and new innovations appear very, very quickly, and we want to make them available to you. So choice is very important. At the same time, consuming choice can be difficult. So our mission is to make choice simple for you, to give developers and operators simplicity. And then, finally, underpinning everything that we do is security.
These are the three big things that we invest in and that we believe in: choice, simplicity, and security. And the foundational technology that we're betting on to make that happen for you is Kubernetes. Many of you, many of our customers, use Kubernetes from Mirantis today, and they use it at scale. This is something we want to double down on. The fundamental benefit, the key promise we want to deliver for you, is speed. And we feel this is very relevant, important, and valuable in the world that we are in today. So you might also be interested in what our priorities have been since we acquired Docker Enterprise. What has happened for the past year at Mirantis? There are three very important things we focused on as a company. The first one is customer success. When we acquired Docker Enterprise, the first thing we did is listen to you, connect with the most important customers, and find out what your sentiment was. What did you like? What were you concerned about? What needed to improve? How can we create more value and a better experience for you? So customer success has been at the top of our list of priorities ever since. And here is what we've heard, here is what you've told us. You've told us that you very much appreciated the technology, that you got a lot of value out of the technology, but that at the same time there are some things that we can do better. Specifically, you wanted better SLAs and a better support experience. You also wanted more clarity on the roadmap. You also wanted to have a deeper alignment and a deeper relationship between your needs and requirements and our technical development, the key people in our development organization, our most important engineers. So those three things were very, very important to you, and they were very important to us here. So we've taken that to heart, and over the past 12 months we believe, as a team, we have dramatically improved the customer support experience.
We introduced new SLAs with ProdCare. We've rolled out a roadmap to many, many of our customers. We've taken your requirements into consideration, and we've built better and deeper relationships with so many of you. And the evidence that we've actually made some progress is a significant increase in the workloads and in the usage of our platforms. I was so fortunate that we were able to build better and stronger relationships and take companies like Visa, Société Générale, Nationwide, Bosch, AXA XL, GlaxoSmithKline, Standard & Poor's, Apple and AT&T to the next level of growth. So many, many of you, many of our customers around the world, I believe, have over the past 12 months experienced better support, stronger SLAs, a deeper relationship, and a lot more clarity on our roadmap and our vision forward. The second very big priority for us over the last year has been product innovation. This is something that we are very excited about, that we've invested most of our resources in, and we've delivered some strong proof points. Docker Enterprise 3.1 has been the first release that we have shipped as Mirantis, as the unified company. It's had some big innovative features, like Windows support, AI and machine learning use cases, and a significant number of improvements in stability and scalability. Earlier this year, we were very excited to have acquired Lens and the Kontena team. Lens is by far the most popular Kubernetes IDE in the world today, and every day 600 new users are starting to use Lens to manage their Kubernetes clusters, to deploy applications on top of Kubernetes, and to dramatically simplify the Kubernetes experience for operators and developers alike. That is a very big step forward for us as a company.
And then finally, this week at this conference, we are announcing our latest product, which we believe is a huge step forward for Docker Enterprise, and which we call Docker Enterprise Container Cloud; you will hear a lot more about that during this conference. The third vector of development, the third priority for us as a company over the past year, was to become more and more developer-centric. As we've seen over the past 10 years, developers really move the world forward. They create innovation, they create new software. And while our platform is often managed and run, and maybe even purchased, by IT architects and operators and IT departments, the actual end users are developers. And we made it our mission as a company to become closer and closer to developers, to better understand their needs, and to make our technology as easy and fast to consume as possible for developers. So as a company we're becoming more and more developer-centric, and really the two core products which fit together extremely well to make that happen are Lens, which is targeted squarely at a new breed of Kubernetes developers, sitting on the desktop and managing Kubernetes environments and the applications on top, on any cloud platform, anywhere; and then Docker Enterprise Container Cloud, which is a new and radically innovative container platform which we're bringing to market this week. So with this as background, what is the fundamental problem which we solve for you, for our customers? What is it that we feel are your pain points that we can help you resolve? We see two very, very big trends in the world today, which you are experiencing. On one side, we see the power of cloud emerging, with more features, more innovation, more capabilities coming to market every day. But with those new features and new innovations, there is also an exponential growth in cloud complexity, and that cloud complexity is becoming increasingly difficult to navigate for developers and operators alike.
And at the same time, we see the pace of change continuing to accelerate, both in the economy and in technology as well. So when you put these two things together, on one hand you have more and more complexity, and on the other hand you have faster and faster change. This makes for a very, very daunting task for enterprises, developers and operators to actually keep up and move with speed. And this is exactly the central problem that we want to solve for you. We want to empower you to move with speed in the middle of rising complexity and change, and do it successfully and with confidence. So with that in mind, we are announcing this week at Launchpad a big new concept to take the company forward and take you with us to create value for you. We call this Your Cloud Everywhere, which empowers you to ship code faster. Docker Enterprise Container Cloud is a linchpin of Your Cloud Everywhere. It's a radical new container platform which gives you, our customers, a consistent experience on public clouds and private clouds alike, which enables you to ship code faster on any infrastructure, anywhere, with a cohesive cloud fabric that meets your security standards, that offers a choice of private and public clouds, and that offers developers a simple, extremely easy and powerful experience. All of this is underpinned by Kubernetes, the foundation technology we're betting on going forward to help you achieve your goals. At the same time, Lens, the Kubernetes IDE, also fits very well into the Your Cloud Everywhere concept, and it's a second very strong linchpin to take us forward, because it creates the developer experience. It supports developers directly on their desktop, enabling them to manage Kubernetes workloads and to test, develop and run Kubernetes applications on any infrastructure, anywhere. So Docker Enterprise Container Cloud and Lens complement each other perfectly.
So I'm very, very excited to share this with you today and to open the conference for you. And with this, I want to turn it over to my colleague Adam Parco, who runs product development at Mirantis, to share a lot more detail about Docker Enterprise Container Cloud: why we're excited about it, why we feel it is a radical step forward for you, and why we feel it can add so much value to your developers and operators who want to embrace the latest Kubernetes technology and the latest container technology on any platform, anywhere. I look forward to connecting with you during the conference, and I wish you all the best. Bye bye. >>Thanks, Adrian. My name is Adam Parco, and I am vice president of engineering and product development at Mirantis. I'm extremely excited to be here today and to present to you Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is a major leap forward: it turbocharges our platform. It is your cloud, everywhere. It has been completely designed and built around helping you to ship code faster. The world is moving incredibly quickly, and we have seen unpredictable and rapid changes. It is the goal of Docker Enterprise Container Cloud to help navigate this insanity by focusing on speed and efficiency. To do this requires three major pillars: choice, simplicity and security. The less time between a line of code being written and that line of code running in production, the better. When you decrease that cycle time, developers are more productive, efficient and happy. The code is of higher quality, contains fewer defects, and when bugs are found, they are fixed quicker and more easily. In turn, your customers get more value, sooner and more often. Increasing speed and improving developer efficiency is paramount. To do this, you need to be able to cycle through coding, running, testing, releasing and monitoring, all without friction. We enable this by offering containers as a service through a consistent, cloud-like experience.
Developers can log into Docker Enterprise Container Cloud and, through self-service, create a cluster. No IT tickets, no infrastructure-specific experience required. Need a place to run a workload? Simply create one; nothing is quicker than that. The clusters are presented consistently no matter where they're created, so you can integrate your pipelines and start deploying secure images everywhere, instantly. You can't have cloud speed if you start to get bogged down by managing, so we offer fully automated lifecycle management. Let's jump into the details of how we achieve cloud speed. The first is cloud choice. Developers, operators, admins, users: they all want, and in fact mandate, choice. Choice is extremely important to efficiency, speed, and ultimately the value created, and you have cloud choice throughout the full stack. Choice allows developers and operators to use the tooling and services they are most familiar with and most efficient with, or perhaps simply allows them to integrate with any existing tools and services already in use, letting them integrate and move on. Docker Enterprise Container Cloud isn't restrictive: it's open and flexible. The next important choice we offer is in orchestration. We hear time and time again from our customers that they love Swarm, that it's simply enough for the majority of their applications, that it just works, and that they have the skills and knowledge to use it effectively. They don't need to become or find Kubernetes experts to get immediate value, so we will absolutely continue to offer this choice in orchestration; our existing customers can rest assured their workloads will continue to run great, as always. On the other hand, we can't ignore the popularity, the growth, the enthusiasm and the community ecosystem that have exploded around Kubernetes, so we will also be including a fully conforming, tested and certified Kubernetes. Going down the stack, you can't have choice or speed without your choice of operating system. This ties back to developer efficiency.
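The self-service flow described above boils down to handing the platform a small cluster specification. As a rough illustration only (the field names, the `ClusterRequest` kind, and the provider values here are assumptions for the sketch, not Docker Enterprise Container Cloud's actual API), such a request and a minimal sanity check might look like:

```python
# Illustrative only: a hypothetical self-service cluster-request payload.
# Field names and supported providers are assumptions, not the real
# Docker Enterprise Container Cloud API.

SUPPORTED_PROVIDERS = {"aws", "openstack", "baremetal"}  # the talk's initial targets

def make_cluster_request(name: str, provider: str, node_count: int) -> dict:
    """Build a self-service cluster request, failing fast on obvious mistakes."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    if node_count < 1:
        raise ValueError("a cluster needs at least one node")
    return {
        "apiVersion": "example/v1",   # hypothetical version string
        "kind": "ClusterRequest",
        "metadata": {"name": name},
        "spec": {"provider": provider, "nodes": node_count},
    }

req = make_cluster_request("team-a-dev", "aws", 3)
print(req["spec"])
```

The point of the sketch is the shape of the interaction: a developer submits a small declarative request and the platform does the rest, instead of a ticket queue and manual provisioning.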
We want developers to be able to leverage their operating system of choice, so we're initially supporting full-stack lifecycle management for Ubuntu, with other operating systems, like Red Hat, to follow shortly. Lastly, all the way down at the bottom of the stack, is your choice in infrastructure. Choice in infrastructure is in our DNA: we have always promoted no lock-in and the flexibility to run where needed. Initially we're supporting OpenStack, AWS, and full lifecycle management of bare metal, and we also have a roadmap for VMware and other public cloud providers. We know there's no single solution for the unique and complex requirements our customers have. This is why we're doubling down on being the most open platform: we want you to truly make this your cloud. If done wrong, all this choice at speed could have been extremely complex. This is where cloud simplification comes in. We offer a simple and consistent as-a-service cloud experience, from installation to Day 2 ops. Clusters are created using a single pane of glass no matter where they're created, giving a simple and consistent interface. Clusters can be created on bare metal, in private data centers and, of course, on public cloud. Applications will always have specific operating requirements, for example data protection, security, cost efficiency, edge, or leveraging specific services on public infrastructure. Being able to create a cluster on the infrastructure that makes the most sense, while maintaining a consistent experience, is incredibly powerful for developers and operators. This helps developers move quickly, by letting them leverage the infra and services of their choice, and helps operators, by letting them use the available compute with the most efficient infra available. Now that we have users self-creating clusters, we need centralized management to support this increase in scale. Docker Enterprise Container Cloud is the single pane of glass for observability and management of all your clusters.
We have Day 2 ops covered to keep things simple and keep you moving fast. From this single pane of glass, you can manage the full-stack lifecycle of your clusters from the infra up, including Docker Enterprise, as well as the fully automated deployment and management of all components deployed through it. What I'm most excited about is Docker Enterprise Container Cloud as a service. What do I mean by as a service? Docker Enterprise Container Cloud is fully self-managed and continuously delivered. It is always up to date, always security patched, always available, with new features and capabilities pushed often and directly to you: a truly as-a-service experience, anywhere you want it to run. Security is of utmost importance to Mirantis and our customers. Security can't be an afterthought, and it can't be added later. With Docker Enterprise Container Cloud, we're maintaining our leadership in security. We're doing this by leveraging the proven security in Docker Enterprise. Docker Enterprise has the best and the most complete security certifications and compliance, such as DISA STIG and FIPS 140-2. These security certifications allow us to run in the world's most secure locations. We are proud and honored to have some of the most security-conscious customers in the world, from industries like insurance, finance and health care, as well as public, federal and government agencies. With Docker Enterprise Container Cloud, we put security as our top concern, but importantly, we do it with speed. You can't move fast with security in the way, so to solve this we've added what we're calling invisible security: security enabled by default and configured for you as part of the platform. Docker Enterprise Container Cloud is multi-tenant, with granular RBAC throughout, in conjunction with Docker Enterprise, Docker Trusted Registry and Docker Content Trust.
We have a complete end-to-end secured software supply chain: only run the images that have gone through the appropriate channels and that you have authorized to run, on the most secure container engine in the industry. Lastly, I want to quickly touch on scale. Today, cluster sprawl is a very real thing. There are test clusters, staging clusters and, of course, production clusters. There are also different availability zones, different business units, and so on. There are clusters everywhere, and these clusters are running all over the place. We have customers running Docker Enterprise on premise, embracing public cloud (and not just one cloud), and they might also have some bare metal. So cloud sprawl is also a very real thing. All these clusters on all these clouds are a maintenance and observability nightmare, and this is a huge friction point to scaling. Docker Enterprise Container Cloud solves these issues and lets you scale quicker and more easily. A little recap of what's new: multi-cluster management, so you can deploy and attach all your clusters wherever they are; multi-cloud, including public, private and bare metal, so you can deploy your clusters to any infra; self-service cluster creation, with no more IT tickets to get resources, for incredible speed; automated full-stack lifecycle management, including Docker Enterprise Container Cloud itself, as a service, from the infra up; and centralized observability with a single pane of glass for your clusters, their health, and your apps. Most importantly, for our existing Docker Enterprise customers: you can, of course, add your existing clusters to Docker Enterprise Container Cloud and start leveraging the many benefits it offers immediately. So that's it. Thank you so much for attending today's keynote. This was very much just a high-level introduction to our exciting release; there is so much more to learn about and try out. I hope you are as excited as I am to get started today with Docker Enterprise Container Cloud. Please attend the tutorial tracks. Up next is Miska, with the world's most popular Kubernetes IDE, Lens. Thanks again, and I hope you enjoy the rest of our conference.
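The "only run the images you have authorized" supply-chain policy described above can be illustrated with a tiny allow/deny gate. This is a sketch of the idea only: real Docker Content Trust verifies cryptographic signatures via Notary, whereas here we simply model an allow-list of trusted image digests.

```python
# Sketch of an "only run authorized images" admission gate, in the spirit of
# the secure supply chain described above. Real Docker Content Trust verifies
# Notary signatures; this sketch just models an allow-list of digests that
# have been promoted through an approved pipeline.

TRUSTED_DIGESTS = {
    "sha256:aaa111",  # hypothetical digest of an approved image
    "sha256:bbb222",
}

def admit(image_digest: str) -> bool:
    """Admit a container only if its digest was authorized upstream."""
    return image_digest in TRUSTED_DIGESTS

print(admit("sha256:aaa111"))  # True: promoted image is allowed to run
print(admit("sha256:ccc333"))  # False: unknown image is rejected
```

The design point is that the decision happens at admission time, on an identity (the digest) that cannot be forged by retagging, which is why signed-digest policies compose well with a trusted registry.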
>>Hello. I'm Miska Kaipiainen, the principal of the Lens open source project and senior director of engineering at Mirantis. I'm excited to be here today at the Launchpad 2020 virtual conference. I will be your guide, helping you navigate the rough waters of Kubernetes and containers, and showing you how to take full advantage of this great new technology with the help of Lens, the Kubernetes IDE. It's happening all around us: containers and Kubernetes are everywhere. Every day, hundreds of thousands of people create new clusters, develop containerized applications, and deploy those applications on top of Kubernetes. It has become the golden standard for container orchestration. How did we get here? The industry has been very creative and innovative in the ways Kubernetes has been marketed. With the help of the DevOps movement, empowering individual development teams, leveraging the twelve-factor model and infrastructure-as-code principles, we created the need for a system that is able to abstract everything, a single system to rule them all. Kubernetes has become this system. It has become the operating system for the cloud. But hey, people say Kubernetes is difficult and complex. Absolutely, many people and organizations are struggling to adopt Kubernetes at scale. There is complexity on top of complexity, and on top of this, you might need to unlearn some of the things you were used to doing in the past. Having had the chance to speak with hundreds of Kubernetes users and operators, from beginners to ninja-level hackers, I feel Kubernetes is not too difficult or complex. People only get this perception when they are using primitive or limited tools for the job, or when they have failed to address the needs of all the different stakeholders. By using proper, quality tools and products, we can truly harness the power of Kubernetes and radically improve the speed of business.
To get there, in my mind, we have at least two important stakeholders to serve. First, we have the ops and IT admins, who want a system for centralized Kubernetes cluster creation, operations and management, and who care a lot about the underlying infrastructure, security and conformance. The industry has been serving these people very well, and there are amazing products for this segment; Docker Enterprise Container Cloud is a great example of such a product. Secondly, we have developers, who are in fact the consumers of the clusters provided by the ops and IT admins. They are the people who actually access the clusters on a daily basis to deploy, run, manage, debug, inspect and observe the workloads running on top of Kubernetes. The availability and quality of tools and products for this segment has been lacking. Very luckily, that's not the case anymore, and that's the focus of my talk today. The takeaway I want you to have from this is simple: unless we have quality tools and products for both of these important stakeholders, we might not get all the benefits we were looking for. Docker Enterprise Container Cloud will get you on top, and when combined with the product I'm about to talk about next, it will take you where you want to be. I'm so excited about this: Lens, the Kubernetes IDE. IDE stands for integrated development environment; we could call it an integrated operations environment as well, but let's stick with IDE for a little bit longer. Now, if this were a non-virtual conference, I would be asking how many of you have heard of, or actually tried using, Lens before. It's okay, let's make it interactive anyway. I would probably see something like 20% of people raising their hands. To be honest, I'm amazed how many people have started using Lens already; it's been out for only six months or so.
Lens combines all the essential tools and technologies required for streamlining cloud native application development and Day 2 operations. It's all you need to take control of Kubernetes clusters and the workloads running on top of them. For example, you might have found it hard to understand what is really going on in your clusters. With Lens, you will have complete situational awareness of all your clusters and workloads, you will understand what's going on, and you can quickly take action if needed. Lens is designed for developers who need to work with Kubernetes on a daily basis. For somebody who is just getting started, Lens lowers the barrier of entry, because it lets you explore your clusters and workloads very easily, try out different things, and visualizes everything in a way that makes sense and provides full context. If you are a very experienced, ninja-level heavy user, you will get things done fast. In essence, by using Lens you will become more productive, and the quality of life is improved a lot. Lens is a standalone desktop application for macOS, Windows and Linux operating systems. It's free and fully open source under the MIT license. If you want to get started, simply download the Lens application from the Lens website and start adding your clusters. Now you might wonder how Lens plays together with the Mirantis offering, ship code faster. At Mirantis, we want to convert open source innovation into customer value, and we want to be the best in the world at this. We want to increase developer velocity to continuously deliver code faster for public and private clouds, and in order to do that, we want to put the developer at the center. We want to invest in products and technologies that improve developer productivity, that speed, ship code faster. To have speed, we have to get the right amount of simplicity, choice and security. Simplicity does not mean fewer features: it means amazing usability and developer experience for using complex and feature-rich systems under the hood. Security means invisible security, something that is built into the system from the beginning and is part of its DNA, something that is automatically applied to the underlying infrastructure and the software running on top, without the need for developers to worry about it too much. And choice: you should be able to choose the parts you want to use, for example your choice of infrastructure, cloud providers, or even the host operating system running on your machines. Everything here comes to life with Docker Enterprise Container Cloud; combined with Lens, it's the end-to-end solution for harnessing the power of Kubernetes and radically improving the speed of business.
All right, I hope you got the idea of how Lens plays together with the Mirantis offering at a high level. Now I'd like to talk about Lens features in more detail. Let's kick off with multi-cluster management. Unlike multi-cluster management systems designed for the ops and IT admin people, Lens is multi-cluster management from the developer's point of view. Add any number of Kubernetes clusters to get a quick and easy way to switch cluster context and access the workloads running on top. These clusters may be the ones provided by the ops and IT admin people, but they might also be clusters running locally, used in other projects, or used for hobby purposes. Clusters are added simply by importing the kubeconfig file and selecting the cluster context. Once added, it's fast and easy to switch between clusters. Since the only requirement for adding a cluster is a kubeconfig file, Lens works with any certified Kubernetes distribution for which the user has obtained a kubeconfig: for example Docker Enterprise Container Cloud, UCP, EKS, GKE, AKS, Rancher, OpenShift, Minikube, and many other flavors of Kubernetes all work straight out of the box. The great thing about Lens is that you get one unified IDE across all your clusters, no matter what the flavor is, and there is absolutely nothing you need to install in the cluster itself. That matters, because most of the developers we're talking about here do not have sufficient rights to install anything like this in their clusters. Since we're now talking about access control, let's discuss how role-based access control is taken into account in Lens. It's all about the Kubernetes built-in role-based access control. As you know, clusters may be configured to use any supported identity provider, and since Lens authenticates users to Kubernetes with the kubeconfig file, Kubernetes RBAC is automatically enforced. This is also reflected in the user interface: users will only see those resources they are allowed to access. Lens does not need admin-level privileges, service accounts, or any other solution that would bypass the Kubernetes RBAC.
Next, we have the smart terminal. Lens has a built-in smart terminal that comes with bundled command line tools such as kubectl and Helm. It's different from your native terminal, because the smart terminal will always have the kubectl command available, and it will automatically switch the version of kubectl to match the currently selected cluster's Kubernetes API. If an API-compatible version is not found, it will be downloaded automatically in the background. In addition to making sure you are always using the right version of kubectl, the smart terminal will automatically assign the kubeconfig context to match your currently selected Kubernetes cluster. As a summary: when you use Lens with the built-in smart terminal, you are always using the right version of kubectl and the right context. I feel there is still something more I want to share with you: visualizations. Lens is very visual, and there is a lot of detail in the user experience. One of the great features in Lens is the built-in integration with Prometheus to visualize everything. As you might know, people working on the ops and IT admin side of things have learned to write complex PromQL queries, and most likely they have created beautiful dashboards to look at the clusters from a bird's-eye perspective. If you are a developer, you are interested in your own stuff. The bird's-eye perspective might be nice, but it doesn't help you debug and troubleshoot your own application, and you don't necessarily have access to, or want to learn, Prometheus to write your own queries and out-of-context dashboards. That is why Lens automatically provides visualizations for all supported resource types, including aggregated usage. Developers, and to be honest, ops and IT admins too, get all the data they need, always in the right context. The basic metrics include CPU, memory and disk, with total capacity, actual usage, requests and limits. The ingress metrics include bytes sent, success and failure rates for requests, and response rates. Both sets of statistics also include network bytes sent and received. Persistent volume claim metrics include disk usage and capacity. Wow, that was a lot, and to be honest, we are just barely scratching the surface of the available features.
Let's move on and talk about Lens from the community and open source project perspective. We'll start with statistics, not because I like statistics in particular, but because this project has some mind-blowing stats to share. Let's remind ourselves that Lens was made open source just half a year ago. Since then: over 600,000 downloads, over 50,000 users, and over 8,000 stargazers on GitHub. The users come from all around the world. It's one of the fastest-trending open source projects on GitHub, and definitely in the Kubernetes ecosystem. It's the number one IDE, or UI, or whatever you want to call it, for Kubernetes, and if you are not using it yet, you're probably missing out on something great. What's coming next? We are working hard every day to make Lens better. Our focus, as the leader of this open source project, is to remain vendor neutral, to look for ways to collaborate with other vendors in the cloud native technology ecosystem, and to focus on the features that bring the most value to our users. Against this background, the near-future roadmap includes exciting features like the Extensions API. While the built-in features of Lens might feel great, they are just the beginning. The Lens Extensions API, a new feature released as part of Lens 4.0, will let you add custom visualizations and functionality to support your preferred development workflows. The Extensions API will provide options for extension creators to plug directly into the Lens UI, and we are already working with a number of cloud native technology ecosystem vendors to get their technologies deeply integrated, and therefore more accessible, through Lens. For example, an extension from a container image scanning technology vendor might add a warning icon next to a pod or a deployment where a vulnerable image is detected, and provide more details about the vulnerability when the pod or deployment is clicked. This is just a simple example, but I hope you get the idea, and really, this is just the beginning. We want to bring the entire Kubernetes ecosystem together in Lens. Through the Extensions API we will work on features to enhance Kubernetes developer workflows, both local and remote, enable teamwork, and naturally improve usability and fix bugs reported by our users. There are so many great things coming that it's impossible to list everything here; if you are interested, please take a look at the epics listed on our GitHub.
Once again, if you're not using Lens already, you're probably missing out on something great. Download it and get started today. For the most amazing end-to-end experience, check out Docker Enterprise Container Cloud as well. I wish you all a great time with Kubernetes, and I'm looking forward to meeting you all in person someday. Take care. Bye bye
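The "import a kubeconfig and select a context" flow that the talk keeps returning to can be sketched in a few lines. This is a minimal illustration only: a real kubeconfig is a YAML file, but the same structure is shown here as a plain dict so the sketch stays dependency-free, and the function names are ours, not Lens's.

```python
# A minimal sketch of "add a cluster by importing a kubeconfig and selecting
# a context". A real kubeconfig is YAML; the same structure is modeled as a
# plain dict here to keep the example dependency-free.

kubeconfig = {
    "current-context": "dev",
    "contexts": [
        {"name": "dev",  "context": {"cluster": "dev-cluster",  "user": "alice"}},
        {"name": "prod", "context": {"cluster": "prod-cluster", "user": "alice"}},
    ],
}

def list_contexts(cfg: dict) -> list:
    """The context names a UI like Lens could offer after an import."""
    return [c["name"] for c in cfg["contexts"]]

def switch_context(cfg: dict, name: str) -> dict:
    """Point the config at another cluster, as the smart terminal does."""
    if name not in list_contexts(cfg):
        raise KeyError(f"unknown context: {name}")
    cfg["current-context"] = name
    return cfg

print(list_contexts(kubeconfig))       # ['dev', 'prod']
switch_context(kubeconfig, "prod")
print(kubeconfig["current-context"])   # prod
```

Because Kubernetes RBAC is evaluated against the credentials carried in the kubeconfig, a tool built on this flow inherits the cluster's access control for free, which is exactly the property the talk highlights.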
Why Use IaaS When You Can Make Bare Metal Cloud-Native?
>>Hi, Oleg. So great of you to join us today. I'm really looking forward to our session. So let's get started. If I can get you to give a quick intro to yourself, and then if you can share with us what you're going to be discussing today? >>Hi, Jake. My name is Oleg Elbow. I'm a product architect on the Docker Enterprise Container Cloud team. Today I'm going to talk about running Kubernetes on bare metal with Container Cloud. My goal is to tell you about this exciting feature, why we think it's important, and what we actually did to make it possible. >>Brilliant. Thank you very much. So let's get started. From my understanding, Kubernetes clusters are typically run in virtual machines in clouds. So, for example, a public cloud like AWS, or a private cloud, maybe OpenStack-based or VMware vSphere. So why would you go off and run it on bare metal? >>Well, Docker Enterprise Container Cloud can already run Kubernetes in the cloud, as you know, and the idea behind Container Cloud is to enable us to manage multiple Docker Enterprise clusters. But we want to bring innovation to Kubernetes, and instead of spending a lot of resources on the hypervisor and virtual machines, we just go all in for Kubernetes directly on bare metal. >>Fantastic. So it sounds like you're suggesting then to run Kubernetes directly on bare metal. >>That's correct. >>Fantastic, and without a hypervisor layer. >>Yes. We all know the reasons to run Kubernetes in virtual machines: in the first place, it's mutual isolation of workloads. But virtualization comes with a performance hit and additional complexity. And when you run Kubernetes directly on the hardware, it's a perfect opportunity for developers: they can see a performance boost of up to 30% for certain container workloads.
This is because the virtualization layer adds a lot of overhead, and even with enhanced placement awareness technologies like NUMA or processor pinning, it's still overhead. By skipping the virtualization, we just remove this overhead and gain this boost. >>Excellent. A 30% performance boost sounds very appealing. Are there any other value points or positive points that you can pull out? >>Yes. Besides the hypervisor overhead, virtual machines also have a static resource footprint. They take up memory and CPU cycles, and overall reduce the density of containers per host. Without virtual machines, you can run up to 16% more containers on the same host. >>Excellent. Really great numbers there. >>One more thing to point out: direct use of bare metal makes it easier to use special-purpose hardware, like graphics processors, virtual network functions for network interfaces, or field-programmable gate arrays for custom circuits, and you can share them between containers more efficiently. >>Excellent. I mean, there's some really great value points you pulled out there. So a 30% performance boost, a 60% density boost, and it can support specialized hardware a lot easier. But let's talk now about the applications. So what sort of applications do you think would benefit from this the most? >>Well, I'm thinking primarily high performance computing and deep learning will benefit, which are more common than you might think, now that artificial intelligence is creeping into a lot of different applications. It really depends on memory capacity and performance, and they also use special devices like FPGAs for custom circuits. All of it is applicable to machine learning, really. >>And I mean, that whole AI piece is really exciting. And we're seeing this become more commonplace across a whole host of sectors.
So telcos, pharma, banking, et cetera, and not just IT today. >>Yeah, that's indeed very exciting. But creating Kubernetes clusters on bare metal, unfortunately, is not very easy. >>So it sounds like there may be some challenges or complexities around it, and this is, I guess, the reason why there are not many products out there today for Kubernetes on bare metal. Could you talk to us then about some of the challenges that this might entail? >>Well, there are quite a few challenges. First and foremost, there is no one way to manage bare metal infrastructures nowadays. Many vendors have their own solutions that are not always compatible with each other and do not necessarily cover all aspects of this. So we've adopted an open source project called MetalKube (Metal³) and integrated it into Docker Enterprise Container Cloud to do this unified bare metal management for us. >>And you mentioned, did I hear you say, that it is open source? >>The core project is open source. We added a lot of our special sauce to it. What it does, basically, is enable us to manage hardware servers just like cloud server instances. >>I mean, that's very interesting, but could you go into a bit more detail? Specifically, what do you mean by "as cloud instances"? >>Of course. Generally, it means managing them through some sort of API, or programming interface. This interface has to cover all aspects of the server life cycle, like hardware configuration, operating system management, network configuration, and storage configuration. With the help of MetalKube, we extend the Kubernetes API to enable it to manage bare metal hosts and all these aspects of their life cycle. The MetalKube project uses OpenStack Ironic and wraps it in the Kubernetes API, and Ironic does all the heavy lifting of provisioning. It does it in a very cloud native way.
It configures servers using cloud-init, which is very familiar to anyone who deals with the cloud, and the power is managed transparently through the IPMI protocol. It does a lot to hide the differences between different hardware hosts from the user, and in Docker Enterprise Container Cloud we made everything so that the user doesn't really feel the difference between a bare metal server and a cloud VM. >>So, Oleg, are you saying that you can actually take a machine that's turned off and turn it on using these commands? >>That's correct. That's IPMI, the Intelligent Platform Management Interface. It gives you the ability to interact directly with the hardware. You can manage and monitor things like power consumption, temperature, voltage, and so on. But what we use it for is to manage the boot source and the actual power state of the server. So we have a group of servers that are available, and we can turn them on when we need them, just as if we were spinning up VMs. >>Excellent. So that's how you get around the fact that while all cloud VMs are the same, the hardware is all different. But I would assume you would have different server configurations in one environment, so how would you get around that? >>Yeah, that's an excellent question. Some elements of the bare metal management API that we developed are there specifically to enable operators to handle a wider range of hardware configurations. For example, we make it possible to configure multiple network interfaces on the host. We support flexible partitioning of hard disks and other storage devices. We also make it possible to boot remotely using the Unified Extensible Firmware Interface for modern systems, or just good old BIOS for the legacy ones. >>Excellent. Thanks for sharing that. Now let's take a look at the rest of the infrastructure.
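The IPMI power control described here can be sketched with a small helper that builds an `ipmitool` command line. This is a generic illustration of standard `ipmitool` usage, not Container Cloud's actual implementation; the host address and credentials in the usage comment are placeholders.

```python
def ipmi_power_cmd(host: str, user: str, password: str, action: str) -> list[str]:
    """Build an ipmitool invocation for a chassis power action over the LAN."""
    if action not in ("on", "off", "cycle", "status"):
        raise ValueError(f"unsupported power action: {action}")
    # -I lanplus selects the RMCP+ LAN interface; -H/-U/-P point at the BMC
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
            "chassis", "power", action]

# An operator script could then run, e.g.:
#   subprocess.run(ipmi_power_cmd("10.0.0.5", "admin", "secret", "on"), check=True)
```

This is the kind of call a provisioning system issues to power a host on before network-booting it.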
So what about things like networking and storage? How are those managed? >>Ah, Jake, those are some important details. From the networking standpoint, the most important thing for Kubernetes is load balancing. We use some proven open source technologies, such as NGINX and MetalLB, to handle that for us. And for the storage, that's a bit more of a tricky part. There are a lot of different storage solutions out there, so we decided to go with Ceph and the Rook operator for Ceph. Ceph is a very mature and stable distributed storage system with incredible scalability; we actually run pretty big clusters in production with Ceph. And Rook makes the life cycle management for Ceph very robust and cloud native, with health checking and self-correction, that kind of stuff. So in any Kubernetes cluster that Docker Enterprise Container Cloud provisions on bare metal, you can potentially have Ceph installed, providing storage that is accessible from any node in the cluster to any pod in the cluster. So that's our cloud native storage component. >>Wonderful. But would that then mean that you'd have to have additional hardware, so more hardware for the storage cluster, then? >>Not at all. Actually, we use a converged storage architecture in Docker Enterprise Container Cloud: the workloads and Ceph share the same machines and are actually managed by the same Kubernetes cluster. At some point in the future, we plan to add even more flexibility to this Ceph configuration and enable shared Ceph, where all Kubernetes clusters will use a single Ceph backend, and that's another way for us to optimize hardware usage. >>Excellent. So thanks for covering the infrastructure part. What would be good is if we can get an understanding of the kind of look and feel for the operators and the users of the system. So what can they see?
We know Doc Enterprise Container Cloud provides a web based user interface that is, uh, but enables to manage clusters. And the bare metal management actually is integrated into this interface and provides provides very smooth user experience. A zone operator, you need to add or enrolled governmental hosts pretty much the same way you add cloud credentials for any other for any other providers for any other platforms. >>Excellent. I mean, Oleg, it sounds really interesting. Would you be able to share some kind of demo with us? It be great to see this in action. Of >>course. Let's let's see what we have here. So, >>uh, thank you. >>Uh, so, first of all, you take a bunch of governmental service and you prepare them, connect and connect them to the network is described in the dogs and bootstrap container cloud on top of these, uh, three of these bare metal servers. Uh, once you put through, you have the container cloud up and running. You log into the u I. Let's start here. And, uh, I'm using the generic operator user for now. Its's possible to integrate it with your in the entity system with the customer and the entity system and get real users there. Mhm. So first of all, let's create a project. It will hold all off our clusters. And once we created it, just switched to it. And the first step for an operator is to add some burr metal hosts of the project. As you see it empty, uh, toe at the berm. It'll host. You just need a few parameters. Uh, name that will allow you to identify the server later. Then it's, ah, user name and password to access the IBM. My controls off the server next on, and it's very important. It's the hardware address off the first Internet port. It will be used to remotely boot the server over network. Uh, finally, that Z the i p address off the i p m i n point and last, but not the least. It's the bucket, uh, toe Assign the governmental host to. It's a label that is assigned to it. 
Right now we offer just three default labels, or buckets: manager hosts, worker hosts, and storage hosts. Depending on the hardware configuration of the server, you assign it to one of these three groups; you will see how it's used later on. Note that at least six servers are required to deploy a managed Kubernetes cluster, just as for the cloud providers. There is some information available now about the servers as the result of inspection; by the way, you can look it up. Now we move on to creating a cluster. You need to provide a name for the cluster and select the release of Docker Enterprise Engine. The next step is the provider-specific information: you specify the address of the cluster API endpoint here, and the range of addresses for the services that will be installed in the cluster for the user workloads. The Kubernetes network parameters can be changed as well, but the defaults are usually okay. Now you can enable or disable StackLight, the monitoring system for the Kubernetes cluster, and provide some custom parameters to it. Finally, you click Create to create the cluster. It's an empty cluster that we need to add some machines to. We need at least three manager nodes. The form is very simple: you just select the role of the Kubernetes node, either manager or worker, and you select the label bucket from which the bare metal host will be picked. We go with the manager label for manager nodes and the worker label for the workers. While the cluster is deploying, let's check out some machine information. The storage data here, the names of the disks, are taken from the bare metal host hardware inspection data that we checked before. Now we wait for the servers to be deployed; that includes the operating system and Kubernetes itself. So, yeah, that's our user interface.
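The enrollment parameters from the demo (a name, IPMI credentials, the MAC address of the first NIC for network boot, the IPMI IP, and a bucket label) map naturally onto a declarative host object. In the open source Metal³/MetalKube project this is the `BareMetalHost` custom resource; the sketch below follows that generic schema with placeholder values, and is not necessarily the exact resource Container Cloud creates.

```python
# A Metal³-style BareMetalHost object, expressed as a plain Python dict.
bare_metal_host = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {
        "name": "worker-01",
        "labels": {"bucket": "worker"},          # bucket label, as in the demo
    },
    "spec": {
        "online": True,                          # power the host on
        "bootMACAddress": "52:54:00:aa:bb:cc",   # first NIC, used for network boot
        "bmc": {
            "address": "ipmi://10.0.0.5",        # IPMI endpoint of the host
            "credentialsName": "worker-01-bmc-secret",  # Secret holding user/password
        },
    },
}
```

Submitting an object like this to the cluster is what "managing hardware servers just like cloud server instances" amounts to in practice: the operator reconciles the declared state against the physical machine.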
If operators need to, they can actually use the Docker Enterprise Container Cloud API for some more sophisticated configurations, or to integrate with an external system, for example a configuration database. All the bare metal tasks can be executed through the Kubernetes API by changing the custom resources describing the bare metal hosts and other objects. >>Mhm, brilliant. Well, thank you for bringing that to life. It's always good to see it in action. I guess from my understanding, it looks like the operators can use the same tools as DevOps or developers, but for managing their infrastructure, then? >>Yes, exactly. For example, if you're in DevOps and you use Lens to monitor and manage your cluster, the bare metal resources are just another set of custom resources for you. It is possible to visualize and configure them through Lens or any other developer tool for Kubernetes. >>Excellent. So from what I can see, that really could bridge the gap then between infrastructure operators and DevOps and developer teams, which is a big thing? >>Yes, one of our aspirations is to unify the user experience, because we've seen a lot of situations where the infrastructure is operated by one set of tools, and the container platform is agnostic of it and offers end users a completely different set of tools. So as a DevOps engineer, you have to be proficient in both, and that's not very sustainable for some DevOps teams. >>Sure. Okay, well, thanks for covering that. That's great. I mean, there's obviously other container platforms out there in the market today. It would be great if you could explain one or some of the differences there, and how Docker Enterprise Container Cloud approaches bare metal. >>Yeah, that's an excellent question, Jake. Thank you.
So, in Container Cloud, unlike other container platforms, bare metal management is tightly integrated into the product. It's integrated at the UI and API level, and at the backend implementation level. Other platforms typically rely on the user to provision the bare metal hosts before they can deploy Kubernetes on them. This leaves the operating system management, hardware configuration, and hardware management mostly with a dedicated infrastructure operators team. Docker Enterprise Container Cloud might help to reduce this burden and these infrastructure management costs by automating it and effectively removing that part of the responsibility from the infrastructure operators. And that's because Container Cloud on bare metal is essentially a full-stack solution. It covers the hardware configuration, the operating system life cycle management, and especially the security (CVE) updates. Right now, the only out-of-the-box operating system that we support is Ubuntu. We're looking to expand this, and, as you know, Docker Enterprise Engine makes it possible to run Kubernetes on many different platforms, including even Windows. And we plan to leverage this flexibility in Docker Enterprise Container Cloud to the full extent, to expand the range of operating systems that we support. >>Excellent. Well, Oleg, we're running out of time, unfortunately. I mean, I've thoroughly enjoyed our conversation today. You've pulled out some excellent points: you talked about potentially up to a 30% performance boost and up to a 60% density boost. You've also talked about how it can help with specialized hardware and make that a lot easier. We also talked about some of the challenges that you can solve by using Docker Enterprise Container Cloud, such as persistent storage and load balancing.
There's obviously a lot here, but thank you so much for joining us today. It's been fantastic, and I hope that we've given some food for thought to go out and try deploying Kubernetes on bare metal. So thanks, Oleg. >>Thank you for coming, Jake.
ON DEMAND: MIRANTIS OPENSTACK ON K8S
>> Hi, I'm Adrienne Davis, Customer Success Manager on the CFO side of the house at Mirantis. With me today is Artem Andreev, Product Manager and expert, who's going to enlighten us today. >> Hello everyone. It's great to have all of you listening to our discussion today. So my name is Artem Andreev. I'm a Product Manager for the Mirantis OpenStack line of products. That includes the current product line as well as the next generation product line that we're about to launch quite soon. And actually, this is going to be the topic of our presentation today. So the new product that we are very, very, very excited about, and that is going to be launched in a matter of several weeks, is called Mirantis OpenStack on Kubernetes. For those of you who have been with Mirantis quite a while already, Mirantis OpenStack on Kubernetes is essentially a reincarnation of our Mirantis Cloud Platform version one, as we call it these days. So the thing has reincarnated into something more advanced, more robust, and altogether modern, that provides the same, if not more, value to our customers, but packaged in a different shape. And well, we're very excited about this new launch, and we would like to share this excitement with you, of course. As you might know, a few months ago, Mirantis acquired Docker Enterprise together with the advanced Kubernetes technology that Docker Enterprise provides. And we made this technology a piece and parcel of our product suite, and this naturally includes Mirantis OpenStack on Kubernetes as well, since it is a part of our product suite. And well, the Kubernetes technology in question, we call Docker Enterprise Container Cloud these days; I'm going to refer to this name a lot over the course of the presentation. So I would like to split today's discussion into several major parts. So for those of you who do not know what OpenStack is in general, a quick recap might be helpful to understand the value that it provides.
I will discuss why someone still needs OpenStack in 2020. We will talk about what a modern OpenStack distribution is supposed to do, the expectations that are there. And of course, we will go into a bit of detail on how exactly Mirantis OpenStack on Kubernetes works, and how it helps to deploy and manage OpenStack clouds. >> So set the stage for me here. What's the base environment we're trying to get to? >> So what is OpenStack? One can think of OpenStack as a free and open source alternative to VMware, and it's a fair comparison. So OpenStack, just as VMware, operates primarily on virtual machines. It gives you, as a user, a clean and crisp interface to launch a VM, to configure the virtual networking and plug this VM into it, to configure and provision virtual storage to attach to your VM, and to do a lot of other things that a modern application actually requires to run. So the idea behind OpenStack is that you have a clean and crisp API exposed to you as a user, and all the little details and nuances of the physical infrastructure configuration and provisioning that need to happen for the virtual application to work are hidden, and spread across the multiple components that comprise OpenStack per se. So, compared again to VMware, the functionality is pretty much similar, but actually OpenStack can do much more than just VMs, and it does that, frankly speaking, at a much lower price, if we do the comparison. So what does OpenStack have to offer? Naturally, the virtualization, networking, and storage systems; that's just the basic entry-level functionality. But what comes with it is the identity and access management features, a graphical user interface together with the CLI and command line tools to manage the cloud, orchestration functionality to deploy your application in the form of templates, the ability to manage bare metal machines, and of course some nice and fancy extras like DNS as a service, metering, secret management, and load balancing. And frankly speaking, OpenStack can actually do even more, depending on the needs that you have.
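The orchestration templates mentioned above are Heat templates. As a rough illustration, a minimal HOT (Heat Orchestration Template) that boots a single VM has the shape below; the flavor, image, and network names are placeholders.

```python
# A minimal Heat Orchestration Template, expressed as a Python dict
# (the same structure would normally be written as YAML).
hot_template = {
    "heat_template_version": "2018-08-31",
    "description": "Minimal Heat template: one VM on one network",
    "resources": {
        "my_server": {
            "type": "OS::Nova::Server",          # a Nova (compute) instance
            "properties": {
                "flavor": "m1.small",                 # placeholder flavor name
                "image": "ubuntu-20.04",              # placeholder image name
                "networks": [{"network": "private"}], # placeholder network name
            },
        },
    },
}
```

Feeding a template like this to the orchestration service is how "deploying your application in the form of templates" works: Heat creates and wires up every resource the template declares.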
And frankly speaking, OpenStack can actually do even more, depending on the needs that you have. >> We hear so much about containers today. Do applications even need VMs anymore? Can't Kubernetes provide all these services? And even if IaaS is still needed, why would one bother with building their own private platform, if there's a wide choice of public solutions for virtualization, like Amazon web services, Microsoft Azure, and Google cloud platform? >> Well, that's a very fair question. And you're absolutely correct. So the whole trend (audio blurs) as the States. Everybody's talking about containers, everybody's doing containers, but to be realistic, yes, the market still needs VMs. There are certain use cases in the modern world. And actually these use cases are quite new, like 5G, where you require high performance in the networking for example. You might need high performance computing as well. So when this takes quite special hardware and configuration to be provided within your infrastructure, that is much more easily solved with the Vms, and not containers. Of course not to mention that, there are still legacy applications that you need to deal with, and that well, they have just switched from the server-based provision into VM-based provision, and they need to run somewhere. So they're not just ready for containers. And well, if we think about, okay, VMs are still needed, but why don't I just go to a public infrastructure as a service provider and run my workloads there? Now if you can do that, but well, you have to be prepared to pay a lot of money, once you start running your workloads at scale. So public IaaSes, they actually tend to hit your pockets heavily. And of course, if you're working in a highly regulated area, like enterprises cover (audio blurs) et cetera, so you have to comply with a lot of security regulations and data placement regulations. And well, public IaaSes, let's be frank, they're not good at providing you with this transparency. 
So you need to have full control over your whole stack, starting from the hardware to the very, very top. And this is why private infrastructure as a service is still a theme these days, and I believe that it's going to be a theme for at least five years more, if not longer. >> So if private IaaSes are useful and in demand, why doesn't Mirantis just stick to the OpenStack that we already have? Why did we decide to build a new product, rather than keep selling the current one? >> Well, to answer this question, first we need to see what our customers believe a modern infrastructure-as-a-service platform should be able to provide, and we've compiled this list into five criteria. Naturally, a private IaaS needs to be reliable and robust, meaning that whatever happens underneath the API should not impact the business-generating workloads, or should impact them as little as possible; this is a must. The platform needs to be secure and transparent, going back to the idea of working in highly regulated areas; this is again a table stake to enter the enterprise market. The platform needs to be simple to deploy (audio blurs), 'cause well, as an operator, you should not be thinking about the internals, but trying to focus on enabling your users with the best possible experience. Updates: updates are very important. The platform needs to keep up with the latest software patches, bug fixes, and of course features, and upgrading to a new version must not take weeks or months, and must have as little impact on the running workloads as possible. And of course, to be able to run modern applications, the platform needs to provide a comparable set of services, just as a public cloud does, so that you can move your application between the private and the public cloud without having to change it severely; the so-called feature parity needs to be there.
And if we look at the architecture of OpenStack, and we know OpenStack is powerful, it can do a lot, we've just discussed that, right? But the architecture of OpenStack is known to be complex. And well, tell me, how would you enable the robustness and reliability in this complex system? It's not easy, right? And actually, this diagram shows just about a third part of a modern, up-to-date OpenStack cloud. So it's just a little illustration; it's not the whole picture. So imagine how hard it is to make a very solid platform out of this architecture. And naturally, this also imposes some challenges to providing the transparency and security, 'cause well, the more complex the system is, the harder it is to manage, and the harder it is to see what's on the inside. And well, upgrades, yeah. One of the biggest lessons that we learned from our previous history is that many of our customers preferred to stay on an older version of OpenStack, just because, well, they were afraid of upgrades, 'cause they saw upgrades as time-consuming and risky. And well, instead of just switching to the latest and greatest software, they preferred reliability by sticking to the old stuff. Why? Well, 'cause that implied a certain impact on their workloads, and an upgrade required thorough planning and execution, just to be as riskless as possible. And we are solving all of these challenges of managing a system as complex as OpenStack with Kubernetes. >> So how does Kubernetes solve these problems? >> Well, we look at OpenStack as a typical microservice-architecture application that is organized into multiple little moving parts, daemons, that are connected to each other and that talk to each other through standard APIs. And altogether, that feels like a very good fit to run on top of a Kubernetes cluster, because many of the modern applications follow exactly the same pattern.
>> How exactly did you put OpenStack on Kubernetes? >> Well, that's not easy, I'm going to be frank with you. And if you look at the architectural diagram, this is a stack of Mirantis products represented with a focus, of course, on Mirantis OpenStack as the central part. So what you see in the middle, shown in pink, is Mirantis OpenStack on Kubernetes itself. And of course, around that are the supporting components that need to be there to run OpenStack on Kubernetes successfully. So on the very bottom there is hardware: networking, storage, and computing hardware that somebody needs to configure, provision, and manage, to be able to deploy the operating system on top of it. And this is just another layer of complexity that abstracts Mirantis OpenStack on Kubernetes from the underlay. So once we have the operating system there, there needs to be a Kubernetes cluster deployed and managed. And as I mentioned previously, we are using the capabilities that this Kubernetes cluster provides to run OpenStack itself, the control plane, that way, because everything in Mirantis OpenStack on Kubernetes is a container, whatever you can think of. Of course, naturally, it doesn't sound like an easy task to manage this multi-layered pie. And this is where Docker Enterprise Container Cloud comes into play, 'cause this is our single pane of glass into day-one and day-two operations for the hardware itself, for the operating system, and for Docker Enterprise Kubernetes. So it solves the need to have this underlay ready and prepared. And once the underlay is there, you go ahead and deploy Mirantis OpenStack on Kubernetes, just as another Kubernetes application, following the same practices and tools as you use with any other applications. So naturally, of course, once you have OpenStack up and running, you can use it to give your users the ability to create their own private little Kubernetes clusters inside OpenStack projects.
And this is one of the major use cases for OpenStack these days, again, being an underlay for containers. So if we look at the operator experience, how does it look for a human operator who is responsible for the deployment and management of the cloud to deal with Mirantis OpenStack on Kubernetes? So first, you deploy Docker Enterprise Container Cloud, and you use the built-in capabilities that it provides to provision your physical infrastructure: you discover the hardware nodes, you deploy the operating system there, you configure the network interfaces and storage devices there, and then you deploy a Kubernetes cluster on top of that. This Kubernetes cluster is going to be dedicated to Mirantis OpenStack on Kubernetes itself. So it's a special (indistinct) general-purpose thing that, well, is dedicated to OpenStack. And that means that inside of this cluster there are a bunch of lifecycle management modules running as Kubernetes operators. So OpenStack itself has its own LCM module, or operator. There is a dedicated operator for Ceph, 'cause Ceph is our major storage solution these days that we integrate with. Naturally, there is a dedicated lifecycle management module for StackLight. StackLight is our logging, monitoring, and alerting solution for OpenStack on Kubernetes that we bundle together with the whole product suite. So you talk to these Kubernetes operators, directly through kubectl commands or through the APIs that are provided by Docker Enterprise Container Cloud as a part of it, to deploy the OpenStack, Ceph, and StackLight clusters one by one, and connect them together. So instead of dealing with hundreds of YAML files, it's five definitions, five specifications that you're supposed to provide these days, and that's it. And all the day-two management is performed through these same APIs, just as easily as the deployment. >> All of this assumes that OpenStack has containers.
Now, Mirantis was containerizing back long before Kubernetes even came along. Why did we think this would be important? >> That is true. Well, we've been containerizing OpenStack for quite a while already; it's not a new thing at all. However, it's the way that we deploy OpenStack as a Kubernetes application that matters, 'cause Kubernetes solves a whole bunch of challenges that we used to deal with in MCP1, when deploying OpenStack on top of bare operating systems as packages. So, naturally, Kubernetes allows us to achieve reliability through the self-(audio blurs) and auto-scaling mechanisms. So you define a bunch of policies that describe the behavior of the OpenStack control plane, and Kubernetes follows these policies when things happen, without any need for human interaction. Isolation of the dependencies of OpenStack services within Docker images is a good thing, 'cause previously we had to deal with packages and conflicts between the versions of different libraries. Now we just ship everything together as a Docker image. And rolling updates are an advanced feature that Kubernetes provides natively, so updating OpenStack has never been as easy as with Kubernetes. Kubernetes also provides some fancy building blocks for networking, like load balancing, and of course tunnels and service meshes. They're also quite helpful when dealing with such a complex application as OpenStack, where things need to talk to each other without any problem in the configuration. Helm also plays a great role here; it is actually our tool for Kubernetes. We're using the Helm charts that are provided for OpenStack upstream as our low-level layer of logic to deploy OpenStack services and connect them to each other. And naturally, automatic scale-up of the control plane.
So adding a node is easy: you just add a new Kubernetes worker with a bunch of labels there, and well, it handles the distribution of the necessary services automatically. Naturally, there are certain drawbacks. These fancy features come at a cost. Human operators need to understand Kubernetes and how it works. But this is also a good thing, because everything is moving towards Kubernetes these days, so you would have to learn it at some point anyway. So you can use this as a chance to bring yourself to the next level of knowledge. OpenStack is not a 100% cloud-native application by itself. Unfortunately, there are certain components that are stateful, like databases, or Nova compute services, or Open vSwitch daemons, and those have to be dealt with very carefully when doing upgrades, updates, and the whole deployment. So there's extra lifecycle management logic built in that handles these components carefully for you. So, a bit of complexity we had to add. And naturally, Kubernetes requires resources itself to run. So you need to have these resources available and dedicated to the Kubernetes control plane to be able to control your application, that is, OpenStack. So a bit of investment is required. >> Can anybody just containerize OpenStack services and get these benefits? >> Well, yes, the idea is not new; there's a bunch of upstream, open source community projects doing pretty much the same thing. So we are not inventing a rocket here, let's be fair. However, it's the way that Kubernetes cooks OpenStack that gives you the robustness and reliability that enterprise and, like, big customers actually need. And we're doing a great deal of work automating all the possible day-two workflows and all these caveats and complexities of OpenStack management inside our products. Okay, at this point, I believe we shall wrap this discussion up a bit. So let me conclude for you.
So OpenStack is an open source infrastructure-as-a-service platform that still has its niche in the 2020s, and it's going to have its niche for at least five more years. OpenStack is a powerful but very complex tool. And the complexities of OpenStack and OpenStack lifecycle management are successfully solved by Mirantis through the capabilities of our Kubernetes distribution, which provides us with all the necessary primitives to run OpenStack as just another containerized application these days.
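To make the "five specifications instead of hundreds of YAML files" idea from this segment concrete, here is a minimal sketch of how such declarative documents might be built. The API group, kinds, and field names are illustrative assumptions, not the actual Mirantis CRD schema.

```python
import json

def make_spec(kind, name, body):
    """Build a minimal Kubernetes-style custom-resource document."""
    return {
        "apiVersion": "example.mirantis.com/v1alpha1",  # illustrative group, not the real one
        "kind": kind,
        "metadata": {"name": name},
        "spec": body,
    }

# One declarative document per lifecycle-management module mentioned in the talk.
specs = [
    make_spec("OpenStackDeployment", "osdpl", {"preset": "compute", "size": "small"}),
    make_spec("CephCluster", "ceph", {"pools": ["volumes", "images"]}),
    make_spec("StackLight", "stacklight", {"alerting": True, "logging": True}),
    make_spec("ManagementCluster", "mgmt", {"provider": "baremetal"}),
    make_spec("MachinePool", "workers", {"replicas": 3}),
]

# An operator watches these documents and reconciles the cluster toward them;
# day-two changes are edits to the same five documents, applied via kubectl or an API.
for s in specs:
    print(s["kind"], "->", json.dumps(s["spec"]))
```

The point of the sketch is the shape of the workflow: the operator's input is a handful of small declarative documents rather than a sprawling set of configuration files.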
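The reliability mechanism described above (you declare policies for the control plane, and Kubernetes acts on them "without any need for human interaction") is the reconciliation loop at the heart of the operator pattern. A toy sketch, with service names standing in for OpenStack control-plane components:

```python
# Desired state is the declared policy; observed state is what actually runs.
desired = {"keystone-api": 3, "nova-api": 3, "glance-api": 2}
observed = {"keystone-api": 3, "nova-api": 1, "glance-api": 2}  # e.g. after a node failure

def reconcile(desired, observed):
    """Return the corrective actions a controller would take on this pass."""
    actions = []
    for svc, want in desired.items():
        have = observed.get(svc, 0)
        if have < want:
            actions.append(("scale-up", svc, want - have))
        elif have > want:
            actions.append(("scale-down", svc, have - want))
    return actions

for action in reconcile(desired, observed):
    print(action)
# A real controller repeats this loop indefinitely, which is why recovery
# from failures happens automatically, with no operator in the loop.
```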
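The "extra lifecycle management logic" for stateful components mentioned above boils down to choosing a different update strategy per service: ordinary rolling updates for stateless APIs, a careful one-at-a-time path for databases, Nova compute, and Open vSwitch agents. The classification below is an illustrative sketch, not the product's actual logic.

```python
# Illustrative set of services that need careful, one-at-a-time handling.
STATEFUL = {"mariadb", "nova-compute", "openvswitch-agent"}

def update_plan(services):
    """Split services into the two update strategies discussed in the talk."""
    plan = {"rolling": [], "careful_one_by_one": []}
    for svc in services:
        key = "careful_one_by_one" if svc in STATEFUL else "rolling"
        plan[key].append(svc)
    return plan

plan = update_plan(["keystone-api", "mariadb", "nova-compute", "horizon"])
print(plan)
# Built-in lifecycle modules apply this kind of split so the human operator
# does not have to sequence the risky, stateful updates by hand.
```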
Dave Van Everen, Mirantis | Mirantis Launchpad 2020 Preview
>> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hey, welcome back. Jeff here with the Cube in our Palo Alto studios today, and we're excited. You know, we're slowly coming out of the summer season, and we're getting ready to jump back into the fall season. Of course, it's still COVID, and everything is still digital. But you know, what we're seeing is that digital events allow a lot of things that you couldn't do in the physical space, mainly getting a lot more people to attend who don't have to get on airplanes and fly all over the country. So to preview this brand-new inaugural event that's coming up in about a month, we have a new guest. He's Dave Van Everen, the senior vice president of marketing for Mirantis. Dave, great to see you. >>
We're excited to put this together and invite the entire cloud native industry to join us and learn about some of the things that Mantis has been working on in recent months. A zwelling as some of the interesting things that are going on in the Cloud native and kubernetes community >>Great. So it's the inaugural event is called Moran Sous launchpad 2020. The Wares and the Winds in September 16th. So we're about a month away and it's all online is their registration. Costars is free for the community. >>It's absolutely free. Eso everyone is welcome to attend You. Just visit Miranda's dot com and you'll see the info for registering for the event and we'd love it. We love to see you there. It's gonna be a fantastic event. We have multiple tracks catering to developers, operators, general industry. Um, you know, participants in the community and eso we'd be happy to see you on join us on and learn about some of the some of the things we're working on. >>That's awesome. So let's back up a step for people that have been paying as close attention as they might have. Right? So you guys purchase, um, assets from Docker at the end of last year, really taken over there, they're they're kind of enterprise solutions, and you've been doing some work with that. Now, what's interesting is we we cover docker con, um, A couple of months ago, a couple three months ago. Time time moves fast. They had a tremendously successful digital event. 70,000 registrants, people coming from all over the world. I think they're physical. Event used to be like four or 5000 people at the peak, maybe 6000 Really tremendous success. But a lot of that success was driven, really by the by the strength of the community. The docker community is so passionate. And what struck me about that event is this is not the first time these people get together. 
You know, this is not ah, once a year, kind of sharing of information and sharing ideas, but kind of the passion and and the friendships and the sharing of information is so, so good. You know, it's a super or, um, rich development community. You guys have really now taken advantage of that. But you're doing your Miranda's thing. You're bringing your own technology to it and really taking it to more of an enterprise solution. So I wonder if you can kind of walk people through the process of, you know, you have the acquisition late last year. You guys been hard at work. What are we gonna see on September 16. >>Sure, absolutely. And, you know, just thio Give credit Thio Docker for putting on an amazing event with Dr Khan this year. Uh, you know, you mentioned 70,000 registrants. That's an astounding number. And you know, it really is a testament thio. You know, the community that they've built over the years and continue to serve eso We're really, really happy for Docker as they kind of move into, you know, the next the next path in their journey and, you know, focus more on the developer oriented, um, solution and go to market. So, uh, they did a fantastic job with the event. And, you know, I think that they continue toe connect with their community throughout the year on That's part of what drives What drove so many attendees to the event assed faras our our history and progress with with Dr Enterprise eso. As you mentioned mid November last year, we did acquire Doctor Enterprise assets from Docker Inc and, um, right away we noticed tremendous synergy in our product road maps and even in the in the team's eso that came together really, really quickly and we started executing on a Siris of releases. Um that are starting Thio, you know, be introduced into the market. Um, you know, one was introduced in late May and that was the first major release of Dr Enterprise produced exclusively by more antis. And we're going to announce at the launch pad 2020 event. 
Our next major release of the Doctor Enterprise Technology, which will for the first time include kubernetes related in life cycle management related technology from Mirant is eso. It's a huge milestone for our company. Huge benefit Thio our customers on and the broader user community around Dr Enterprise. We're super excited. Thio provide a lot of a lot of compelling and detailed content around the new technology that will be announcing at the event. >>So I'm looking at the at the website with with the agenda and there's a little teaser here right in the middle of the spaceship Docker Enterprise Container Cloud. So, um, and I glanced into you got a great little layout, five tracks, keynote track D container track operations and I t developer track and keep track. But I did. I went ahead and clicked on the keynote track and I see the big reveal so I love the opening keynote at at 8 a.m. On the 76 on the September 16th is right. Um, I, Enel CEO who have had on many, many times, has the big reveal Docker Enterprise Container Cloud. So without stealing any thunder, uh, can you give us any any little inside inside baseball on on what people should expect or what they can get excited about for that big announcement? >>Sure, absolutely so I definitely don't want to steal any thunder from Adrian, our CEO. But you know, we did include a few Easter eggs, so to speak, in the website on Dr Enterprise. Container Cloud is absolutely the biggest story out of the bunch eso that's visible on the on the rocket ship as you noticed, and in the agenda it will be revealed during Adrian's keynote, and every every word in the product name is important, right? So Dr Enterprise, based on Dr Enterprise Platform Container Cloud and there's the new word in there really is Cloud eso. 
I think, um, people are going to be surprised at the groundbreaking territory that were forging with with this release along the lines of a cloud experience and what we are going to provide to not only I t operations and the Op Graders and Dev ops for cloud environment, but also for the developers and the experience that we could bring to developers As they become more dependent on kubernetes and get more hands on with kubernetes. We think that we're going thio provide ah lot of ways for them to be more empowered with kubernetes while at the same time lowering the bar, the bar or the barrier of entry for kubernetes. As many enterprises have have told us that you know kubernetes can be difficult for the broader developer community inside the organization Thio interact with right? So this is, uh, you know, a strategic underpinning of our our product strategy. And this is really the first step in a non going launch of technologies that we're going to make bigger netease easier for developing. >>I was gonna say the other Easter egg that's all over the agenda, as I'm just kind of looking through the agenda. It's kubernetes on 80 infrastructure multi cloud kubernetes Miranda's open stack on kubernetes. So Goober Netease plays a huge part and you know, we talk a lot about kubernetes at all the events that we cover. But as you said, kind of the new theme that we're hearing a little bit more Morris is the difficulty and actually managing it so looking, kind of beyond the actual technology to the operations and the execution in production. And it sounds like you guys might have a few things up your sleeve to help people be more successful in in and actually kubernetes in production. >>Yeah, absolutely. So, uh, kubernetes is the focus of most of the companies in our space. Obviously, we think that we have some ideas for how we can, you know, really begin thio enable enable it to fulfill its promise as the operating system for the cloud eso. 
If we think about the ecosystem that's formed around kubernetes, uh, you know, it's it's now really being held back on Lee by adoption user adoption. And so that's where our focus in our product strategy really lives is around. How can we accelerate the move to kubernetes and accelerate the move to cloud native applications on? But in order to provide that acceleration catalyst, you need to be able to address the needs of not only the operators and make their lives easier while still giving them the tools they need for things like policy enforcement and operational insights. At the same time, Foster, you know, a grassroots, um, upswell of developer adoption within their company on bond Really help the I t. Operations team serve their customers the developers more effectively. >>Well, Dave, it sounds like a great event. We we had a great time covering those open stack events with you guys. We've covered the doctor events for years and years and years. Eso super engaged community and and thanks for, you know, inviting us back Thio to cover this inaugural event as well. So it should be terrific. Everyone just go to Miranda's dot com. The big pop up Will will jump up. You just click on the button and you can see the full agenda on get ready for about a month from now. When when the big reveal, September 16th will happen. Well, Dave, thanks for sharing this quick update with us. And I'm sure we're talking a lot more between now in, uh, in the 16 because I know there's a cube track in there, so we look forward to interview in our are our guests is part of the part of the program. >>Absolutely. Eso welcome everyone. Join us at the event and, uh, you know, stay tuned for the big reveal. >>Everybody loves a big reveal. All right, well, thanks a lot, Dave. So he's Dave. I'm Jeff. You're watching the Cube. Thanks for watching. We'll see you next time.