Kubernetes on Any Infrastructure: Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>All right, we're five minutes after the hour, so all aboard, who's coming aboard? Welcome everyone to the tutorial track for our Launchpad event. For the next couple of hours, we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Mills. I run curriculum development for Mirantis. And >>I'm Bruce Basil Matthews. I'm the Western Regional Solutions Architect for Mirantis, and welcome everyone to this lovely Launchpad event. >>We're lucky to have you with us, Bruce. At least somebody on the call knows something about Docker Enterprise Container Cloud. Speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us. As for Docker Enterprise Container Cloud, this is Mirantis' brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >>No, just that I think we're trying to give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some mini training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see. The series of videos we have today is going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today, you're gonna be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So just a little bit of logistics for the session. 
We're going to run through these tutorials twice. We're gonna do one run-through starting seven minutes ago up until, I guess, ten fifteen Pacific time, then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, at ten fifteen Pacific time we're gonna do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues that you wanna pull in for a second chance to see this stuff, we're gonna do it all twice. Any logistics I should add, Bruce? >>No, I think that's pretty much what we had to nail down here. But let's zoom into those, uh, feature films. >>Let's do it. And like I said, don't be shy; feel free to ask questions in the chat. Our engineers and Bruce and myself are standing by to answer your questions. So let me just tee up the first video here, and here we go. Our first video is gonna be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use as, you know, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there now. >>We have to give this a brief little pause. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components and the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. 
The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. The third phase is preparing the AWS credentials and setting up the AWS environment. The fourth is configuring the deployment, defining things like the machine types, and the fifth phase is running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go. There are no credentials already set up on that particular node. Now we're just checking through AWS to make sure that for the account we want to use we have the correct credentials and the correct roles set up, and validating that there are no instances currently set up in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right, so next step, we're just going to check that we can, from the bootstrap node, reach Mirantis and get to the repositories where the various components of the system are available. There we go, no errors here. Right, now we're going to start setting up the bootstrap node itself. So we're downloading the container cloud release, getting the bootstrap script, and then next we're going to run it and deploy it. Changing into that bootstrap folder, just looking to see what's there. Right now we have no license file, so we're gonna get the license file through the Mirantis downloads site, signing up here, downloading that license file and putting it into the bootstrap folder. Okay, once we've done that, we can now go ahead with the rest of the deployment. See that the file is there. That's again checking that we can now reach EC2, which is extremely important for the deployment. Just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials. 
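The bootstrap-node preparation just narrated can be sketched roughly as follows. This is an illustration, not the exact commands from the video: the folder name, license file name, and download URL are assumptions, so check the Mirantis Container Cloud documentation for the current bootstrap script.

```shell
#!/bin/sh
# Rough sketch of the bootstrap-node preparation described above.
# Folder and file names here are assumptions for illustration only.
set -eu

WORKDIR="kaas-bootstrap"   # hypothetical bootstrap folder name
LICENSE="mirantis.lic"     # hypothetical license file name

# Fetching the bootstrap tooling would look roughly like (commented
# out here; consult the docs for the real URL and script name):
# wget https://binary.mirantis.com/releases/get_container_cloud.sh
# chmod +x get_container_cloud.sh && ./get_container_cloud.sh

mkdir -p "$WORKDIR"

# The license file downloaded from the Mirantis site goes alongside
# the bootstrap tooling before the deployment is kicked off:
if [ -f "$LICENSE" ]; then
  cp "$LICENSE" "$WORKDIR/"
else
  echo "download $LICENSE from the Mirantis site first"
fi
ls -la "$WORKDIR"
```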
So the first thing is, we need those root credentials, which we're going to export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running an AWS policy create; part of that is creating our bootstrap user and creating the necessary policy files on top of AWS, just generally preparing the environment, using a CloudFormation script. You'll see in a second we'll get a new policy confirmation; just waiting for it to complete. And there, it's done. If we go and have a look at the AWS console, you can see that the create has completed. Now we can go and get the credentials that we created. In the IAM console, go to that new user that's been created, go to the section on security credentials and create new keys, and download that information, namely the access key ID and the secret access key. These are then exported on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put the correct AMI in for that region. We'll see that again in a second. Okay, there's the access key and secret access key. Right, let's kick it off. So this process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, we'll show you how you can track the process, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down, essentially moving itself over. Okay, local cluster's built; just waiting for the various objects to get ready. 
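The credential export just described uses the standard AWS environment variables. A minimal sketch, with placeholder key values and a fail-fast check before committing to the thirty-to-forty-five-minute run (the `bootstrap.sh deploy` entry point at the end is a hypothetical name; see the product docs for the real one):

```shell
#!/bin/sh
# Sketch: export the AWS credentials the bootstrap expects, then
# sanity-check them before kicking off the long-running deployment.
# Key values are placeholders; the variable names are the standard
# AWS environment variables.
AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
AWS_SECRET_ACCESS_KEY="example-secret-access-key"
AWS_DEFAULT_REGION="us-west-1"   # must match the AMI chosen in the config file
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION

# Fail fast if anything is missing:
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
  eval "val=\${$v:-}"
  if [ -z "$val" ]; then
    echo "missing $v" >&2
    exit 1
  fi
done
echo "credentials exported for region $AWS_DEFAULT_REGION"

# ./bootstrap.sh deploy   # hypothetical entry point; see the product docs
```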
Standard Kubernetes objects here. Okay, so we've sped up this process a little bit just for demonstration purposes. There we go. So the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds, we'll see those instances here in the AWS console on the right. The failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay, there we go. The bastion host has been built, and three instances for the management cluster have now been created. We're going through the process of preparing those nodes; we're now copying everything over. See that? The scaling up of controllers in the bootstrap cluster is indicating that we're starting all of the controllers in the new cluster. Almost there. Just waiting for Keycloak to start and finish up. Now we're shutting down controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight, the logging and monitoring tool set, into the new cluster. There we go, StackLight deployment has started. Coming to the end of the deployment now; final phase of the deployment, and we are done. Okay, you'll see at the end they're providing us the details of the UI login, so there's a Keycloak login. You can modify that initial default password as part of the configuration setup, per the documentation. There we go, console's up, we can log in. Thank you very much for watching. >>Excellent. So in that video, our wonderful field CTO Sean O'Mara bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? Now we've got this management cluster installed; what's next? 
>>So it's primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Where it comes into play in the next piece of what we're going to show, I think with Sean O'Mara doing this, is the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the management cluster that we just created with the bootstrap. >>Right, so this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those are what we're gonna use for workloads. >>Exactly. And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting, but anyway, you can tell him I said that. >>He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrapper node that you, like, ran Docker Enterprise Container Cloud from to begin with, that's actually creating a kind Kubernetes-in-Docker deployment locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes, like, a three-manager Kubernetes cluster there, and then it, like, copies itself over to those Kubernetes managers. >>Yeah, and that's sort of where the transition happens. You can actually see it in the output: when it says "I'm pivoting," it's pivoting from the local kind deployment of the cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or inside of bare metal. The targeting is abstracted. >>Yeah, and those are the three environments that we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need that afterwards; that is just temporary. 
To get things bootstrapped, then you manage things from the management cluster, on AWS in this example? >>Yeah, the seed cluster that performed the bootstrap is not required anymore, and there's no interplay between them after that. So there are no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's sort of a similar temporary, transient bootstrapping model. Cool. Excellent. What about the config there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that, but other than that, it looked heavily scriptable, like there wasn't a ton of point-and-click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated from the template is fairly straightforward and targeted towards a small, medium, or large deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, that makes it fairly easy to script this. >>And if I understood correctly as well, that three-manager footprint for your management cluster, that's the minimum, right? We always insist on high availability for this management cluster, because boy, you do not wanna see that go down. >>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer. 
>>You know, I think that's a theme that will come back to us throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults are just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on. Any interesting questions you want to call out from the chat, Bruce? >>Well, there was one that we had responded to earlier about the fact that it's a management cluster that can then deploy either a regional cluster or a local child cluster. The child clusters, in each case, host the application services. >>Right. So at this point, we've got, in some sense, the simplest architecture for our Docker Enterprise Container Cloud. We've got the management cluster, and we're gonna go straight to a child cluster. In the next video, there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, if you need to manage clusters across, say, AWS regions. >>Yeah, that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability support systems, like StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them. >>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docs, which, thank you, Dale helpfully provided links for; that's all publicly available right now. So just head on into the docs like Dale provided here. You can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials. 
There was a question from an attendee here about deploying this to Azure. Not at GA, not at this time. >>Yeah, although that is coming. That's going to be in a very near-term release. >>I didn't wanna make promises for product, but I'm not too surprised that Azure is gonna be targeted very soon. Cool. Okay. Any other thoughts on this one, Bruce? >>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that the gentleman had put into the chat, giving you the step by step, it makes it fairly straightforward to try this yourselves. >>I strongly encourage that, right? That's when you really start to internalize this stuff. Okay, but before we move on to the next video, let's just make sure everyone has a clear picture in their mind of where we are in the lifecycle here. Creating this management cluster, and stop me if I'm wrong, Bruce, is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to start seeing next is creating child clusters, and this is what you're gonna be doing over and over and over again, when you need to create a cluster for this dev team or, you know, whatever other team it is that needs commodity Docker Enterprise clusters; you create these easily, at will. So this was once, to set up Docker Enterprise Container Cloud. Child clusters, which we're going to see next, we're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster on Docker Enterprise Container Cloud. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. 
Let's go through the navigation of the UI. You can switch projects; Mary only has access to development. We get a list of the available projects that you have access to, what clusters have been deployed (at the moment there are none), the SSH keys associated for Mary and her team, the cloud credentials that allow you to create and access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply, add an SSH key, give it a name; we copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply, we go to the clusters tab and hit the create cluster button. Give the cluster a name and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West one. Release version five point seven is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information. We can change this should we wish to; we'll leave it default for now. Then, which StackLight components would I like to deploy into my cluster? For this, I'm enabling StackLight and logging, and I can set up the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. I can set up email alerting, for which I will need my smarthost details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined; I now need to add machines to that cluster. 
I'll begin by clicking the create machine button within the cluster definition. Select manager, select the number of machines (three is the minimum), select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go, my three machines are creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMIs are extremely important; the build will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" there; the cluster is still in the process of being built. If we click on the events, we'll get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the internet gateway, all the necessary AWS infrastructure, and we have no warnings at this stage. This will then run for a while. We're one minute in; we can click through and check the status of the machine builds individually, so we can check the machine info, details of the machines that we've assigned, and see any events pertaining to the machine. Errors like this one are normal, just the watch task as the Kubernetes components wait for the machines to start. Go back to clusters. Okay, right, because we're moving ahead now, we can see we have it in progress: five minutes in, a new NAT gateway at this stage, and the machines have been built, assigned, and have picked up their IPs from AWS. There we go, a machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit. 
This whole process, end to end, takes about fifteen minutes. Running the clock forward, you'll notice the machines continue to build; they're in progress. We'll go from in progress to ready. As soon as we get ready on all three managers and both workers, we can go on, and we can see that now we've reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. So once the cluster is deployed, we can now navigate around our environment. Clicking into configure cluster, we can modify the cluster. We can get the endpoints for Alertmanager, and see here the Grafana UI and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it. It's again the three little dots on the right for that particular cluster; I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check our cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. We click the sign-in button to use the SSO, and we give Mary's password and username once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard. You can see it has been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to the Grafana dashboards that have been automatically preconfigured for us. We can switch and utilize a number of different dashboards that have already been instrumented within the cluster: so, for example, Kubernetes cluster information, the namespaces, deployments, nodes. So we look at nodes. 
We can get a view of the resource utilization; of course, this cluster has very little running in it. There's a general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster and add a node is as simple as the process of adding a machine to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and we select create machine. Once again, we need to ensure that we put the correct AMI in and set any other options we like. You can create different-sized machines, so it could be a larger node, could be bigger disks, and you'll see that the worker has been added in the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we'd like to remove, and just hit delete on that node. Worker nodes will be removed from the cluster using a cordon and drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, in the menu for that particular cluster the update button will become available, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the next available release is five point seven point one. Here I'm kicking off the update. In the background we will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go, the nodes are being rebuilt; in this case, the update impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt. In fact, two in this case; one has completed already, and in a few minutes we'll see that the upgrade has been completed. There we go, upgrade done. 
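The cordon-and-drain sequence just described is handled for you by Container Cloud, but the manual equivalent with standard kubectl looks roughly like the sketch below. The node name is hypothetical, and `RUN` defaults to `echo` so the sketch only prints what it would do; set `RUN=` (empty) to execute against a real cluster.

```shell
#!/bin/sh
# Manual equivalent of the cordon-and-drain that Container Cloud runs
# automatically on node removal and updates. RUN defaults to echo so
# this prints the commands instead of executing them.
RUN="${RUN:-echo}"
NODE="demo-worker-2"   # hypothetical node name

$RUN kubectl cordon "$NODE"      # stop new pods from landing on the node
$RUN kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
$RUN kubectl delete node "$NODE" # only after the drain completes cleanly
```

Because the drain evicts pods gracefully, replicated workloads get rescheduled elsewhere before the node disappears, which is exactly why the video shows no workload disruption.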
If your workloads are built using proper cloud native Kubernetes standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to that cluster. So, watching that video, the thing that jumped out to me at first was the inputs that go into defining this workload cluster. Right, so we have to make sure we're using an appropriate AMI; that kind of defines the substrate that we're gonna be deploying our cluster on top of. But there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is gonna bootstrap all the components that you need. All we have is a really simple base box that we're deploying these things on top of. So one thing that didn't get dug into too much in the video, but is just sort of implied, Bruce, maybe you can comment on this, is that release that Sean had to choose for his cluster in creating it. And that release was also the thing we had to touch when we wanted to upgrade our cluster. If you have really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components: Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters that we deploy. And so, as far as I can tell, in that case, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that, you know, we've tested out and made sure works well in production environments. 
>>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any of the stack are taken care of, that fixes are documented and upstreamed to the open source community, and that, you know, we then test for the scaling ability and the reliability in high availability configurations for the clusters themselves, the hosts of your containers. And I think one of the key benefits that we provide is that ability to let you know online, hey, we've got an update for you, and it fixes something that maybe you had asked us to fix. That all comes to you online as you're managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on "yes, please give me that update." And not just the individual components, but again, it's that validated stack, right? Not just, you know, components X, Y, and Z work, but they all work together effectively: scalable, secure, reliable. Cool. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old Universal Control Plane, Docker Enterprise, on top of that. Sean had the classic comment there, you know: yeah, you'll see a few warnings and errors or whatever when you're setting up UCP; don't panic, right? Just let it do its job, and it will converge all its components, you know, after just a minute or two. Now, we sped things up a little bit in that video; we didn't wait for, you know, the progress bars to complete. But really, in real life, that whole process is, you know, on the order of fifteen minutes to spin up one of those clusters, so quite quick. 
>>Yeah, and I think the thoroughness with which it goes through its process and retries and retries, as was evident when we went through the initial video of the bootstrapping as well, shows that the processes themselves are self-healing as they are going through. So they will try and retry and wait for the event to complete properly, and once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down; don't do that. Just let it heal, let it take care of itself. And that's the beauty of these managed solutions: they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools for years and years now, working with Kubernetes. One cool thing there that I really appreciated, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. So Docker Enterprise, I think everyone knows, has had, like, some very high-level statistics baked into its dashboard for years and years now. But you know, our customers always wanted to double-click on that, right, to be able to go a little bit deeper. And Grafana really addresses that with its built-in dashboards. That's what's really nice to see. >>Yeah, and all of the alerts and data are actually captured in a Prometheus database underlying that, which you have access to, so that you are allowed to add new alerts that then go out to, say, Slack and say, hi, you need to watch your disk space on this machine, or those kinds of things. And this is especially helpful for folks who, you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. 
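The kind of custom disk-space alert Bruce just described can be expressed as a standard Prometheus alerting rule. The sketch below is illustrative only: the rule name, threshold, and file name are made up for the example, and how the rule actually gets loaded into the StackLight-managed Prometheus depends on your StackLight configuration.

```shell
#!/bin/sh
# Write an example Prometheus alerting rule for low disk space.
# The expression uses standard node_exporter metrics; everything
# else (names, threshold, file path) is illustrative.
cat > disk-space-alert.yaml <<'EOF'
groups:
  - name: custom.disk
    rules:
      - alert: NodeDiskSpaceLow
        # fire when a filesystem has been under 10% free for 10 minutes
        expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Low disk space on {{ $labels.instance }}"
EOF
echo "wrote disk-space-alert.yaml"
```

Routing that alert to Slack or email is then Alertmanager configuration, which is what the smarthost and Slack settings in the cluster-creation screen were feeding.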
So it gives them a tool set where they can easily say, here, can you watch these for us? And Mirantis can actually help do that with you. >>Yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without a long runway of learning about how to do that sort of thing; you just get it out of the box right away. There was another thing, actually, that could slip by really quickly if you weren't paying close attention, but Sean mentioned it in the video, and that was how, when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just tear the worker down and forget about it. It's using good Kubernetes best practices to cordon and drain the node, so you aren't gonna disrupt your workloads; you're not going to just have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; that's baked right in to how Docker Enterprise Container Cloud is handling cluster scale. >>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques that ensure that it will tell you, wait, you've got a container that actually needs three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two. And we can't do that; we can't allow that. >>Okay, very cool. Further thoughts on this video, or should we go to the questions? >>Let's go to the questions >>that people have. There's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos we're clicking through this web UI. Yes, this is all API-driven. You could do all of this, you know, automate all this away as part of a CI/CD chain. Absolutely. That's kind of the point, right? We want you to be able to spin up these clusters easily. 
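Since everything the UI does is API-driven, the same objects can be inspected and driven from a script against the management cluster. The sketch below is an assumption based on Cluster API conventions, not confirmed resource names for this product; verify with `kubectl api-resources` on your own management cluster. `RUN` defaults to `echo` so the sketch only prints the commands.

```shell
#!/bin/sh
# Sketch: scripting against the management cluster instead of the web
# UI. Resource names are assumptions based on Cluster API conventions;
# the kubeconfig path is hypothetical.
RUN="${RUN:-echo}"
export KUBECONFIG="${KUBECONFIG:-$HOME/kubeconfig-mgmt.yaml}"

$RUN kubectl get clusters --all-namespaces   # the child/regional cluster objects
$RUN kubectl get machines --all-namespaces   # the machines backing them

# Creating a cluster from CI/CD would then be a kubectl apply of the
# same objects the UI creates, e.g.:
# $RUN kubectl apply -f my-child-cluster.yaml
```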
I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. 
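As a quick aside before moving on: the guardrail Bruce described a moment ago, refusing to evict pods when an application would drop below its required replica count, is what upstream Kubernetes expresses as a PodDisruptionBudget, which a cordon-and-drain operation honors. A minimal sketch, with hypothetical names:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # hypothetical name
spec:
  minAvailable: 3             # a drain that would leave fewer than three
  selector:                   # running replicas of this app will be refused
    matchLabels:
      app: my-app
```

With this object in place, draining a node that hosts the third replica blocks until the workload can be safely rescheduled elsewhere.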
Then we used that to deploy one child cluster for workloads. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. So the idea is, you have the single management cluster that we started with in the first video, and in the next video we're going to learn how to spin up regional clusters, each of which could manage, for example, a different AWS region. So let me just pull up the video for that, Bill, and we'll check it out. >>Hello. In this demo, we will cover the deployment of an additional regional management cluster. We'll include a brief architectural review, how to set up the management environment, preparation for the deployment, a deployment overview, and then, just to prove it out, deploying a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, the LCM components, and the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster? For different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; but also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with its components, including items like the LCM cluster manager, the machine manager, and the Helm bundle manager, as well as the actual provider logic. Okay, we'll begin by logging in as the default administrative user, writer.
Okay, once we're in there, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the kaas management cluster, which is the master controller, and you'll see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. But as a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster. In this case it's on AWS, but it could be a local machine. There are a few things we have to do to make sure the environment is ready. First, we go into root, then into our releases folder, where we have the kaas-bootstrap directory; this was the original bootstrap used to build the original management cluster. We're going to double-check that our kubeconfig is there, once again the one created after the original cluster was created, and just double-check that the kubeconfig is the correct one and does point to the management cluster. We're also checking that we can reach the images and that we have access, so everything is working. Next, we're going to edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI; that's found under the templates/aws directory. We don't need to edit anything else here, but we could change items like the size of the machines if we wanted to. The key item to ensure you change is the AMI reference, so the Ubuntu image is the one for the region, in this case the AWS region. If we were utilizing an OpenStack deployment, we would have to make sure we're pointing at the correct OpenStack images instead. Okay, set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is that KAAS_AWS_ENABLED equals true. Now we're setting the region for the new regional cluster, in this case Frankfurt, and exporting the kubeconfig that we want to use for the management cluster, the one we looked at earlier. Now we're exporting what we want to call the cluster region: it's Frankfurt, so we call it frankfurt; try to use something descriptive that's easy to identify. And then, after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster, since there are fewer components to be deployed, but to make it watchable, we've sped it up. So: we're preparing our bootstrap cluster on the local bootstrap node. Almost ready, and we've started preparing the instances at AWS, waiting for that bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise. This is probably the longest phase. You'll see in a second that all the nodes will go from "prepare" to "deploy," and you'll see their status change as it updates. There's the first node ready; the second is just applying, then the second is ready. Now we're waiting for the control plane to become ready. Then we move the management of the cluster from the bootstrap instance into the new cluster running at AWS, and now we're deploying StackLight. The switchover is done, and we're done.
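The command-line portion Sean narrates boils down to a handful of exports followed by the bootstrap script. Here's a sketch of those steps; `KAAS_AWS_ENABLED` and the general flow match what the demo shows, while the key values, paths, and the exact bootstrap subcommand are placeholders to verify against the product docs:

```shell
# AWS credentials for the account hosting the new regional cluster (placeholders)
export AWS_ACCESS_KEY_ID="AKIA...PLACEHOLDER"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"

# Enable the AWS provider and pick the target region (Frankfurt)
export KAAS_AWS_ENABLED=true
export AWS_REGION="eu-central-1"

# Point at the existing management cluster's kubeconfig
export KUBECONFIG="$HOME/releases/kaas-bootstrap/kubeconfig"

# A short, descriptive name for the new regional cluster
export REGION="frankfurt"

# Finally, from the kaas-bootstrap folder, kick off the regional deployment:
# ./bootstrap.sh deploy_regional   # subcommand name: check the docs
```

The bootstrap invocation is left commented out here because it only makes sense on a prepared bootstrap node with a valid license and credentials in place.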
Now I will build a child cluster in the new region, very quickly. To define the cluster, we'll pick our new credential, which has shown up; we'll just call it frankfurt for simplicity. Add the SSH key, and define the machines for the cluster: start with three managers, set the correct AMI for the region, and do the same to add workers. There we go, the build has started. Total build time should be about fifteen minutes; you can see it's in progress. We're going to speed this up a little bit. Check the events: we've created all the dependencies and machine instances; the machines will be up shortly, and we should have a working cluster in the Frankfurt region. Almost there now: one node is ready, two in progress. And we're done. The cluster is up and running. >>Excellent. So at this point, we've now got that three-tier structure that we talked about before the video. We've got the management cluster that we bootstrapped in the first video. Now we have, in this example, two different regional clusters in two different AWS regions, one of them in Frankfurt. And sitting on those, you can bootstrap all the Docker Enterprise clusters that we want for our workloads. >>Yeah, that's the key to this: being able to have the management co-resident with your actual application-service-enabled clusters, so that you can quickly access the observability services, like Grafana, for your particular region, as opposed to having to log back into the home, the, what did you call it when we started? >>The mothership. >>The mothership, right. So we don't have to go back to the mothership; we can get it locally. >>Yeah. And to that point of aggregating things under a single pane of glass, that's one thing that again kind of sailed by in the demo really quickly, but you'll notice all your different clusters were in that same pane on your Docker Enterprise Container Cloud management console. So both your child clusters, for running workloads, and your regional clusters, for bootstrapping those child clusters, were all listed in the same place. It's just one pane of glass to look at for all of your clusters. >>Right. And this is kind of an important point I was realizing as we were going through this: all of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything, so you only have managers; you don't have workers. It's at the child cluster layer, below the regional or management cluster, that you have the worker nodes, and those are the ones that host the application services in this three-tiered architecture that we've now defined. >>And another detail, for those with sharp eyes: in that video, you'll notice that when deploying a child cluster, there's not only a minimum of three managers for a highly available management plane; you must also have at least two workers. That's required for workload failover: if one of those goes down, workloads get moved, and the other can potentially step in. So your minimum footprint for one of these child clusters is five nodes, and it's scalable, obviously, from there. >>That's right. >>Let's take a quick peek at the questions here, see if there's anything we want to call out, and then we'll move on to our last video. There's another question here about where these clusters can live. Again, I know these examples are very AWS-heavy; honestly, it's just easy to set demos up on AWS. But we can do things on bare metal and with OpenStack deployments on-prem, and all of this still works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the provisioners, right?
You establish an AWS provisioner, or a bare metal provisioner, or an OpenStack provisioner, and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, you decide where the child cluster is going to be hosted. >>Speaking of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't add the complexity of another hypervisor layer in between. So, continuing the theme: why Kubernetes on bare metal? Again, no hypervisor overhead, no virtualization overhead; direct access to hardware items like FPGAs and GPUs; we can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization in the scheduling better; and we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI address, and the MAC address, and then provide a machine-type label to determine what type of machine each is for later use. Okay, let's get started. So, once again, we're logged in as the operator.
We'll go and create a project for our machines to be members of; this helps with scoping later on, for security. Then we begin the process of adding machines to that project. So the first thing we do is add the host: give the machine a name, anything you want; provide the IPMI username; type the IPMI password; then the MAC address for the boot interface; and then the IPMI IP address. These machines will be tagged as storage workers or managers; this one's a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back to it, there we have it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. You can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as for any other child cluster: we'll create the cluster, give it a name, select bare metal as the provider along with the region, select the version we want to deploy, and add the SSH keys. Then we're going to give the load balancer host IP that we'd like to use out of the address range, and update the address range that we want to use for the cluster. Check that the CIDR blocks for Kubernetes and the tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other cluster, we need to add machines. Here we're focused on building Kubernetes clusters, so we'll put in the count of machines we want as managers: we pick the label type "manager" and create three machines as the managers for the Kubernetes cluster. Then we add workers through the same process, just making sure that the worker label is set at the host level. Then we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating the operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics like storage and, of course, details of the cluster, etcetera. We can now watch the machines go through the various stages from "prepared" to "deploy," and watch the cluster build. And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster, and our deployment is complete. >>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess the biggest difference, step-wise, is that there's that registration phase first, right? Rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud. And they really come in, I guess, three profiles, right? You've got your manager profile, worker profile, and storage profile, which get labeled and allocated across the cluster as appropriate. >>Right. And I think the key differentiator here is that you have more physical control over the attributes (excuse my cat, by the way) where you have the different attributes of a physical server.
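Under the hood, each host entered in that registration form corresponds to a declarative object on the management cluster. A Metal3-style `BareMetalHost` sketch captures the same fields shown in the demo: boot MAC, IPMI address, a credentials secret, and a role label. All of the values here, and the label key itself, are hypothetical:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: bm-node-01                        # the host name from the form
  labels:
    machine-type: manager                 # role label used later for placement
spec:
  online: true
  bootMACAddress: "0c:c4:7a:aa:bb:01"     # MAC of the boot interface
  bmc:
    address: ipmi://192.168.1.21          # the IPMI IP address
    credentialsName: bm-node-01-bmc       # Secret holding the IPMI user/password
```

Registering a host is thus just creating one of these objects plus its credentials secret; the inspection step the demo shows then fills in the hardware inventory automatically.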
So you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, and the GPUs on the worker nodes, and that the management layer is going to have sufficient horsepower to scale up the environments as required. One of the things I wanted to mention, though, if I can get this out without choking: Sean mentioned the load balancer, and I wanted to clarify that in defining the load balancer and the load balancer ranges, that is for the top of the cluster itself. That's for the operations of the management layer, integrating with your systems internally, to be able to access the kubeconfigs and the IP address in a centralized way. It's not the load balancer that's working within the Kubernetes cluster that you are deploying; that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting step: your initial step in building this (and we typically use things like MetalLB or NGINX or that kind of thing) is to establish that before we deploy this bare metal cluster, so that it can ride on top of it for the VIPs and such. >>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management and regional clusters to our child clusters on AWS and bare metal, with OpenStack available as well. Closing thoughts before we take just a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you, and I'm really looking forward to doing it the second time, because we've got a good rhythm going with this kind of thing. So I'm looking forward to doing that.
But I think the key element of what we're trying to convey to the folks out there in the audience, which I hope you've gotten out of it, is that this is an easy enough process that if you follow the steps in the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancy use cases, like you need Hadoop, or cloud-oriented AI stuff, providing a bare metal service helps you get there very fast. So, right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said, we're going to take a very short, three-minute break here. Take the opportunity to let your colleagues know, if they were in another session or didn't quite make it to the beginning of this one, or if you just want to see these demos again: we're going to kick off this demo series again in just three minutes, at ten twenty-five a.m. Pacific time, where we will see all this great stuff again. Let's take a three-minute break; I'll see you all back here shortly. Okay, folks, that's the end of our extremely short break. We'll give people maybe one more minute to trickle in, if folks are interested in coming in and jumping into our demo series again. So, for those of you just joining us now: I'm Bill Mills. I head up curriculum development for the training team here at Mirantis, and joining me for this session of demos is Bruce, who is still on break. That's cool; we'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. How'd that go for you? >>Okay, very well. So let's kick off our second session here. I'll just share my screen for you and let it run over here. >>All right. Hi, Bruce Matthews here. I'm the Western Regional Solutions Architect for Mirantis in the USA. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, Bill, take it away. >>Excellent. So over the next hour or so, we've got a series of demos that's going to walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis's brand-new offering for bootstrapping Kubernetes clusters on AWS, bare metal, and OpenStack, with more providers in the very near future. We've got just over an hour left together in this session. If you joined us at the top of the hour, back at nine a.m. Pacific, we went through these demos once already, but let's do them again for everyone who was only able to jump in right now. Let's go to our first video, where we're going to install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership: it's going to spin up all the other Docker Enterprise Kubernetes clusters that we're going to run our workloads on. So here we go. >>I'm so excited. I can hardly wait. >>Let's do it. All right, let me share the video out here. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, the LCM components, and the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools.
The second phase is obtaining a Mirantis license file; the third phase, preparing the AWS credentials and setting up the AWS environment; the fourth, configuring the deployment, defining things like the machine types; and the fifth phase, running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go: there are no credentials already set up on that particular node. Now we're just checking through AWS to make sure that the account we want to use has the correct credentials and the correct roles set up, and validating that there are no instances currently running in EC2. That's not completely necessary, but it helps keep things clean and tidy from an IAM perspective. Next step: we're just going to check that, from the bootstrap node, we can reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors there. Right, now we're going to start setting up the bootstrap node itself. We're downloading the kaas release script, and then next we're going to run it. Once it's downloaded, we change into the kaas-bootstrap folder and see what's there. Right now we have no license file, so we're going to get the license file through the Mirantis downloads site: signing up here, downloading that license file, and putting it into the kaas-bootstrap folder. Okay, now that we've done that, we can go ahead with the rest of the deployment. Once again, we check that we can now reach EC2, which is extremely important for the deployment; these are just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials. The first thing is, we need those root credentials, which we're going to export on the command line.
This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running the AWS policy creation: part of that is creating our bootstrap user and creating the policy files on AWS, generally preparing the environment using a CloudFormation script, as you'll see in a second. Now we're just waiting for the policy CloudFormation to complete, and there, it's done. If we have a look at the AWS console, you can see that the creation has completed. Now we can go and get the credentials that we created: go to the IAM console, go to the new user that's been created, go to the section on security credentials, and create new keys. Download that information, the access key ID and the secret access key, which are then exported on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you've put in the correct AMI for that region. We export the access key and secret key, and then let's kick it off. This process takes between thirty and forty-five minutes and handles all the AWS dependencies for you. As we go through, we'll show you how you can track the process, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and the local cluster is shut down: it essentially moves itself over. Okay, the local cluster is booted; we're just waiting for the various objects to get ready, standard Kubernetes objects here. We've sped up this process a little bit, just for demonstration purposes.
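Pulling the narrated steps together, the bootstrap-node side of this first phase looks roughly like the following. The download URL, file names, and subcommands are illustrative reconstructions of what the demo shows, not authoritative; verify each against the official install documentation before running anything:

```shell
# 1. On a clean bootstrap node, fetch and run the release tooling,
#    which creates the kaas-bootstrap folder:
# wget https://binary.mirantis.com/releases/get_container_cloud.sh   # URL illustrative
# chmod 0755 get_container_cloud.sh && ./get_container_cloud.sh

# 2. Drop the license file from the Mirantis downloads site into that folder:
# cp ~/Downloads/mirantis.lic kaas-bootstrap/

# 3. Export the root AWS credentials and run the AWS policy-creation step,
#    which creates the bootstrap IAM user via CloudFormation:
export AWS_ACCESS_KEY_ID="AKIA...ROOT-PLACEHOLDER"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"

# 4. Re-export the new bootstrap user's keys, enable the AWS provider,
#    and double-check the region and AMI in the templates:
export KAAS_AWS_ENABLED=true

# 5. Kick off the deployment (thirty to forty-five minutes):
# ./bootstrap.sh all   # subcommand name: check the docs
```

The steps that need a real license, account, or prepared node are left commented out; the sketch is only meant to show the shape of the flow.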
Okay, there we go. The first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds, we'll see those instances in the AWS console on the right. The failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. And there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. You can see the scaling up of controllers in the bootstrap cluster; that's indicating that we're starting all of the controllers in the new cluster. Almost there. Now we're just waiting for Keycloak to finish up. Next, we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. Once this is completed, the last phase will be to deploy StackLight, the logging and monitoring toolset, into the new cluster. There we go, the StackLight deployment has started. We're coming to the end of the deployment now; this is the final phase, and we are done. You'll see at the end that it provides us the details for the UI login. There's a Keycloak login; you can modify that initial default password as part of the configuration setup, as covered in the documentation. The console's up, we can log in, and thank you very much for watching. >>All right, so at this point, what do we have? We've got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there, to make sure everyone caught them. As advertised, that's the Docker Enterprise Container Cloud management cluster. That's not where your workloads are going to go, right?
That is the tool you're going to use to start spinning up the downstream commodity Docker Enterprise clusters for your workloads. >>And the seed host that we were talking about, the kind cluster thingy, actually doesn't have to exist after the bootstrap succeeds. It sort of copies itself from the seed host to the targets in AWS, spins that up, boots the actual cluster, and then it goes away, because it's no longer necessary. >>Right, so on that bootstrapping node there are hardly any requirements. It just has to be able to reach AWS and hit that API to spin up those EC2 instances, because, as you just said, it's just a Kubernetes-in-Docker cluster, and that bootstrap node is going to get torn down after the setup finishes. You no longer need it; everything you're going to do, you're going to drive from the single pane of glass provided to you by your Docker Enterprise Container Cloud management cluster. Another thing that I think is sort of interesting there is that the config is fairly minimal. Really, you just need to provide things like the AWS region and the AMI, and that's what it's going to use to spin up that management cluster. >>Right. There is a YAML file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have defaults set, but you then have the option of going in and defining a different AMI for a different region, for example, or a different size of instance from AWS. >>One thing that people often ask about is the cluster footprint. In that example, you saw it spinning up a three-manager management cluster; that's mandatory, right? There's no single-manager setup at all. We want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage that we're at right now.
That's the very first thing you're going to do to set up Docker Enterprise Container Cloud, and you're going to do it, hopefully, exactly once. Now you've got your management cluster running, and you're going to use that to spin up all your other work clusters, day to day, as needed. How about we have a quick look at the questions, and then let's take a look at spinning up some of those child clusters. >>Okay. I think they've actually been answered. >>Yeah, for the most part. One thing I'll point out, which was helpfully noted in the chat earlier and bears repeating: if you want to try any of this stuff yourself, it's all in the docs. So have a look at the chat; there are links to step-by-step instructions for each and every thing we're doing here today. I really encourage you to do that. Taking this out for a drive on your own really helps internalize these ideas, so after Launchpad today, please give this stuff a try on your own machines. Okay, so at this point, like I said, we've got our management cluster. We're not going to run workloads there; we're going to start creating child clusters. That's where all of our work is going to go, and that's what we're going to learn how to do in our next video. Cue that up for us. >>I so love Sean's voice. >>Isn't it great? >>Yeah, I'd watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects (Mary only has access to Development) and get a list of the available projects that you have access to.
We can see what clusters have been deployed at the moment (there are none yet), the SSH keys associated with Mary and her team, the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can also switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simple: add an SSH key, give it a name, and copy and paste our public key into the upload key block, or upload the key if we have the file available on our machine. A very simple process. So, to create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Again, very simply, we go to the Clusters tab and hit the Create Cluster button. Give the cluster a name and select the provider; we only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR and IP address information. We can change this should we wish to; we'll leave it default for now. And then: what components of StackLight would I like to deploy into my cluster? For this, I'm enabling StackLight and logging, and I can set the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs, configure email alerting (which will need my smarthost details and authentication details), and Slack alerts. Now I've defined the cluster. All that's happened is that the cluster has been defined; I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. Select Manager, and select the number of machines; three is the minimum.
Select the instance size that I'd like to use from AWS and, very importantly, ensure we use the correct AMI for the region. I can then decide on the root device size. There we go: my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time selecting Worker. I'll just add two. Once again, the AMI is extremely important; the deployment will fail if we don't pick the right AMI for the machine. In this case the deployment has started, and we can go and check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" there; the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway, as necessary, in AWS. And we have no warnings at this stage. Okay, this will then run for a while. We're one minute in. We can click through and check the status of the machine builds as individuals, so we can check the machine info, details of the machines that we've assigned, and see any events pertaining to the machine, like this one: normal, the Kubernetes components are waiting for the machines to start. Go back to clusters. Okay, because we're moving ahead now, we can see it's in progress, five minutes in: NAT gateway. And at this stage the machines have been built and assigned, and I pick up the IDs. There we go, a machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process end to end takes about fifteen minutes. Run the clock forward and you'll notice the machines continue to build while in progress.
They go from in progress to ready, and as soon as all three machines are ready, the managers and both workers, we can go on, and we can see that we've now reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. So once the cluster is deployed, we can navigate around our environment. Looking into the configured cluster, we can modify the cluster and we can get the endpoints for Alertmanager. See here, Grafana and Prometheus are still building in the background, but the cluster is available, and you would be able to put workloads on it at this stage. To download the kubeconfig so that I can put workloads on it, it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary so that I can access that cluster. All right, now that the build is fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use SSO; we give Mary's password and username once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And here we have the UCP dashboard. You can see it's been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has just been automatically pre-configured for us. We can switch and utilize a number of different dashboards that have already been instrumented within the cluster. So, for example, Kubernetes cluster information, the namespaces, deployments, nodes. If we look at nodes, we can get a view of the resource utilization; as this is a new cluster, there's very little running in it. And there's a general dashboard of the Kubernetes cluster.
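The status progression the demo narrates, machines pending, then in progress, then ready, before the cluster configures itself, is easy to picture as a simple polling loop. This is a toy simulation with a stub standing in for the real Container Cloud API, purely to illustrate the lifecycle; none of these function or state names come from the product.

```python
# Toy simulation of watching machines move through their build states.
# The real states come from the Container Cloud UI; this just advances
# each machine one state per poll to illustrate the flow.
STATES = ["pending", "in progress", "ready"]

def poll_machines(machines, max_polls=10):
    """Advance every machine toward 'ready', recording each observation."""
    history = []
    for _ in range(max_polls):
        history.append(dict(machines))       # snapshot what we'd see this poll
        if all(s == "ready" for s in machines.values()):
            break
        for name, state in machines.items():
            idx = STATES.index(state)
            if idx < len(STATES) - 1:
                machines[name] = STATES[idx + 1]
    return history

machines = {"manager-0": "pending", "manager-1": "pending", "manager-2": "pending"}
history = poll_machines(machines)
print(machines)
```

In practice you never write this loop yourself; the UI (or `kubectl` against the management cluster) does the watching for you.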
These dashboards are all configurable; you can modify them for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster, adding a node is as simple as the process of adding a node to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and select Create Machine. Once again, we need to ensure that we put the correct AMI in, and any other options we like. You can create different sized machines, so it could be a larger node, or one with bigger root disks. You'll see that the worker has been added in the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we would like to remove, and just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method, to ensure that your workloads are not affected. Updating a cluster: when an update is available, the Update button will become available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case the available release is 5.7.1. I confirm, kicking off the update, and in the background it will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; in fact two in this case, and one has completed already. And in a few minutes, we'll see that the upgrade has been completed. There we go. Great, done. If your workloads are built using proper cloud-native Kubernetes standards, there will be no impact. >>All right, there we have it.
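The update flow described above, cordon and drain one node at a time so workloads keep running, can be sketched as a loop. This is an illustrative simulation, not the product's actual updater; the point it demonstrates is that at most one node is ever out of service.

```python
def rolling_update(nodes, new_release):
    """Update nodes one at a time: cordon, drain, rebuild, uncordon."""
    unavailable_counts = []
    for node in nodes:
        node["cordoned"] = True          # stop new workloads landing here
        node["drained"] = True           # evict existing workloads gracefully
        unavailable_counts.append(sum(1 for n in nodes if n["cordoned"]))
        node["release"] = new_release    # rebuild the node on the new release
        node["cordoned"] = False         # bring it back into service
        node["drained"] = False
    return unavailable_counts

nodes = [
    {"name": f"manager-{i}", "release": "5.7.0", "cordoned": False, "drained": False}
    for i in range(3)
]
counts = rolling_update(nodes, "5.7.1")
print(counts)  # → [1, 1, 1]: only one node unavailable at any moment
```

This is also why workloads need to tolerate single-node eviction (replicas, PodDisruptionBudgets); the updater assumes standard cloud-native behavior, as Sean's closing remark notes.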
We've got our first workload cluster spun up and managed by Docker Enterprise Container Cloud. So I loved Shawn's classic warning there: when you're spinning up an actual Docker Enterprise deployment, you'll see little errors and warnings popping up. Just don't touch it; leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches. They resolve themselves and leave you with a functioning workload cluster within minutes. >>And now, if you think about it, that video was not very long at all. And that's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development team A over here? It literally would take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud. But you could do exactly the same thing with resources on-prem, or physical resources, and we'll be going through that later in the process. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. So when Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. The demo didn't really explain what that means. Well, in Docker Enterprise Container Cloud, we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes-adopting enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale.
Test it for CVEs, test it for all the various things that would, you know, result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments and things like that, if you're the one doing it yourself, it can get rather messy. So this makes it easy. >>Bruce, you were saying a second ago it'll take you at least fifteen minutes to install your release cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decision about what components work well, and are best tested to be successful working together, as a stack. Absolutely. This release mechanism in Docker Enterprise Container Cloud lets us just kind of package up that expert knowledge and make it available in a really straightforward fashion: pre-configured release numbers. And, Bruce, as you were pointing out earlier, these get delivered to us as updates in a kind of transparent way. When Sean wanted to update that cluster, a little Update Cluster button appeared when an update was available. All you've gotta do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a little header at the top of the screen that says you've got an update available, do you want me to apply it? >>Absolutely. Another couple of cool things, I think, are easy to miss in that demo. I really like the onboard Grafana that comes along with this stack. So we've had Prometheus metrics in Docker Enterprise for years and years now, but they're very high level, maybe, in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think, is a great value-add there.
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides them out of the box for us. >>Yeah, that was really, you know, the joining of the Mirantis and Docker teams together that actually spurred us to be able to take the best of what Mirantis had in the OpenStack environment for monitoring and logging and alerting, and to do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same toolsets. >>One other thing I want to point out about that demo, which I think there were some questions about in our last go-around: that demo was all about creating a managed workload cluster. So Docker Enterprise Container Cloud, using those AWS credentials we provisioned it with, actually created new EC2 instances, installed Docker Engine, installed Docker Enterprise, all that stuff, on top of those fresh new VMs created and managed by Docker Enterprise Container Cloud. Nothing unique about AWS deployments there; it does that on OpenStack, and on bare metal as well. There's another flavor here, though, and a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years and years. If you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use it to manage those pre-existing workload clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud; plugging in external clusters is fine. >>Yep, the kubeconfig elements of the UCP environment, the bundling capability, actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster.
So it makes it very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. More thoughts on this one, or should we jump into the next video? >>I think we should press on. >>Time marches on here, so let's carry on. Just to recap where we are right now: in the first video, we created a management cluster. That's what we're gonna use to create all our downstream workload clusters, which is what we did in this video. This is maybe the simplest architecture, because it's doing everything in one region on AWS. A pretty common use case, though, is that we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and workload cluster layers: that's gonna be our regional cluster managers. So this is gonna be our regional management cluster that exists per region, and those regional managers will then be the ones responsible for spinning up child clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, prepare for the deployment, a deployment overview, and then, just to prove it, deploy a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. And the UCP cluster, or child cluster, is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster?
To support different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; and also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, and its components, including items like the LCM cluster manager and the machine manager. Helm bundles are managed there as well, as is the actual provider logic. Okay, we'll begin by logging on as the default administrative user, writer. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller. You'll see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. But, as a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster. It's just on AWS, so I SSH into the machine. All right, there are a few things we have to do to make sure the environment is ready. First, we're gonna sudo into root. Then we'll go into our releases folder, where we have the KaaS bootstrap; this was the original bootstrap used to build the original management cluster. We're going to double-check to make sure our kubeconfig is there. Again, this is the one created after the original cluster was created; we just double-check that the kubeconfig is the correct one and does point to the management cluster. We're also checking to make sure that we can reach the images, that everything's working, and that we can download our images and access them as well.
Next, we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines or the instance types we want to use. The key item to ensure gets changed is the AMI reference, so that the Ubuntu image is the one for the region, in this case the AWS region we're utilizing. If this were an OpenStack deployment, we'd have to make sure we're pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. Next we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is that the KaaS AWS enabled variable equals true. Now we're setting the region for the new regional cluster: in this case, it's Frankfurt. And we're exporting the kubeconfig that we want to use for the management cluster that we looked at earlier. Now we're exporting what we want to call the cluster; the region is Frankfurt, so it's called Frankfurt. Try to use something descriptive that's easy to identify. And then, after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster; there are fewer components to be deployed. But, to make it watchable, we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready, and we've started preparing the instances at AWS, waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise itself.
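The regional bootstrap just described boils down to exporting a handful of environment variables before running the bootstrap script. Here's a sketch assembling them in Python; the variable names are approximations of what's said on screen (treat the exact spellings and the placeholder values as assumptions, and check the product docs before relying on them).

```python
import os

# Variable names approximate the demo's narration; verify exact spellings
# against the product documentation. Values here are placeholders.
regional_env = {
    "AWS_ACCESS_KEY_ID": "<your-access-key>",      # placeholder credential
    "AWS_SECRET_ACCESS_KEY": "<your-secret-key>",  # placeholder credential
    "KAAS_AWS_ENABLED": "true",                    # enable the AWS provider
    "REGION": "eu-central-1",                      # Frankfurt
    "KUBECONFIG": os.path.expanduser("~/.kube/mgmt-cluster.yaml"),
    "REGIONAL_CLUSTER_NAME": "frankfurt",          # descriptive, easy to identify
}
os.environ.update(regional_env)
print(os.environ["REGIONAL_CLUSTER_NAME"])
```

With that environment in place, the bootstrap script itself is the only command left to run.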
This is probably the longest phase. We'll see in a second that all the nodes will go from deploy to prepare, and we'll see their status change as it updates: the first one's ready, the second's just applying, then the second's ready, both, and finally the control plane becomes ready. Now we're pivoting the management of the cluster from the bootstrap instance into the new cluster, which is now running it for us. Almost there. Now we're deploying StackLight. And done. Now we'll build a child cluster in the new region, very, very quickly. Define the cluster: we'll pick our new credential, which has shown up; we'll just call it Frankfurt for simplicity; attach a key; and the cluster's defined. Then the machines: the cluster starts with three managers, and we set the correct AMI for the region. Same thing to add workers. There we go, that's building. Total build time should be about fifteen minutes. You can see it's in progress; we'll speed this up a little bit. Check the events: we've created all the dependencies, the machine instances, and the machines are built. Shortly we should have a working cluster in the Frankfurt region. Now almost there: one node is ready, two in progress. And we're done. The cluster is up and running. >>Excellent. There we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster, in which we bootstrap everything else; our regional clusters, which manage individual AWS regions; and child clusters sitting underneath. >>Yeah, and you know, you can actually see in the hierarchy the advantages that presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them, or they're readily co-resident with your development teams. And one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout.
So you've got StackLight monitoring the StackLight that's monitoring the StackLight, all the way down to the actual child clusters. >>All through that single pane of glass that shows you all your different clusters, whether they're workload clusters like the child clusters, or regional clusters managing different regions. Cool. All right, well, time marches on, folks. We've only got a few minutes left, and I've got one more video, our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So far, everything we've seen has been AWS-focused, just because it's kind of easy to demo on AWS, but we don't want to leave you with the impression that that's all we do; we cover AWS, bare metal, and OpenStack deployments as well in Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch. >>Right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based Docker Enterprise cluster. So, why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme, why Kubernetes on bare metal? Again, no hypervisor overhead means no virtualization overhead at all, and direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead, so we can handle utilization and scheduling better.
And we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts. We'll create a new project, then add the bare metal hosts, including the host name, IPMI credentials, IPMI address, and MAC address, and then provide a machine-type label to determine what type of machine it is relative to its use. Okay, let's get started. Logged in as the operator, we'll go and create a project for our machines to be members of; this helps with scoping later on, for security. Then I begin the process of adding machines to that project. So the first thing we have to do is give the machine a name, anything you want; in this case, bare metal zero one. Provide the IPMI username, type my password, then the MAC address for the PXE boot interface, and then the IPMI IP address. For these machines we have the types storage, worker, and manager; this one's a manager. We're gonna add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node, its capabilities, as well as the inventory information about that particular machine. Okay, time to create the cluster. So we're going to deploy a bare metal child cluster, and the process we're going to go through is pretty much the same as for any other child cluster. Create Cluster, give it a name, but this time we're selecting bare metal and the region. We're going to select the version we want to apply, and we're going to add the SSH keys. And we're going to give the load
balancer host IP that we'd like to use out of the address range, update the address range that we want to use for the cluster, check that the CIDR blocks for the Kubernetes services and tunnels are what we want them to be, and enable or disable StackLight and set the StackLight settings, to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we're gonna put in the count of machines we want as managers, pick the label type Manager, and create three machines as managers for the Kubernetes cluster. Then we add workers the same way; it's the same process, just making sure that the worker label and host type are what you want. And then we wait for the machines to deploy. They'll go through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info, now populated with more information about specifics, things like storage, and of course details of the cluster, etcetera. Now watch the machines go through the various stages from prepared to deployed, and watch the cluster build. And that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>There we have it: a child cluster on bare metal, for folks that wanted to try this stuff on-prem.
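Each bare metal host entry in that demo carries the same handful of facts: a name, IPMI address and credentials, the PXE boot MAC, and a machine-type label. As a config sketch, loosely shaped after a Metal3-style bare metal host definition (the exact schema here is an assumption; consult the product docs for the real format):

```yaml
# Illustrative only: one bare metal host as entered in the demo.
name: baremetal-01
ipmi:
  address: 10.0.0.21          # hypothetical IPMI IP
  username: admin             # IPMI credentials
  passwordSecret: bm01-ipmi   # reference to a stored secret, not a literal
bootMACAddress: "52:54:00:aa:bb:01"  # MAC of the PXE boot interface
label: manager                # one of: storage | worker | manager
```

Once hosts like this are registered and inspected, cluster creation just picks machines by their labels rather than by individual host.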
>>It's been an interesting journey from the mothership: we started out building a management cluster, then populated it with a child cluster, then created a regional cluster to spread the management of our clusters geographically, and finally provided a platform for supporting, you know, AI needs and big data needs. Thank goodness we're now able to put things like Hadoop on bare metal, in containers. Pretty exciting. >>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up clusters, Docker Enterprise clusters that can be spun up and used quickly, taking provisioning times from however many months to get new clusters spun up for your teams down to minutes. We saw those clusters get spun up in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we can offer you workshops in a number of different formats on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the Launchpad event. >>Thank you all, enjoy.

Published Date : Sep 17 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mary | PERSON | 0.99+
Sean | PERSON | 0.99+
Sean O'Mara | PERSON | 0.99+
Bruce | PERSON | 0.99+
Frankfurt | LOCATION | 0.99+
three machines | QUANTITY | 0.99+
Bill Milks | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
first video | QUANTITY | 0.99+
second phase | QUANTITY | 0.99+
Shawn | PERSON | 0.99+
first phase | QUANTITY | 0.99+
Three | QUANTITY | 0.99+
Two minutes | QUANTITY | 0.99+
three managers | QUANTITY | 0.99+
fifth phase | QUANTITY | 0.99+
Clark | PERSON | 0.99+
Bill Mills | PERSON | 0.99+
Dale | PERSON | 0.99+
Five minutes | QUANTITY | 0.99+
Nan | PERSON | 0.99+
second session | QUANTITY | 0.99+
Third phase | QUANTITY | 0.99+
Seymour | PERSON | 0.99+
Bruce Basil Matthews | PERSON | 0.99+
Moran Tous | PERSON | 0.99+
five minutes | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+

Jitesh Ghai, Informatica | Informatica World 2019


 

>> Live from Las Vegas, it's theCUBE. Covering Informatica World 2019, brought to you by Informatica. >> Welcome back everyone to theCUBE's live coverage of Informatica World here in Las Vegas. I'm your host, Rebecca Knight, along with my co-host John Furrier. We are joined by Jitesh Ghai, he is the Senior Vice President and General Manager, Data Quality, Security and Governance at Informatica. Thank you so much for coming, or returning, to the show, Jitesh. >> My pleasure, happy to be here. >> So, this is a real moment for data governance, we have the anniversary of GDPR and the California Privacy Act, it's a topic at Davos, there is growing concern among the public and lawmakers over security and privacy, give us the lay of the land from your perspective. >> Right, you know it is a moment for data governance, what's exciting in the space is governance was born out of risk and compliance and managing for risk and compliance, but really what it was mandating was healthy data management practices, how do we give the regulators comfort that our data is of high quality, that we know the lineage of where data is coming from, that we know how the business relies on the data, what is critical data? And while it was born to give the regulators comfort, what organizations very quickly realized is, well, when you democratize data, you need to give everybody that comfort, you need to give your data scientists, your data analysts, that same level of contextual understanding of their data, right, where did it come from? What's the quality of it? How does the business use it, rely on it? And so that has been a tremendous opportunity for us, we've supported organizations, financial services, from BCBS 239, CCAR, counterparty credit risk, but what's happened is, from a data democratization, data scale perspective, self-service analytics perspective, we've moved from terabytes to petabytes.
We've moved from data warehouses to data lakes, and you can't democratize data unless there's a governed framework. I don't know, it sounds kind of like, wait, democratizing data is supposed to be free data everywhere, but without some governed framework, it's a bit of a mess, and so what we're enabling organizations with is the effective consumption and understanding of where their data is, discovering it, so that the right people can consume the data that they care about, the right data scientists can build the right models, the right analysts can build the right reports, and the executives get the right confidence in what reports they're getting, what KPIs they're getting. >> One of the things that we talked about last year, you had a couple customers on, you had told a great story, you guys have had the benefit as a long-standing company, 25 years in the market with a large customer base, but the markets changed. You mentioned governance, I mean we're at the one-year anniversary of GDPR. >> Right. >> And I think everyone's kind of like, OK, what happened last year? More privacy laws are coming, and one of the themes this year is clarity with data, but also in the industry, you know, access to data, making data addressable, because AI needs data sets, cloud has proven that, SaaS business models, using data is the winning formula, that's clear if you're born in the cloud. Enterprises now want that same kind of SaaS-like execution on the applications side, whether it's SaaS or using AI for instance, >> Right. >> So when you have more regulation, the inherent reaction is, oh, more complexity. How are customers dealing with the complexity of this, because they want to free it up, but at the same time they want to make sure that they can respect the laws for individuals, but also governments aren't that smart either, so you know, the balance there, what's the strategy?
>> And therein lies the challenge with privacy specifically, it's not just about quality, counterparty credit risk, in like five or seven systems in a data warehouse, it's all the data in your enterprise, it's the data in production, there's the data in your DevOps environment, it's all your data literally, structured all the way to unstructured data like Word, PDFs, PowerPoints. And you need a governing framework around it, you need to enable organizations to be able to discover where there is sensitive information, how that sensitive information is proliferating through the organization. Is it protected? Is it not protected? And what's particularly, you know, we're all consumers, I'm pretty confident some or all of our data has been breached at some point, enabling organizations, what these privacy regulations are doing is they are giving us, as individuals, rights to go to the organizations we transact with and ask them, what are you doing with our data? Forget my data, or at least tell me how you're processing it and get my consent for the data. >> Yeah, I mean policy and business models are certainly driving that, and with regulation, I see that, but the question is that when you move the impact to the enterprise, you've got storage drives. You store it on drives as a storage administrator, you've got software abstractions with data, like you guys do. So, it's complicated, so the question is, for you, is what are customers doing now? What's the answer to all this? >> The answer really comes down to you need to scale to the scope of the problem, it's a thousand-x increase, you're going from terabytes to petabytes, right?
And so, you need an AI, an ML, an intelligent solution that can discover all of this information, but it can also map it to John Furrier, this is where John Furrier's information is, it's in the human capital management system, the CRM system. Organizations, you know, may start knowing where sensitive data is, but they don't know who it belongs to, so when you go to invoke your right to be forgotten, or portability, today, what we're enabling organizations with is, hey, we'll help you discover the sensitive information, but we'll also tell you who it belongs to, so that when John shows up, or Rebecca, you show up, you just have to punch in their name and we'll tell you all the systems that it's in. That is something that requires teams of database administrators, lawyers, system administrators; that needs to be automated, to truly realize the potential of these privacy regulations, while enabling organizations to continue to innovate and disrupt with data. >> What's your take on whether or not consumers truly understand the scope of these privacy regulations, I mean talking about GDPR and you get the pop-ups that say do you consent, and you just say yes, I just need to get to this site, and so you blithely just press yes, yes, yes, so you are technically giving your consent, but do you, I mean what's your take, do consumers truly understand what they're doing here? >> You know, I think historically, we've all said yes, yes, yes. Over the last, I would say, two years, with growing regulations and significant breaches, there is a change in customer expectations. You know, there's a stat out there that in the event of a data breach, two-thirds of consumers of a particular organization blame the organization for the breach, not the hackers, right, so it's a mindshift in all of us, where you're the custodian of my data, I'm counting on you, whatever organization I'm transacting with, to ensure and preserve my privacy, ensure my data's protected.
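The lookup Ghai describes, punch in a name and get back every system that holds that person's data, amounts to an inverted index from data subject to systems. The sketch below is a toy illustration of that idea; the system and field names are invented, and this is not Informatica's implementation.

```python
from collections import defaultdict

class SensitiveDataIndex:
    """Toy inverted index from data subject -> systems holding their records.

    A stand-in for the discovery capability described in the interview;
    system and field names here are hypothetical.
    """

    def __init__(self):
        self._subject_to_systems = defaultdict(set)

    def register(self, subject, system, field):
        # Called by a (hypothetical) scanner each time it classifies a
        # sensitive field as belonging to an identified person.
        self._subject_to_systems[subject].add((system, field))

    def systems_for(self, subject):
        # Answer a data-subject access request: every system/field pair
        # where this person's data was found.
        return sorted(self._subject_to_systems[subject])

idx = SensitiveDataIndex()
idx.register("john.furrier", "crm", "email")
idx.register("john.furrier", "hcm", "ssn")
idx.register("rebecca.knight", "crm", "email")

print(idx.systems_for("john.furrier"))  # [('crm', 'email'), ('hcm', 'ssn')]
```

In practice the index would be populated by automated scanners and classifiers rather than by hand, which is the AI/ML discovery step the interview keeps returning to.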
So, that's a big shift that's happened, so whether you're doing it for regulatory reasons, CCPA in North America, there's several other state-wide regulations coming out, or GDPR, the consumer expectation, forget regulations, it's brand preservation, it's customer trust, it's customer experience, that organizations are really having to solve for from a privacy standpoint. >> Tell us about the news yesterday around the shift of the trust pieces, because that's a huge deal. Because trust is shifting, expectations are shifting, so when you have shifting expectations with users and buyers, customers, the experience has to shift. So, take us through, what are the new things? >> Well, the new things are, you know, you look at, we're enabling organizations to be data-driven, we're enabling organizations to transform, build new products, new services, be more efficient, and for that, you need to enable them to get access to data. The counter, the tension on the other end, is how do we get them broad-based access while ensuring privacy, right, and that's the balance. How do we enable them to be customer-centric and optimal in engaging with their customers while preserving the privacy of their customers, and that really comes down to having a detailed understanding of what your critical data is, where it is in the organization, and how an organization is using that data. Enabling an organization to know that they're processing data with the appropriate consent. >> What's interesting to me, when I was with press yesterday, is also the addition of how the cloud players are coming onboard, because you know, one constituent that's not mentioned in that statement, that you guys are kind of keeping an eye on, that are impacted by this, is developers, because you know developers like infrastructure as code with DevOps. They don't want to be provisioning networks and storage, they just write to the APIs.
Data is kind of going through that similar experience where, if I'm a developer doing an IoT app, I'm just going to use the cloud. I put the data there, I don't need to have a mismatch of mechanisms to deal with some governance compliance rules. >> Correct, and that's why it needs to be built-in by design. And you know there's this connotation that- >> Explain that, what does built-in by design mean? >> Well, you need to have privacy built into how you as a business operate, how you as a DevOps team or development team build products. If that's built in to how you operate, you enable the innovation without falling into the pitfalls of, oh you know what, we broke some privacy regulations there, we breached our customers' trust there, we used data or engaged with them in a manner that they weren't comfortable with. >> So, don't retrofit after the fact? Think holistically on the front-end of the transformation in architecture. >> It's an enabler, in that if you do it right to begin with, you can continue to innovate and engage effectively, versus bolting it on as an afterthought and retrofitting. >> It really seems like it is this evolution in thinking from this risk and compliance, overdoing this to check all the boxes, versus here are our constraints, but our constraints are actually liberating, is what you're saying. >> Right, but you can't democratize data without giving the consumers of that data an understanding of the quality of that data, the trustworthiness of that data, the relevance of the data to the business. You give them that, and now you're enabling your analytics, your data scientists, your analytics organizations to innovate with that data with confidence, and if you do it within a framework of privacy, you're ensuring that you're preserving customer trust while you're automating and building intelligent and engaging customer experiences.
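The understanding of quality, trustworthiness, and business relevance that Ghai says must travel with democratized data is essentially catalog metadata attached to each dataset. A minimal sketch of such a record follows; the fields and values are illustrative assumptions, not Informatica's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """Minimal governed-catalog record: enough context for an analyst to
    trust a dataset before consuming it. All fields are hypothetical."""
    name: str
    source_systems: list          # lineage: where the data came from
    quality_score: float          # 0.0-1.0, output of profiling rules
    business_owner: str           # who certifies how the business uses it
    certified: bool = False
    tags: list = field(default_factory=list)

entry = CatalogEntry(
    name="customer_master",
    source_systems=["crm", "billing"],
    quality_score=0.97,
    business_owner="finance",
    certified=True,
    tags=["pii", "critical-data-element"],
)

# An analyst (or an access-control policy) can gate consumption on context:
print(entry.certified and entry.quality_score > 0.9)  # True
```

The design point is that discovery, lineage, and quality live with the data rather than in a separate compliance silo, which is what makes self-service consumption safe.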
>> What I love about the data business right now is it's exciting because it's real, specific examples of impact, security, you know, national security, to hackers, to just general security, privacy laws. But I've seen the development angle's interesting too, so when you've got these two things moving, customers can't ignore this. It's not like back-up and recovery, where the same kind of ethos is there, you don't want to think about it after the fact, you want to build it in, you know, there's certainly reasons why you do that, in case there's a disaster, but data is highly impactful all the time. This is a challenge, and you guys can pull this off. >> Well you know, with privacy, it's no longer about a few systems, it's all your data, and so the scope is the challenge, and the scale applies for privacy, the scale applies for making data available enterprise-wide, and that's where, you know, we spoke about AI needs data, well, data also needs AI. And that's where we're leveraging AI and ML. Building out intelligence, to help organizations solve that problem and not do it manually.
What's the difference, what's the competitive angle? >> You know, the way we're thinking the problem is founded on governance is an enabler it's not about locking things down for risk and compliance, because you know, the regulators want to know that this particular warehouse is highly tightly controlled, it's about getting the data out there, it's about enabling end-users to have a contextual understanding when you're doing that for all of your data, within around, that's a thousand X-increase in the data, it's a thousand X-increase in your constituents, you're not supporting, the risk and compliance portions of the organization, you're supporting marketing, you're supporting sales, you're supporting business operations, supply chain, customer-onboarding and so with the problem of scale, practices of the past, which were typically manual laborious, but hey at the risk of non-compliance, we just had to deal with them, don't practically in any way scale, to the requirements of the future which is a thousand X-increase in consumers and that's where intelligence and AI and ML come in. >> The question I have for you is, where should customers store their data? Is there an answer to that on premises or in the cloud? What are they doing? >> The answer is yes, (Knight laughs) the customer should store their data, what we see, the world is going to be hybrid, mainframes are still here, on-premise will still be here many years from now. >> So you're taking the middle of the road here, so >> There's Switzerland. >> You're saying whatever they want on-premise or cloud, is there a preference you see with customers? 
>> Well, you know, it depends on the applications, depends on regulations. Historically, regulations, especially in financial services, have mandated a more on-premise stance, but those regulations are also evolving, and so we see, the global investment banks all of a sudden, we're having all sorts of conversations about enabling them to move select portions of their data estate to the cloud, enabling them to be more agile, so the answer is yes, and it will be for a very long time to come. >> Final question, one of the most pressing problems in the technology industry is the skills gap. I want to hear your thoughts on it. As a Senior Executive at Informatica, how worried are you about finding qualified candidates for your open roles? >> You know, it is a challenge. The good news is, we're a global organization, my teams are globally distributed. I have teams in Europe, North America and Asia, and the good part about that is if you can't find it in the valley, you can certainly find the talent elsewhere, and so while it is a challenge, we're able to find talented engineers, software developers, data scientists, to help us innovate and build the intelligence capabilities to solve the productivity challenges, the scale challenges, of data consumption. >> Jitesh, talk about the skills required for people coming out of school. Take your Informatica hat off, put your expertise hat on, data guru hat. Knowing that data is going to continue to grow, continue to have more impact across the board, from coding to societal effects, whatever, what are some of the key skills in training, classes or courses or areas of expertise that people can dial up or dig into that might be beneficial to them, that may or may not be part of school curriculum?
We're on various boards at various universities, advisory standpoint, big data standpoint and what we're seeing is as we engage with these organizations, we're able to feed back on where the market is going, what the requirements are, the nature of data science, the enabling technologies such as platforms like Spark, languages like Python and so we're working with these schools to share our perspectives, they in turn, are incorporating this into their curriculums and how they train future data scientists. >> When you see a young gun out there that's kicking butt and taking names and data, what are some of the backgrounds? Is it math, is it philosophy, is there a certain kind of pattern that you've seen as the makeup of just the killer data person? >> You know, it's interesting, you mention philosophy, I'm a big, I've hired many philosophy majors that have been some of the best architects, having said that, from a data science perspective, it's all about stats, it's all about math and while that's an important skillset to have, we're also focused on making their lives easier, they're spending 70% of their time, doing data engineering versus data science and so while they are being educated from a stats, from a data science foundation, when they come into the industry, they end up spend 70% of their time doing data engineering, that's where we're helping them as well. >> So study your Socrates and study your stats. >> I like that. (Knight and Furrier laugh) >> Jitesh, thank you so much for coming on theCUBE. >> My pleasure, happy to be here, thank you. >> I'm Rebecca Knight for John Furrier, you are watching theCUBE.

Published Date : May 22 2019

SUMMARY :

Jitesh Ghai of Informatica explains how data governance has evolved from a risk-and-compliance exercise into the foundation for democratizing data: as organizations move from terabytes to petabytes and from data warehouses to data lakes, data scientists and analysts need the same confidence in data quality and lineage that regulators demand. Privacy regulations like GDPR and CCPA, he argues, reflect shifting consumer expectations, so privacy has to be built in by design rather than retrofitted, and discovering and mapping sensitive data across an entire enterprise requires AI and ML rather than manual effort. The conversation also covers Informatica's hybrid stance on where data should live, and the skills, from statistics to data engineering to philosophy, that make for strong data scientists.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rebecca Knight | PERSON | 0.99+
Europe | LOCATION | 0.99+
Jitesh | PERSON | 0.99+
John Furrier | PERSON | 0.99+
70% | QUANTITY | 0.99+
five | QUANTITY | 0.99+
India | LOCATION | 0.99+
last year | DATE | 0.99+
Jitesh Ghai | PERSON | 0.99+
Informatica | ORGANIZATION | 0.99+
John | PERSON | 0.99+
North America | LOCATION | 0.99+
Rebecca | PERSON | 0.99+
25 years | QUANTITY | 0.99+
seven systems | QUANTITY | 0.99+
Asia | LOCATION | 0.99+
Python | TITLE | 0.99+
two years | QUANTITY | 0.99+
yesterday | DATE | 0.99+
GDPR | TITLE | 0.99+
Word | TITLE | 0.99+
California Privacy Act | TITLE | 0.99+
Las Vegas | LOCATION | 0.99+
two-thirds | QUANTITY | 0.99+
XYZ | ORGANIZATION | 0.99+
Switzerland | LOCATION | 0.99+
Spark | TITLE | 0.99+
this year | DATE | 0.98+
today | DATE | 0.97+
2019 | DATE | 0.97+
One | QUANTITY | 0.97+
CCPA North America | ORGANIZATION | 0.97+
one | QUANTITY | 0.96+
DevOps | TITLE | 0.96+
two things | QUANTITY | 0.96+
Powerpoints | TITLE | 0.95+
Furrier | PERSON | 0.95+
Knight | PERSON | 0.94+
couple customers | QUANTITY | 0.87+
one year-anniversary | QUANTITY | 0.86+
Informatica World 2019 | EVENT | 0.84+
Informatica World | ORGANIZATION | 0.82+
theCUBE | ORGANIZATION | 0.79+
a thousand X | QUANTITY | 0.78+
one constituent | QUANTITY | 0.77+
terabytes | QUANTITY | 0.74+
a thousand X | QUANTITY | 0.74+
Socrates | PERSON | 0.73+
petabytes | QUANTITY | 0.73+
Informatica World | EVENT | 0.7+
thousand | QUANTITY | 0.67+
BCBS 239 | ORGANIZATION | 0.65+
SAS | ORGANIZATION | 0.62+
PDFs | TITLE | 0.58+
SAS | TITLE | 0.53+
CCAR | ORGANIZATION | 0.5+
Dabos | LOCATION | 0.48+

David Comroe, The Wharton School of the University of Pennsylvania | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> And welcome back to Las Vegas, as theCUBE continues our coverage here of Dell Technologies World 2018. So glad to have you along here for our Day Three coverage. Along with Stu Miniman, I'm John Walls. It's now a pleasure to welcome David Comroe with us. David is the Senior Director of Client Technology Services at the Wharton School of Business, at the University of Pennsylvania. David, thanks for being with us. >> No problem. Glad to be here. >> Thanks for sharing your time with us. First off, let's just talk about the scope of your work. Again, you take care of all the IT needs, obviously, for the largest business school faculty in the world. Right? No pressure on you there. But talk about day to day, those responsibilities. >> As you mentioned, my title is Senior Director for Client Technology Services. I'm essentially responsible for providing the support and services to four very distinct user groups that we happen to have at a university. That's of course our wonderful faculty, our staff that make everything happen, our incredible students, and of course our alumni group, which is about 100,000 people strong at this point. Just Wharton alums that are, again, very important. Give back to the school. Provide mentorship and job opportunities for our graduates. Again, very distinct needs for each of those four groups. We provide high quality, and all the buzzwords, you know, secure, safe, efficient, highly available services to these groups. That's kind of what I do all day. >> One of the cool things, I love acronyms. Not that this industry doesn't have a few, as you know Stu. But WHOOPPEE. I absolutely love making whoopie. But not what you might think. But walk us through that and what it stands for, and what you did with it. It really was groundbreaking. >> You're putting me on the spot with this one.
So WHOOPPEE is the Wharton, let's see if I can get this, Online Ordinal Peer Performance Evaluation Engine. One of our incredible faculty, Pete Fader, came up with this idea. It's no secret that grading is kind of bad. Faculty grading students. There's all kinds of challenges. >> It's tedious. >> Well, it's tedious. There's inherent biases, and the larger the class, the worse it gets. When you have to grade 80 papers, or 100 papers or 200 papers, it's really hard to keep consistency when you're grading paper one through paper 100 through paper 200. Plus when you start divvying up the work between TAs and different faculty teaching the same class. Again, fraught with bias. A number of people, again Pete Fader's idea, came up with basically an algorithm that helps the grading process. And basically what happens is, students are grading themselves. What we'll do is we'll give them five papers or five projects to grade. And they don't actually grade. All they have to do is rank it. You know, this is the best one. This is number one. This is the worst one. This is number five. And then there's this magic behind the scenes, that runs in our local infrastructure, in our cloud infrastructure. That basically runs an algorithm. And that algorithm is the secret sauce that some of our statistical geniuses at the Wharton school, of which we have many, came up with. And it has all kinds of cool features. You can say, well, this batch of five papers might be harder. I might have the five best papers in the class. That's not fair. They still have to rank one the worst. You know, five. You can't say these two are the best. And this one's third. The students actually have to read the papers, and just rank them. I like this one the best. I like second, third, fourth, fifth. The algorithm takes into account difficulty of batches of papers. You could literally have the five best or the five worst papers in the class.
And that's still going to provide meaningful data to the algorithm. So when you have 50, 100, 500 batches of five, they all start to figure it out. And the algorithm will actually figure out what the best paper is in the class. And what the, maybe, again at Wharton, not-so-great paper in the class is. >> But not the worst. Just not so great. Again, 'cause our students are brilliant. It basically goes on the fact that if you do a quality paper, if the algorithm says you're the best, your weight means more than someone who might not have done such a good job on the paper. And you're considered a better grader. And it's weighted towards the better graders. There's all kinds of really cool stuff in there that we think is going to change... Get rid of some of that bias that I spoke about before. And the data we've seen is, frankly, the students like doing it. They don't like the additional work involved with it. We're seeing some empirical evidence, and some in-person interviews, that they're learning more. They're reading five other students' papers. They're getting five other perspectives. They're saying, hey, I didn't think about that. Or even, hey, they're all wrong here. My paper was much better than theirs. But again, that doesn't necessarily matter when we start running the ranks. And we're getting much better, much better grading, which is hard to quantify, but the folks that are on the academic team that are doing that have some really great data. With the data. Yup, mm-hm. >> David, one of the themes we keep hearing in this show is about transformation. Is change happening? You're talking about IT, how it's working with the business more and more. Bring us inside university life in general and specifically. You work with one of the ancient eight. How does cutting edge technology fit in with - >> That's really interesting. I do have a couple thoughts on that.
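The scheme Comroe describes, many graders each ordering a small batch while the algorithm combines those orderings and weights trusted graders more heavily, resembles weighted rank aggregation. The sketch below is a simplified Borda-style aggregation, not Wharton's actual (and more sophisticated) algorithm; the paper IDs and grader weights are invented, and WHOOPPEE estimates grader quality iteratively rather than taking it as input.

```python
from collections import defaultdict

def aggregate_rankings(batches, grader_weight=None):
    """Combine per-grader orderings of small batches into global scores.

    batches: list of (grader, ranked_papers), where ranked_papers is
             ordered best-first (e.g. a batch of five).
    grader_weight: optional dict giving more influence to graders the
             model trusts more; here it is simply supplied.
    Returns papers sorted best-first by weighted Borda score.
    """
    grader_weight = grader_weight or {}
    scores = defaultdict(float)
    for grader, ranked in batches:
        w = grader_weight.get(grader, 1.0)
        n = len(ranked)
        for position, paper in enumerate(ranked):
            # Best paper in a batch of n earns n-1 points, worst earns 0.
            scores[paper] += w * (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

batches = [
    ("amy",  ["p3", "p1", "p4", "p2", "p5"]),
    ("ben",  ["p1", "p3", "p2", "p5", "p4"]),
    ("cara", ["p3", "p2", "p1", "p4", "p5"]),
]
print(aggregate_rankings(batches, grader_weight={"amy": 1.5}))
# ['p3', 'p1', 'p2', 'p4', 'p5']
```

The batch-difficulty correction mentioned in the interview would show up here as overlapping batches: because different graders see different mixes of papers, a strong paper stuck in a hard batch still accumulates points from the other batches it appears in.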
My boss has a picture in his office of a Penn classroom from, I think it's like 1908 or 1910. And there's literally a bunch of students sitting around. There's a faculty member standing up. And there's a candle-powered projector, which I didn't know was a thing, but it's in the picture, projecting an image onto the wall. From over 100 years ago. What's different about our classrooms today? Everything's the same, except the projector's now an LED or 3D projector. We still got people sitting around the room, standing up. I think what we're seeing now, in probably the previous ten years and the next ten years, is education's probably going to change more in those 20 years than it has in 2,000 years, since Socrates was standing around with a stone tablet or whatever they were doing. Things like globalization, online courses, the MOOC space, where Wharton is huge in the MOOC space. Wharton online programs. Where students can take, not even students, anybody! If you're in China or Africa or South America, you can take an introduction to Wharton, introduction to marketing class from a Wharton professor for free. I mean, we're a business school. We sell some of that content as well. But you can get verified certificates. We're seeing a lot of stuff change. The students today expect more. We can get into, we won't though, we can get into the whole millennial issue and short attention span and all that kind of stuff. Students today expect their faculty to be technology savvy. They expect content to be online. They expect to use devices. They expect to use... We got tablets, and laptops and phones. They want to be able to consume this content on multiple devices. We're seeing significant transformations in education. Which hasn't necessarily changed much in 2,000 years. Or even 200 years, right? So there's that. Speaking specifically about Wharton, one of the things I really thought is interesting, is I've been there 13 years now.
When I first started working there, I'm going to make some generalizations here, a lot of our students wanted to go work in iBanking. They wanted to go work for the big banks. They wanted to go work for Goldman Sachs and things like that. Then, in the last five, seven, ten years, they wanted to create their own company. Start up their own company. Be entrepreneurial. Have their app. Have their big idea. Start the next whatever dot com. And be successful that way. Now in the last two or three, four years, we're seeing a lot of our students go into analytics. We're putting analytics with everything. Companies, businesses, organizations, no matter what you are, we have huge amounts of data available. How can we make meaningful decisions based on that data? Our dean, I guess I can't call him our new dean, he's been there three or four years at this point, really wants to position Wharton as the analytics school. Every company in the world is trying to hire these kinds of people. There just frankly aren't enough of them out there. The thing we're trying to teach our students, or one of the many things, is how to analyze data. How to make meaningful decisions based on that data. And of course when you have more data, you need more storage. You need more infrastructure. You need more processing. All the stuff that, you know, Dell and Nutanix are providing us, with their hyperconverged infrastructure. Their cloud offerings. Whether private cloud, public cloud, hybrid cloud. All that kind of stuff is... Positioning us as the analytics school requires a significant amount of technology on the backend. And again, working with our trusted partners like Dell and Nutanix, we can provide that seamlessly on the backend. They don't necessarily know, is it in our data center? Is it in the cloud? And they don't care. They shouldn't care.
But as they're collecting huge amounts of data, running these reports, and going back to creating these algorithms that do incredible things. And these secret sauces. We need the infrastructure to run that kind of stuff. That's, I think, one of the greatest things that Wharton Computing provides the Wharton School of Business, and their business, which is creating and disseminating knowledge. >> David, I think you've encapsulated something that I've been hearing from lots of users over the last year or so. The vendors sometimes, it's private, it's hybrid, it's public. From the user standpoint it's like, no, well, we have a cloud strategy that we're working on. Can you bring us inside a little bit? How did you get to where you are today? How do you choose who you're partnering with? What leads to some of those decisions? >> I love the word partner. I hate the word vendor. One of the great things about working at Wharton is we get to have these awesome partners. I want someone... When we're going to make an IT spend, we want someone who cares about our business. We don't want somebody who'll just come in, give you a dog and pony show, write us a check, and when you want more stuff, call us. We want folks that are going to provide the support. You know, pre-sales, during installation. Post-sales, when they're coming out with new features. We want them to be invested in what we do. I can truly say that Nutanix is a fantastic partner of ours. Dell-Nutanix are great partners. Dell is a great partner of Wharton and Penn as well. That's what we really look for, is someone who is willing to invest their time, their smart people. Tell us about the new features and functionality that are coming out. Call on us and say, hey, how are things going? It's not just the little things. But those little things really mean a lot to us as we're picking an IT partner. Because when you're working for the best business school in the world.
Having the best students, the brightest faculty, the best, hardest working staff. We want to provide them very, very high quality IT support. We need high quality partners, and not just vendors who care about the transaction. That's really the bottom line for us when we're choosing our partners. >> When you were talking about analytics, and Wharton being the school of data analytics. What are your measuring sticks? In terms of what are you looking at? You're talking about four very separate groups of constituencies. What are you doing to evaluate your performance? And what's critical? >> I think it all comes down to, what do our business units think about us? We're a service organization. Almost all IT shops are. If the business units aren't successful, they don't need an IT department. If we're not providing them high quality IT services, we're not going to get the best faculty. We're not going to get the brightest students. We're not going to get the alumni engagement. They want to be wowed by their IT support. That's a big part of my job, is providing that quality of support. Helping train. Technology breaks, right? How do you deal with the problem? Nobody runs a rock solid 100% infrastructure. Murphy's Law always comes into play. Problems always happen. How do you deal with the cracks in the armor as they come up? I think that's what our business units want. I think we're fortunate at Wharton Computing. Our team, our staff, our CIO. My colleagues, my peers, my team. Our team, right? They're very well thought of, hopefully, by our clients. And that's how we're measured, by their success. We want to help them, empower them to do their job at the highest level. We are playing in pretty rare air, when it comes to the faculty, staff, students and alumni that we attract to Penn and Wharton. We want to keep doing that. One of the things I love best, and I tell our wonderful faculty when we meet with them, is don't tell me we did a great job.
Here's what I want you to tell me. I want you to say, three years ago I was at, I'm not going to name drop schools, but I was at this school and I asked them to do this thing, that you said, sure, no problem to. And they couldn't do it, wouldn't do it, didn't have the ability, the infrastructure in place to do that. But you guys, with a smile on your face, just made it happen. Stuff like WHOOPPEE. Stuff like the analytics stuff. All of it, tying it back to why we're here today, it's our partners and our technology partners that help us provide scalable, flexible solutions. That's how we're measured. >> Higher learning. >> Higher learning, absolutely. >> David, thanks for being with us. >> No problem, it was great. >> David Comroe from the Wharton School of Business, University of Pennsylvania. Back with more live coverage here from Dell Technologies World 2018. Right after this break. You're watching theCUBE.

Published Date : May 2 2018


Duncan Angove, Infor - Inforum 2017 - #Inforum2017 - #theCUBE


 

>> Announcer: Live from the Javits Center in New York City, it's theCUBE. Covering Inforum 2017. Brought to you by Infor. >> Welcome back to Inforum 2017 everybody. This is theCUBE, the leader in live tech coverage. Duncan Angove is here, the President of Infor and a Cube alum. Good to see you again Duncan. >> Hey, afternoon guys. >> So it's all coming together right? When we first met you guys down in New Orleans, we were sort of unpacking, trying to squint through what the strategy is. Now we call it the layer cake, we were talking about it off camera, and it's really starting to be cohesive. But set up sort of what's been going on at Infor. How are you feeling? What's the vibe like? >> Yeah it's been an amazing journey over the last six years. And, um, you know, all the investments we put in products, as you know, we said to you guys way back then, we've always put products at the center. Our belief is that if you put innovation and dramatic amounts of investment in the core product, everything else ends up taking care of itself. And we put our money where our mouth was. You know, we're a private company, so we can be fairly aggressive on the level of investment we put into R&D and it's increased double digit every single year. And I think the results you've seen over the last two years, in terms of our financials, is that, you know, the market's voting in a way that we're growing double digits, dramatically faster than our peers. So that feels pretty good. >> So Jim is, I know, dying to get into the AI piece, but let's work our way up that sort of strategy layer cake with an individual who had a lot to do with that. So you know, you guys started with the decision of Micro-verticals and you know the interesting thing to us is you're starting to see some of the big SI's join in. And I always joke that they love to eat at the trough. But you took a lot of the food away by doing that last mile. >> Yeah. >> But now you're seeing them come in, why is that?
You know I think the whole industry is evolving. And the roles and the value that different companies in that ecosystem play, whether it's an enterprise software vendor or it's a systems integrator. Everything's changing. I mean, The Cloud was a big part of that. That took away tasks that you would sometimes see a systems integrator doing. As larger companies started to build more completely integrated suites, that took away the notion that you need a systems integrator to plug all those pieces together. And then the last piece for us was that all of the modifications that were done to those suites of software to cover off gaps in industry functionality or gaps in localizations for a country, should be done inside the software. And you can only do that if you have a deep focus, by industry, on going super, super deep at a rapid rate on covering off what we call these last-mile features. So that means that the role of the systems integrators shifted. I mean they've obviously pivoted more recently into a digital realm. They've all acquired digital agencies. And having to adapt to this world where you have these suites of software that run in The Cloud that don't need as much integration or as much customization. So we were there you know five, six years ago. They weren't quite there. It was still part of this symbiotic relationship with other large vendors. And I think now, you know, the reason for the first time we've got guys like Accenture, and Deloitte, and Capgemini, and Grant Thornton here, is that they see that. And their business model's evolved. And you know those guys obviously like to be where they can win business and like to build practices around companies they see winning business. So the results we've seen and the growth we've seen over the last two to three years, obviously that's something they want a piece of. So I think it's going to work out.
Alright so Jim, you're going to have to bear with me a second 'cause I want to keep going up the stack. So the second big milestone decision was AWS. >> Duncan: Yeah. >> And we all understand the benefits of AWS. But there are two sides to that coin and one is, when you show your architectural diagram, there's a lot of AWS in there. There's S3, there's DynamoDB, I think I saw Kinesis in there. I'm sure there's some EC2 and other things. And it just allows you to focus on what you do best. At the same time, you're getting an increasingly complex data pipeline, and ensuring end-to-end performance has to be, technically, a real challenge for you. So, I wanted to ask you about that and see if you could comment on how you're managing that. >> Yeah so, I mean obviously, we were one of the first guys to actually go all in on Amazon as a Cloud delivery platform. And obviously others now have followed. But we're still one of their top five ISV's on there. The only company that Amazon reps actually get compensated on. And it's a two way relationship right? We're not just using them as a Cloud delivery partner. We're also using some of their components. You know you talked about some of their data storage components. We also leverage them for AI, which we'll get into in a second. But it's a two way relationship. You know, they run our asset management facility for all of their data centers globally. We do all the design and manufacturing of their drones and robots. We're partnered with them on the logistics side. So it's a deep two way relationship. But to get to your question on just sort of the volume and the integration. We work in integrations with staggering volumes right? I mean, retail, you're dealing with billions and billions of data points. And we'll probably get into that in a second you know. The whole asset management space is one of the fastest growing applications we have. Driven by the dynamics of IoT and the explosion in device data and all of that.
So for a very, very long time we've had to figure out an efficient way to move large amounts of data that can be highly chatty. And do it in an efficient way. And sometimes it's less about the pipes and moving it around, it's how you ingest that data into the right technology from a data storage perspective. Ingest it and then turn it into insights that can power analytics or feed back into our applications to drive execution. Whether it's us predicting a maintenance failure on a pump and then feeding that back into asset management to create a work order and schedule an engineer on it. Right? >> That's not a trivial calculus. Okay, now we're starting to get into Jim's wheelhouse, which is, you call it, I think you call it the "Age of Network Intelligence". And that's the GT Nexus acquisition. >> Yeah. >> To us it's all about the data. I think you said 18 years of transaction history there. So, talk about that layer and then we'll really get into the data, the Birst piece, and then of course the AI. >> Yeah, so there were two parts to why we called it "The Age of Network Intelligence". And it's not often that technology or an idea comes along in human history that actually bends the curve of progress right? And I think that we said it on stage, the steam engine was one of those and it led to the combustion engine, it led to electricity and it led to the internet and the mobile phone and it all kind of went. Of course it was invented by a British man, an Englishman you know? That doesn't happen very often right? Where it does that. And our belief is that the rise of networks, coupled with the rise of artificial intelligence, those two things together will have the same impact on society and mankind. And it's bigger than Infor and bigger than enterprise software, it's going to change everything. And it's not going to do it in a linear way. It's going to be exponential.
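The pump example above, sensor readings in, a failure prediction out, and a work order fed back into asset management, can be sketched as a small feedback loop. Everything here (the names, fields, thresholds, and toy risk score) is a hypothetical illustration, not an Infor or AWS API:

```python
# Hypothetical sketch of the predict-then-execute loop described above:
# ingest chatty device readings, score failure risk, and feed high-risk
# assets back into asset management as work orders. The risk function is
# a stand-in for a trained model; none of these names are real Infor APIs.
from dataclasses import dataclass
from typing import List


@dataclass
class SensorReading:
    asset_id: str
    vibration: float    # e.g. mm/s RMS from an IoT gateway
    temperature: float  # e.g. degrees C


@dataclass
class WorkOrder:
    asset_id: str
    reason: str


def failure_risk(r: SensorReading) -> float:
    """Toy risk score standing in for a learned model."""
    risk = 0.0
    if r.vibration > 8.0:
        risk += 0.6
    if r.temperature > 90.0:
        risk += 0.4
    return risk


def to_work_orders(readings: List[SensorReading], threshold: float = 0.5) -> List[WorkOrder]:
    """Turn high-risk readings into work orders an engineer can be scheduled on."""
    return [
        WorkOrder(r.asset_id, f"predicted failure, risk={failure_risk(r):.1f}")
        for r in readings
        if failure_risk(r) >= threshold
    ]


orders = to_work_orders([
    SensorReading("pump-17", vibration=9.2, temperature=95.0),
    SensorReading("pump-18", vibration=2.1, temperature=60.0),
])
print([o.asset_id for o in orders])  # → ['pump-17']
```

The point of the sketch is the shape of the loop: the prediction side stays separate from the execution side, and the only contract between them is the work order.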
So the network part of that for us, from an Infor perspective was, yes it was about the commerce network, which was GT Nexus, and the belief that almost every process you have inside an enterprise at some point has to leave the enterprise. You have to work with someone else, a supplier or a customer. But ERP's in general were designed to automate everything inside the four walls. So our belief was that you should extend that and encompass an entire network. And that's obviously what the GT Nexus guys spent 18 years building, this idea of a logistics network and a network where you can actually conduct trade and commerce. They do over 500 billion dollars a year on that network. And we believe, and we've announced this as network CloudSuites, that those two worlds will blur. Right? That ultimately, CloudSuites will run completely natively on the network. And that gives you some very, very interesting information models, and the parallel we always give is like a LinkedIn or a Facebook. On LinkedIn, there's one version of the application. Right? There's one information model where everyone's contact information is. Everyone's details about who they are is stored. It's not stored in all these disparate systems that need to be synchronized constantly. Right? It's all in one. And that's the power of GT Nexus and the commerce network, is that we have this one information model for the entire supply chain. And now, when you move the CloudSuite on top of that, it's like one plus one is five. It's a very, very powerful idea. >> Alright Jim, chime in here, because you and I were both excited about Birst when we dug into that a little bit. >> Yes. >> Quite impressed actually. Not lightweight vis, you know? It's not all sort of BI. >> Well the next generation of analytics, decision support analytics that infuse and inform and optimize transactions. In a distributed value chain.
And so Birst is a fairly strong team, you've got Brad Peters who was on the keynote yesterday, and of course did the pre-briefing for the analyst community the day before. I think it's really exciting, the Coleman strategy is really an ongoing initiative of course. First of all, on the competitive front, all of your top competitors in this very, I call it a war of attrition in ERP. SAP, Oracle and Microsoft have all made major ongoing investments in AI across their portfolios. With a specific focus on informing and infusing their respective ERP offerings. But what I perceive from what Infor's announced with the Coleman strategy, is that yours is far more comprehensive in terms of taking it across your entire portfolio, in a fairly accelerated fashion. I mean, you've already begun to incorporate, Coleman's already embedded in several of your vertical applications. First question I have for you Duncan, as I was looking through all the discussions around Coleman, when will this process be complete in terms of, "Colemanizing", is my term? "Colemanizing" the entire CloudSuite and of course network CloudSuite portfolio. That's a huge portfolio. And it's like, you got fresh funding, a lot of it, from Koch Industries. To what extent can, at what point in the next year or two, can most Infor customers have the confidence that their cloud applications are "Colemanized"? And then when will, if ever, Coleman AI technology be made available to those customers who are using your premises-based software packages? >> So yeah, we could spend a long time talking about this. The thing about Coleman and our AI and machine learning capabilities is that we've been at work on it for a while. And you know we created the dynamic science labs, our team of 65 Ph.D.'s based up at M.I.T., that we put together over three and a half, four years ago. And our differentiation versus all the other guys you mentioned is, two things, one, we bring a very application-centric view of it.
We're not trying to build a horizontal, generic, machine learning platform. In the same way that we- >> Yeah you're not IBM with Watson, all that stuff. >> Yeah, no, no. Or even Oracle. >> Jim: Understood. >> Or Microsoft. >> Jim: Nobody expects you to be. >> No, you know, and we've always been the guys that have worked with the Open Source community. Even when you look at like, we're the first guys to provide a completely open source stack underneath our technology with Postgres. We don't have a dog in the hunt like most of the other guys do. Right? So we tap in to the innovation that happens in the Open Source community. And when you look at all the real innovation that's happening in machine learning, it's happening in the Open Source community. >> Jim: Yes. >> It's not happening with the old legacy, you know, ERP guys. >> Jim: TensorFlow and Spark and all that stuff. >> Yeah, Google, Apple, the GAFA. >> Yeah. >> Right? Google, Apple, Facebook, those are the guys that are doing it. And the academic community is light years ahead on top of that of what these other guys will do. So that's what we tap into right? >> Are you tapping into partners like AWS? 'Cause they've obviously, >> Duncan: Absolutely >> got a huge portfolio of AI. >> Yeah, so we. >> Give us a sense whether you're going to be licensing or co-developing Coleman technologies with them going forward. >> Yeah so obviously we have NDA's with them, we're deeply inside their development organization in terms of working on things. You know, our science is obviously presented to them around ideas on where we think they need to go. I mean, we're a customer of their AI framework for machine learning and we're testing it at scale with specific use cases in industries, right? So we can give them a lot of insights around where it needs to go and problems we're trying to solve.
But we do that across a number of different organizations and we've got lots and lots of academic collaborations that happen around all of the best universities that are pushing on this. We've even received funding from DARPA in certain cases around things that we're trying to solve for. You know quietly we've made some machine-learning acquisitions over the last five, six years. That have obviously brought this capability into it. But the point is we're going to leverage the innovation that happens around these frameworks. And then our job, understanding the industries we're in and that we're an applications company, is to bring it to life in these applications in a seamless way, that solves a very specific problem in an industry, in a powerful and unique way. You know on stage I talked about this idea of bringing this AI first mindset to how we go about doing it. >> So it's important, if I can interject. This is very important. This is Infor IP, the serious R&D that's gone into this. It's innovation. 'Cause you know what your competitors are going to say. They're going to deposition and say, oh, it's Alexa on steroids. But it's not. It's substantial IP and really leveraging a lot of the open source technologies that are out there. >> Yeah. So you know, I talked about there were four components to Coleman, right? And the first part of it was, we can leverage machine-learning services to make the CloudSuites conversational. So they can chat, and talk, and see, and hear, and all of that. And yeah, some of those are going to use the technology that sits behind Alexa. And it's available in AWS's Alexa as you guys know. But that's only really a small part of what we're doing. There are some places where we are looking at using computer vision. For example, automated inspection of car rental returns is one area. We're using it for a quality management pilot at a company that normally has humans inspect something on a production line.
That kind of computer-vision, that's not Alexa, right? It's, you know, I gave the example of image recognition. Some of it can leverage AWS's framework there. But again, we're always going to look for the best platform and framework out there to solve the specific problem that we're trying to solve. But we don't do it just for the sake of it. We do it with a focus to begin with, with an industry. Like, where's a really big problem we can solve? Or where is there a process that happens inside an application today that, if you brought an AI first mindset to it, it's revolutionary. And we use this phrase, "the AI is the UI". And we've got some pretty good analogies there that can help bring it to life. >> And I like your approach for presenting your AI strategy, in terms of the value it delivers your customers, to business. You know, there's this specter out there in the culture that AI's going to automate everybody out of a job. Automation's very much a big part of your strategy but you expressed it well. Automating out those repetitive functions so that you can augment the productivity of human beings, free them up for more value-added activities, and then augment those capabilities through conversational chat bots. And so forth, and so on. Providing, you know, in-application, in-process, in-context decision support with recommendations and all that. I think that's the exact right way to pitch it. One of the things that we focus on and work on in terms of application development, disciplines that are totally fundamental to this new paradigm. Recommendation engines, recommender systems, inline in all applications. It's happening, I mean, Coleman, that really in many ways, Coleman will be the silent, well not so silent, but it'll be the recommendation engine embedded inside all of your offerings at some point. At least in terms of the strategy you laid out. >> Yeah, no, absolutely right I mean.
It's not just about, we all get hung up on machine-learning and deep learning 'cause it's the sexy part of AI, right? But there's a lot more. I mean, AI, you can go all the way back to Socrates and the father of logic right? I mean, some of the things you can do are just based on very complex rules and logic. And what used to be called process automation right? And then it extends all the way to deep learning and neural networks and so on. So one of the things that Coleman also does, is it unifies a lot of this technology. Things that you would normally do for prediction or optimization, and optimization normally is the province of operations research guys right? Which again is a completely different field. So it unifies all of that into one consistent platform that has all of that capability built into it. And then it exposes it in a consistent way through our API architecture. So same thing with bots. People always think chat bots are separate. Well that too is unified inside Coleman. So it's a cohesive platform but again, industry focused. >> What's your point of view on developers? And how do you approach the development community and what's your strategy there? >> Yeah, I mean, it's critical right? So we've always, I mean, hired an incredible number of application engineers every year. I think the first 12 months we were here, we hired 1800 right? 'Cause you know, that's kind of what we do. So we believe hugely in smarts. And it sounds kind of obvious, but experience can be learned, smarts is portable. And we have a lot of programs in place with universities. We call it the Education Alliance Program. And I think we have up to 32 different universities around the world where we're actually influencing curriculum, and actually bringing students right out of there. Using internships during the year and then actually bringing them into our development organization. So we've got a whole pipeline there.
I mean that's critical that we have access to those. >> And what about outside your four walls, or virtual walls as it were? Is there a strategy to specifically pursue external developers and open up a PaaS layer? >> Yeah we do. >> Or provide an SDK for Coleman for example, for developers. >> Yeah so we did, as part of our Infor Operating Service update. Which is, you know, the name for our unified technology platform. We did announce Mongoose platform as a service. Our Mongoose PaaS. >> Host: Oh Mongoose, sure. >> So that now is being delivered as a platform as a service for application development. And it's used in two ways. It's used for us to build new applications. It's a very mobile-first type development framework too. And obviously Hook and Loop had a huge influence in how that ships. The neat thing about it, is that it ships with plumbing into ION API, plumbing into our security layer. So customers will use it because it leverages our security model. It's easy to access everything else. But it's also used by our Hook and Loop digital team. So those guys are going off and they're building completely differentiated curated apps for customers. And again, they're using Mongoose. So I think between ION APIs and all the things you get in the Infor Operating Service, and Mongoose, we've got a pretty good story around extensibility and application development. As it relates to an SDK for Coleman, we're just working through that now. Again, our number one focus is to build those things into the applications. It's a feature. The way most companies have approached optimization and machine learning historically, is it's a discrete app that you have to license. And it's off to the side and you integrate it in. We don't think that's the right way of doing it. Machine-learning and artificial intelligence is a platform. It's an enabler. And it fuses and changes every part of the CloudSuite.
And we've got a great example of how you can rethink demand forecasting, demand planning. Regardless of the industry we serve, everyone has to predict demand, right? It's the basis for almost every other decision that happens in the enterprise. How much to make, how many nurses to put on staff, all of that. Every industry has that prediction of demand. And the thinking there really hasn't changed in 20, 30 years. It really hasn't. And some of that's just because of the constraints with technology. Storage, compute, all of that. Well with the access we have to elastic super-computing now and the advancements in machine-learning and AI, you can radically rethink all of that, and take what we call an "AI First" approach, which is what we've done with building our brand new demand prediction platform. So the example we gave is, you think about when early music players came along on the internet right? The focus was all around building a gorgeous experience for how to build a playlist. It was drag and drop, I could do it on a phone, I could share it with people and it showed pictures of the album art. But it was all around the usability of making that playlist better. Then guys like Spotify and Pandora came around and took an AI First approach to it. And the machine builds your playlist. There is no UI. The AI is the UI. And it can recommend music I never knew I would've liked. And the way it does that comes back to the data. Which is why I'm going to circle back to Infor here in a second. It breaks a song down into hundreds if not thousands of attributes about that song. Sometimes it's done by a human, sometimes it's even done by machine listening algorithms. Then you have something that crawls the web, finds music reviews online, and further augments it with more and more attributes. Then you layer on top of that user listening activity: thumbs up, thumbs down, play, pause, skip, share, purchase.
And you find, at that attribute level, the very lowest level, the true demand drivers of a song. And that's what's powering it right? Just like you see with Netflix for movies and so on. Imagine bringing that same thought process into how you predict demand for items that you've never promoted before. Never changed the price on before. Never put in this store before. Never seen before. >> The cold start problem, in building recommendation engines. >> Exactly right, so, that's what we mean by AI First. It's not about just taking traditional demand planning approaches and making it look sexier and putting it on an iPad right? Rethink it. >> Well it's been awesome to watch. We are out of time. >> Yeah, we're out of time. >> Been awesome to watch the evolution, >> We could go on and on with this yeah. >> of Infor as it's really becoming a data company. And we love having executives like you on. >> Yeah >> You know, super articulate. You got technical chops. Congratulations on the last six years. >> Thanks. >> The sort of quasi-exit you guys had. >> Great show, amazing turnout. >> And look forward to watching the next six to 10. So thanks very much for coming out. >> Brilliant, thank you guys. Alright thank you. >> Alright keep it right there everybody, we'll be back with our next guest, this is Inforum 2017 and this is theCUBE. We'll be right back. (digital music)
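The attribute-level idea in that last exchange, finding demand drivers at the attribute level and then scoring an item you have never sold, can be sketched as a toy least-squares model. This is purely an illustration of the approach being described, not Infor's demand platform; the attributes and numbers are invented:

```python
# Toy sketch of attribute-level demand drivers: fit weights on the
# attributes of items we have already sold, then predict demand for a
# brand-new item (the cold start case) from its attribute vector alone.
import numpy as np

# Rows are items already sold; columns are illustrative attributes
# (say: organic, premium, seasonal).
X = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
], dtype=float)
demand = np.array([120.0, 80.0, 150.0, 60.0])  # observed units sold

# Least-squares fit: which attributes actually drive demand?
weights, *_ = np.linalg.lstsq(X, demand, rcond=None)

# An item we have never sold, promoted, or priced before:
new_item = np.array([1.0, 1.0, 1.0])
predicted = float(new_item @ weights)
print(round(predicted, 1))  # → 205.0
```

Real systems learn far richer models over thousands of attributes plus behavioral signals, but the cold-start mechanics are the same: the model scores attributes rather than item IDs, so an item that has never been seen before still gets a forecast.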

Published Date : Jul 12 2017
