
Adrian Ionel, Mirantis | DockerCon 2021


 

>>Hello and welcome to theCUBE's coverage of DockerCon 2021. I'm John Furrier, host of theCUBE. Adrian Ionel is CEO, co-founder, and chairman of Mirantis, and a CUBE alumni. Adrian, great to see you. Thanks for coming on theCUBE for DockerCon coverage. Good to see you. >>John, nice to see you. >>So obviously open source innovation continues, and you guys are at the forefront of it. Great to see you. What's new at Mirantis? Give us the update on what's happening. >>Well, what's interesting is that we had one of the best years ever last year, and it's very much continuing into this year. It's pretty fantastic. We won about 160 new customers. Kubernetes is definitely on a tear; we see customers doing bigger and more exciting things, which is absolutely great to see. Lens is getting tremendous traction, and I think we have a five-fold increase in its user base within a year. So it's a lot of fun. Right now customers are definitely pushing the boundaries of what Kubernetes can do. They want to get to cloud native infrastructure, they want to get there faster, and they want to do big and exciting things, and we are so happy to be part of the ride. >>You guys are investing in brand new open source solutions for customers. Give us an update on why, and why do they matter for your customers? >>Well, let me unpack this a little bit; there are really two elements to it. One is why open source, and the other is what's new and what matters. Open source is not new, but it's being embraced more and more heavily by companies everywhere, because it's a very flexible, cost-efficient, and highly innovative way to consume innovation and to build software, and a lot of innovation these days is happening in open source communities, which is why it's super exciting for many, many users. Now, what's new with us? I think there are two really terrific things we've brought to market that are getting a lot of interest and attention from our customers and creating value. One is this idea of delivering Kubernetes, including the infrastructure underneath it, as a service for some of the largest use cases out there: very large enterprises who want to have a cloud experience on prem, just like they have it in public clouds. That is absolutely fantastic, and it's new and different and very exciting for customers. The second thing that's new and compelling is Lens, which is a Kubernetes IDE that has by now empowered close to 180,000 Kubernetes developers around the world and makes it much, much easier to take advantage of Kubernetes. You can think of it as an IDE and a debugger for anybody who is using Kubernetes on public clouds or on private infrastructure. It's getting tremendous traction and adoption. >>The interest in Kubernetes has been unbelievable. At KubeCon we saw Kubernetes almost become boring, in the sense that everyone's using it, and now it's enabling a lot more cloud native development. Why does Lens matter, and what's the benefit? That's a killer opportunity, because Kubernetes is actively being adopted and the general consensus is that it's delivering the value. >>Yeah, so let me unpack this in two aspects: why Kubernetes is important and why people are adopting it, and then how Lens adds value on top of it for people who want to use Kubernetes.
Kubernetes is tremendously important because it solves some very fundamental problems for developers and operators when building cloud native applications. These are problems that are essential to actually operating in production but really unpleasant for people to solve, like availability, scalability, and reusability of services. All of that comes with Kubernetes right out of the box, and developers no longer have to worry about it. At the same time, Kubernetes gives you a standard where you can build apps on public clouds and then move them on prem, or build them on prem and run them on public clouds, and anywhere in between. So it gives you a kind of universal cloud native standard that you as a developer can rely on, and that's extremely valuable. We all remember from the Java days, when Java came online, how much people valued this idea of write once, run anywhere, and that's exactly what Kubernetes does for you in the cloud native world. So it's extremely valuable for people. Now, how Lens adds value in this context is also very exciting. What happens when you build these applications on Kubernetes is that you have many, many services which interact with each other in fairly complex and sometimes unpredictable ways, and they also interact heavily with the infrastructure. You can imagine this jungle of many different cloud native services working together to build and run your app. How are you going to navigate and debug that as a developer as you build and optimize your code? What Lens does is give you a kind of real-time cockpit or console. Imagine you're a fighter pilot in a jet with all these instruments in front of you; it gives you fantastic real-time situational awareness, so you can very quickly figure out what you need to do, whether that's fixing a bug in your application, optimizing the performance of the code, making it more reliable, or fixing security issues. And it makes it extremely easy for developers to use. Kubernetes has traditionally been hard to use and complicated; Lens makes it super fast, easy, and a lot of fun. >>You know, that is really the great theme of this conference this year, and your point exactly: developer experience, making it simpler and easier, and it really hits the mark on productivity. That's been a key part. I think that's why people are so excited about Kubernetes; it's not like some other technologies that had all the setup requirements. Making things easier to get stood up and managed is huge. So congratulations, great call-out there, great insight. The next question I want to ask you is that you guys have coined the term software factory, and this kind of plays into it. If you have all the services, you can roll them up together with Lens and those tools, and it's going to be easier and more productive, which means more software. Open source is the software factory, too. What does that term mean, and how is it leveraged? >>Yeah, so here's what it means to us. Today, software is being produced by two groups working together. Certainly the core people are the developers; these are the people who create the core functionality, imagine how the software should be architected, and ultimately ship the code
and maintain the code. But developers today don't operate just by themselves. They have their sidekicks, their friends from platform engineering. Platform engineers are the people who help developers make some of the most important choices: which platform stacks they should use, which services they should use, how they should think about governance, how they should think about the cloud infrastructure they use, which open source libraries they should use, and how often they should refresh and support those libraries. So platform engineers create, if you want, the factory: the substrate and the automation that allow developers to be highly productive. The analogy I want to make is chip design. If you look at chip design today, you take advantage of a lot of software, a lot of tooling, and a lot of pre-packaged libraries to get your job done. You're not doing it by yourself, just wiring transistors or logic elements together; you do it using a massive amount of automation, software, and reusable tooling. That's what we aim to provide to customers, because what we discovered is that customers don't want to be in the business of building software factories, and they don't want to be in the business of building platform engineering teams if they can avoid it. They do it because they have no choice, but it's difficult, cumbersome, and expensive. It's a one-off, and it really doesn't create any unique business value, because the platform engineering for a bank is very similar to the platform engineering for, let's say, an oil and gas company or an insurance company. So we do it for them, turnkey, as a service, so they can focus on what matters for them. >>That's a great insight. I love that: platform engineering enabling software developers, because look at SaaS, throwing features together; being a feature developer is cool. In the old days of platforms it was the full-stack developer, and now you have this notion of platform as a service in a kind of new way. What's different, Adrian? You've seen these waves of innovation, certainly in open source; we've been covering your career for over a decade, with Mirantis and OpenStack and others. This idea of a platform that enables software: what's changed now about this new substrate you mentioned? What's different from the old platform model? >>That's a wonderful question. A couple of things are different. The first thing that's different is the openness: everything is based on open source frameworks, as opposed to platforms that are highly opinionated and lock you in. I think that's a very fundamental difference. If you look at the initial platform-as-a-service approaches, they were extremely opinionated and very rigid, and not always open source, or a combination of open source and proprietary. So that's one very big difference. The second very big difference, and it goes along with the first one, is the emphasis on multi-cloud and infrastructure independence, where a platform is not wedded to a particular stack, whether that's an AWS stack or an Azure stack or the EMR stack, but is truly a layer above that's completely open source centered. >>Yeah.
>>And the third thing that is different is the idea that it's not just the software. The software alone will not do the job; you need the software plus the content, the support, and the expertise. If you look at how platform engineering is done at a large company like Apple, for example, or Facebook, it's really always the combination of those three things: the automation framework, the software; the content, the open source libraries or any other libraries that you create; and then the expertise that glues it all together and is offered to developers so they can take advantage of this software factory. I think those are the major differences between where we are today and five or ten years ago. >>Thank you for unpacking that; I think that really captures the shift and the value. It brings up my next question for you, because if you take that to the next level, DevOps is also graduating to a whole other level. The future of DevOps and software engineering is more and more around Kubernetes, with tools like Lens and others managing the whole thing. What is the new role of DevOps? Obviously there's DevSecOps, but DevOps is changing. What's the future of DevOps, in your opinion? >>Well, I believe it's going to become more and more integrated, where ops becomes something like zero ops: fully automated, delivered entirely through software, so developers can focus entirely on creating and shipping code. I think that's the major change that's happening. The problem still to be solved 100% correctly is the challenge of the last mile: deploying that code on the infrastructure, making sure it's performing correctly to the SLAs, and optimizing everything. I also believe that while Kubernetes is very powerful, it at the same time offers a lot of room for complexity. There are many knobs and dials you can turn in these microservices-based architectures, and what we're discovering now is that this complexity exceeds the ability of an individual developer, or even a group of developers, to constantly optimize things. So I believe what we will see is AI and machine learning taking charge of optimizing a lot of the operating parameters around the applications that run on Kubernetes, to ensure those applications perform to the expectations of the SLAs. That might mean performing to a very high standard of security, or to very low latency in a certain geography, or to a very low cost structure that you can expect, and those things can change over time. So this challenge of operating an application in production on a Kubernetes substrate is, I think, dramatically harder than on just traditional cloud infrastructure or virtualization, because you have so many services interoperating with each other and so many different parameters you can set, which is where machine learning and AI come in. >>I love the machine learning and AI angle, and I'd love to get your thoughts, because I love the zero ops narrative. That's day one; with zero ops, now day two is being discussed, and people are also hyping up AIOps and other things. But there's this notion of day two: okay, I'm shipping stuff in the cloud, and I have to have zero ops on day two, three, four, et cetera. What's your take on that?
Because that seems to be a hot area that customers and enterprises are getting into: understanding the new wave, riding it, and then going, wait a minute, pushing new code just broke something over there that I built months ago. So there's this notion of day two being an obstacle. But again, if you want to be zero ops, it's got to be every day. >>Oh, I think you hit the nail on the head. I don't think there's going to be a difference between day one, day zero, and day two; I think every day is going to be day zero. And the reason for that is that people will be shipping all the time, so your application will change all the time. The application will always be fresh, so it will always be day zero, and zero ops has to be there all the time, not just on the first day. >>Great slogan: every day is day zero, which means it's going well and there are no problems. So I've got to ask you, one of the big things coming up is this idea of the SRE. It's not new to the DevOps world, but as enterprises start to get into an SRE role, with hybrid and now edge, and not just industrial edge, there's been a lot of activity on a distributed basis. So you're going to need this notion of large scale and zero ops, which essentially means automation, all those things you mentioned, >>and not everyone can >>afford that. Not every company can afford hardcore DevOps groups to manage their release process and all that. So how are you helping customers, and how do you see this problem being solved? Because this is the accelerant people want; they want the easy button, they want zero ops, but they just can't pipeline people fast enough to fill this role. >>Yeah. What you're describing is the central differentiator we bring to customers: this idea of an as-a-service experience with guaranteed outcomes. That's what makes us different versus the traditional enterprise infrastructure software model, where people just consume software from vendors, system-integrate it themselves, and are then in charge of operations and carry the technical risk themselves. We deliver everything as a service with guaranteed outcomes through a cloud native experience. That means guaranteed SLAs, predictable outcomes, continuous updates, continuous upgrades. Your on-prem infrastructure or your edge infrastructure is going to look, feel, and behave exactly like a public cloud experience, where you're not going to have to worry about SREs or maintaining the underlying stack; it's delivered to you as a service. That's a central part of what makes us different in this space. >>That's a great value proposition. Can you expand on that and give an example of a use case where you're doing this? I'm seeing a lot of people looking to go faster, and speed is good, but it can also kill, right? You can break things if you go too fast. >>Yeah, absolutely. I can give you several examples of where we're doing this with very exciting companies. One is Booking.com. Booking.com has a massive on-prem infrastructure, but they're also a massive public cloud consumer.
They decided they want to bring their own infrastructure to cloud levels of automation and sophistication; in other words, they want to have their own AWS on prem, and they want it turnkey. We're delivering this to them with very high SLAs, exactly as a service, where there is nothing for them to system-integrate or to tune, optimize, and operate. It's operated 24/7, with guaranteed SLAs and outcomes, by us, through a combination of software and expertise that we have at massive scale and to the standards of Booking.com. That's one example. Another example, also a very large company, is at the opposite side of the spectrum: a super successful software-as-a-service company in the security space, growing in leaps and bounds, with very high technical and security demands. They want to have on-prem and cloud infrastructure to complement the public clouds. Why? Because security is very important to them, latency is very important to them, control of the customer experience is very important to them, and cost is very important to them. For those reasons they want that infrastructure in a network of data centers around the globe, and we provide that for them, turnkey, as a service, 24/7, which enables them to focus 100% on building their own service, the functionality that matters to their customers, and not have to worry about the underlying cloud infrastructure in their data centers. All of that is provided to them with guaranteed outcomes and a guaranteed experience for their end users. So those would be examples of where we're providing a >>great service. People are looking for that; great job. Adrian, great to see you. Thank you for coming on theCUBE here at DockerCon 2021. Take a minute to put a plug in for the company. What are you guys up to? What are you looking for? Are you hiring? You've got great traction with customers, and congratulations on Lens. Give a quick update on what's going >>on. Happy to give an update on the company. Here are the highlights. We're super excited about what we achieved last year and what we're up to this year. Last year, what we're proud of is that despite COVID we didn't lay off a single person; we kept all the staff and we hired more. We gained 160 new customers, many of them some of the world's largest and best companies, and 300 of our existing customers expanded their business with us last year, which is fantastic. We also had a very strong financial year, fiscally cash-flow positive. It was a tremendous year for us. This year is very much a growth year for us, with an incredible focus on customer outcomes and customer experience. What we are really digging in hard on is giving customers the technology and the services that enable them to ship software faster and easier, and to dramatically increase the productivity of their development efforts on any cloud infrastructure, on prem and public clouds, using containers and Kubernetes, and to do that at scale. So we're extremely focused on customer outcomes and customer experience, and then on the innovation required to make that happen. You will continue to see a lot of innovation around Lens. The latest beta release of Lens now has a cloud service and a lot of features where you can share all your cloud automation with your buddies on your development team. Lens used to be a single-user product.
Now it's a multi-user, team-based product, which is fantastic, and it continues to grow very quickly. And then there's Container Cloud as a service, which is a very big bet we're making on the infrastructure side. >>You guys are quite the open source cloud company, Adrian. Congratulations. We've been following you through the many waves of innovation: OpenStack, large-scale open source software. Congratulations. >>Thank you. >>Thank you very much for coming on theCUBE. >>Yeah. >>Okay, DockerCon 2021 CUBE coverage. I'm John Furrier, here with Adrian Ionel, CEO, co-founder, and chairman of Mirantis, sharing his perspective on open source innovation and on key trends in the industry that are changing the game and accelerating cloud value, cloud scale, and cloud native applications. Thanks for watching.

Published Date: May 27, 2021


Kubernetes on Any Infrastructure: Top-to-Bottom Tutorials for Docker Enterprise Container Cloud


 

>>All right, we're five minutes after the hour, so all aboard, who's coming aboard? Welcome, everyone, to the tutorial track for our Launchpad event. For the next couple of hours we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks; I run curriculum development for Mirantis. And >>I'm Bruce Basil Matthews, the Western regional solutions architect for Mirantis, and welcome, everyone, to this lovely Launchpad event. >>We're lucky to have you with us, Bruce; at least somebody on the call knows something about Docker Enterprise Container Cloud. Speaking of people who know about Docker Enterprise Container Cloud, make sure you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess. Docker Enterprise Container Cloud is Mirantis' brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >>No, just that I think we're trying to give you a foundation against which to give this stuff a go yourself. That's really the key to this thing: to provide some mini training and education in a very condensed period. >>Yeah, that's exactly what you're going to see in the series of videos we have today. We're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today you're going to be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. Just a little bit of logistics for the session: we're going to run through these tutorials twice. We'll do one run-through, starting seven minutes ago, up until, I guess, ten fifteen Pacific time, and then we're going to run through the whole thing again. So if you've got other colleagues who weren't able to join right at the top of the hour and would like to jump in from the beginning, at ten fifteen Pacific time we're going to do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues you want to pull in for a second chance to see this stuff, we're going to do it all twice. Any logistics I should add, Bruce? >>No, I think that's pretty much what we had to nail down here. But let's zoom into those feature films. >>Let's do it. And like I said, don't be shy: feel free to ask questions in the chat. Our engineers, and Bruce and myself, are standing by to answer your questions. So let me just tee up the first video here and we'll walk through it. Here we go. Our first video is about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as your mothership: this is what you're going to use to deploy all those little child clusters that you'll use as, essentially, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there now. We'll give this a brief little pause
Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. 
ensure that you're using the correct AWS region, and ensure that in the config file you put the correct AMI for that region; you'll see that come together in a second. Okay, the access key and secret access key are set, so let's kick it off. This process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, we'll show you how you can track it, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and the local cluster is shut down, essentially moving itself over. The local cluster is up; we're just waiting for the various objects to get ready, standard Kubernetes objects here. We'll speed up this process a little bit just for demonstration purposes. There we go: the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances in the AWS console on the right. The failures you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. And there we go: the bastion host has been built, and three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. You can see the scaling up of controllers in the bootstrap cluster, indicating that we're starting all of the controllers in the new cluster. Almost there; just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight, the monitoring tool set, into the new cluster. There we go, the StackLight deployment has started. We're coming to the end of the deployment, the final phase, and we are done. You'll see at the end that they provide the details for the UI login; there's a Keycloak login, and you can modify the initial default password, which is part of the configuration as set out in the documentation. The console's up, and we can log in. Thank you very much for watching. >>Excellent. So in that video our wonderful field CTO, Sean O'Mara, bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? Now we've got this management cluster installed; what's next? >>Primarily, it's the foundation for being able to deploy the regional clusters that will then allow you to support child clusters. The next piece of what we're going to show, I think with Sean O'Mara doing this, is the child cluster capability, which allows you to deploy your application services on a local cluster that's being managed by the management cluster we just created with the bootstrap. >>Right, so this cluster isn't yet for workloads; this is just for bootstrapping the downstream clusters, those being what we'll use for workloads. >>Exactly, yeah.
And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting. But anyway, you can tell him I said that. >>He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrap node where you ran Docker Enterprise Container Cloud to begin with is actually creating a kind (Kubernetes-in-Docker) deployment locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes a three-manager Kubernetes cluster there, and then it copies itself over to those Kubernetes managers. >>Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says it's pivoting: it's pivoting from the local kind deployment of the Cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or on bare metal. The targeting is abstracted. >>And those are the three environments we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need it afterwards; it's just temporary, to get things bootstrapped, and then you manage things from the management cluster on AWS in this example? >>Yeah, the seed cloud post-bootstrap is not required anymore, and there's no interplay between them after that, so there are no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's a similar temporary, transient bootstrapping model. Cool. Excellent. What about config there? It looked like there wasn't a ton, right? It looked like you had to set up some AWS parameters like credentials and region and things like that, but other than that it looked heavily scriptable, like there wasn't a ton of point-and-click. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted towards a small, medium, or large deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, it makes it fairly easy to script this. >>And if I understood correctly as well, that three-manager footprint for your management cluster is the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to lose it. >>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer. >>I think that's a theme that will come back throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults, just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on.
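For readers following along on their own, here is a rough sketch of the shell steps that first demo walks through, since the emcees note the whole flow is heavily scriptable. The exact script names and sub-commands (the kaas-bootstrap folder, bootstrap.sh, the aws_policy target) are assumptions based on what is shown on screen, so treat them as illustrative and defer to the Mirantis documentation for your release.

```bash
# Sketch of the management cluster bootstrap shown in the demo.
# Script names and sub-commands are assumptions; check the product docs.

# 1. On a clean bootstrap node, fetch the KaaS bootstrap tooling from the
#    Mirantis downloads/releases site, then change into the folder it creates.
cd ~/kaas-bootstrap

# 2. Place the Mirantis license file obtained from the downloads portal.
cp ~/Downloads/mirantis.lic ./mirantis.lic

# 3. Export root AWS credentials so the tooling can create a dedicated bootstrap user.
export AWS_ACCESS_KEY_ID=<root-access-key-id>
export AWS_SECRET_ACCESS_KEY=<root-secret-access-key>

# 4. Create the IAM policies / bootstrap user via the provided CloudFormation step.
./bootstrap.sh aws_policy    # sub-command name as implied by the demo

# 5. Switch to the newly created bootstrap user's keys (downloaded from the IAM console).
export AWS_ACCESS_KEY_ID=<bootstrap-user-access-key-id>
export AWS_SECRET_ACCESS_KEY=<bootstrap-user-secret-access-key>

# 6. Double-check the region and matching Ubuntu AMI in templates/aws, then run it all.
./bootstrap.sh all           # 30-45 minutes: kind cluster -> AWS instances -> pivot -> StackLight
```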
Any interesting questions you want to call out from the chat, Bruce? >>Well, there was one that we had responded to earlier, about the fact that it's a management cluster that can then deploy either a regional cluster or a local child cluster. The child clusters, in each case, host the application services. >>Right. So at this point we've got, in some sense, the simplest architecture for Docker Enterprise Container Cloud: we've got the management cluster, and we're going to go straight to child clusters. In the next video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, for when you need to manage regions, for example across AWS regions, with Docker Enterprise Container Cloud. >>Yeah, that local support for the child clusters makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability and support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them. >>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves; I strongly encourage you to do so. That's all in the docs, which Dale has helpfully provided links for, and it's all publicly available right now. So head on into the docs at the links Dale provided; you can follow this example yourself. All you need is a Mirantis license and your AWS credentials. There was a question here about deploying this to Azure: not at GA, not at this time. >>Yeah, although that is coming. That's going to be in a very near-term release. >>I didn't want to make promises for product, but I'm not too surprised that it's going to be targeted. Very exciting. Cool. Okay, any other thoughts on this one, Bruce? >>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that the gentleman put into the chat, giving you the step-by-step, it makes it fairly straightforward to try this yourselves. >>I strongly encourage that; that's when you really start to internalize this stuff. Okay, but before we move on to the next video, let's just make sure everyone has a clear picture in their mind of where we are in the lifecycle here. Creating this management cluster is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment. What we're going to start seeing next is creating child clusters, and this is what you're going to be doing over and over again: when you need to create a cluster for this dev team, or that other team, whoever it is that needs commodity Docker Enterprise clusters, you create these easily and at will. So that was the one-time setup of Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're going to do over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster with Docker Enterprise Container Cloud. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster
when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects; Mary only has access to Development. You get a list of the available projects you have access to, what clusters have been deployed at the moment (there are none yet), the SSH keys associated with Mary and her team, the cloud credentials that allow you to access the various clouds you can deploy clusters to, and finally the different releases that are available to us. We can also switch from dark mode to light mode, depending on your preferences. Let's now set up SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste the public key into the upload key block, or upload the key if we have the file available on our local machine. A simple process. To create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Again, very simply: go to the Clusters tab, hit the Create Cluster button, and give the cluster a name. Select the provider; we only have access to AWS in this particular deployment, so we'll stick with AWS. Select the region, in this case US West. Release version 5.7 is the current release, and we attach Mary's SSH key as the necessary key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information; we can change this should we wish to, but we'll leave it at the defaults for now. Then, which StackLight components would I like to deploy into my cluster? Here I'm enabling StackLight and logging, and I can set up the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. Consider email alerting, for which I will need my SMTP host details and authentication details, and Slack alerts. Now I'm defining the cluster. At this point all that's happened is that the cluster has been defined; I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. Select manager, select the number of machines (three is the minimum), select the instance size I'd like to use from AWS, and, very importantly, use the correct AMI for the region. I can also set the root device size. There we go, my three machines are creating. I now need to add some workers to this cluster, so I go through the same process, this time selecting worker. I'll just add two. Once again, the AMI is extremely important; the build will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending"; the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. You can see here that we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary NAT gateway, and we have no warnings at this stage. This will then run for a while; we're one minute in and we can click through. We can check the status of the machine builds individually, so we can check the machine info and the details of the machines that we've assigned.
We can see any events pertaining to the machines, like this one, which are normal: the Kubernetes components are waiting for the machines to start. Going back to the clusters, we can see we're moving ahead now; the build is in progress, five minutes in, with the NAT gateway created at this stage. The machines have been built and assigned, and the IPs are picked up from AWS. There we go: a machine has been created, and we can see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process, end to end, takes about fifteen minutes. As we run the clock forward, you'll notice the machines continue to build. They'll go from "in progress" to "ready", and as soon as we get "ready" on all five machines, the three managers and both workers, we can go on, and we can see that we've now reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. Looking into the configured cluster, we can modify the cluster, and we can get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it: it's again the three little dots on the right for that particular cluster, hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. If we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use the SSO and give Mary's credentials once again. This is an unlicensed cluster; we could license it at this point, or just skip it. There we have the UCP dashboard; you can see it's been up for a little while and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has been automatically pre-configured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster, for example Kubernetes cluster information, namespaces, deployments, nodes. If we look at nodes, we can get a view of the resource utilization of this cluster; there's very little running in it. There's a general dashboard for the Kubernetes cluster, and all of this is configurable: you can modify these for your own needs or add your own dashboards, scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster and add a node: it's as simple as the process of adding a node to the cluster in the first place. We go to the cluster, go into the details for the cluster, and select Create Machine. Once again we need to ensure that we put in the correct AMI, plus any other options we like; you can create different-sized machines, so it could be a larger node, or bigger disks. You'll see that the worker has been added in the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster and select the node we'd like to remove.
Okay, I just hit delete on that node. Worker nodes will be removed from the cluster using a cordon and drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button becomes available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case the next available release is 5.7.1. Here I'm kicking off the update; in the background we will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case the update impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt, in fact two in this case, and one has completed already. In a few minutes we'll see that the upgrade has been completed. There we go, upgrade done. If your workloads are built using proper cloud native Kubernetes standards, there will be no impact. >>Excellent. So at this point we've now got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to that cluster. Watching that video, the thing that jumped out at me first was the inputs that go into defining this workload cluster. We have to make sure we're using an appropriate AMI, which kind of defines the substrate we're going to be deploying our cluster on top of, but there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is going to bootstrap all the components that you need. All we really have is a simple base box that we're deploying these things on top of. One thing that didn't get dug into too much in the video, but is sort of implied, and Bruce, maybe you can comment on this, is the release that Sean had to choose for his cluster when creating it. That release was also the thing we had to touch when we wanted to upgrade the cluster. If you had really sharp eyes, you could see at the end there that when you're doing the release upgrade it listed out a stack of components: Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters that we deploy. As far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that we've tested out and made sure works well in production environments. >>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that fixes are documented and upstreamed to the open source community, and that we then test for scalability and reliability in a high-availability configuration for the clusters themselves, the hosts of your containers. And I think one of the key benefits we provide is the ability to let you know, online, "hi, we've got an update for you, and it fixes something that maybe you had asked us to fix." That all comes to you online as you're managing your clusters, so you don't have to think about it; it just comes as part of the product.
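For context, the cordon and drain the demo refers to is the standard Kubernetes procedure, which Container Cloud runs for you. A minimal manual sketch of the same steps with plain kubectl looks roughly like the following; the node name and kubeconfig path are placeholders, not values from the demo.

```bash
# What the product automates when you remove or update a node, sketched by hand
# with standard kubectl. Node name and kubeconfig path are illustrative only.
export KUBECONFIG=./kubeconfig          # the file downloaded from the Container Cloud UI

# Stop new pods from being scheduled on the node
kubectl cordon ip-10-0-1-23.us-west-1.compute.internal

# Evict running pods gracefully so workloads reschedule elsewhere
kubectl drain ip-10-0-1-23.us-west-1.compute.internal \
  --ignore-daemonsets --delete-emptydir-data --timeout=300s

# After maintenance (for an update rather than a removal), allow scheduling again
kubectl uncordon ip-10-0-1-23.us-west-1.compute.internal
```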
>>You just have to click "yes, please give me that update." And it's not just the individual components; again, it's that validated stack, right? Not just that components X, Y, and Z work, but that they all work together effectively, scalably, securely, reliably. Cool. So at that point, once we started creating that workload child cluster, of course we bootstrapped good old Universal Control Plane, Docker Enterprise, on top of it. Sean had the classic comment there: yeah, you'll see a few warnings and errors or whatever when you're setting up UCP; don't panic, just let it do its job and it will converge all its components after just a minute or two. We saw in that video that we sped things up a little bit, just so we didn't have to wait for progress bars to complete, but really, in real life, that whole process is, if anything, quite quick to spin up one of those clusters. >>Yeah, and I think the thoroughness with which it goes through its process and retries, as was evident in the initial bootstrapping video as well, means the processes themselves are self-healing as they go. They will try and retry and wait for the event to complete properly, and once it's completed properly, then move on to the next step. >>Absolutely. The worst thing you could do is panic at the first warning and start tearing things down; don't do that. Just let it heal, let it take care of itself. That's the beauty of these managed solutions: they bake in a lot of subject matter expertise. The decisions being made by those containers as they bootstrap themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools around Kubernetes for years and years now. One cool thing there that I really appreciate, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment. Docker Enterprise, as I think everyone knows, has had some very high-level statistics baked into its dashboard for years now, but our customers always wanted to double-click on that, to go a little bit deeper, and Grafana really addresses that with its built-in dashboards. That's really nice to see. >>Yeah, and all of the alerts and data are actually captured in an underlying Prometheus database that you have access to, so you can add new alerts that then go out to, say, Slack and say "hi, you need to watch your disk space on this machine," those kinds of things. This is especially helpful for folks who want to manage the application service layer but don't necessarily want to manage the operations side of the house. It gives them a tool set where they can easily say, "here, can you watch these for us?", and Mirantis can actually help do that with you. >>Yeah, that's just another example of baking in that expert knowledge, so you can leverage it without a long runway of learning how to do that sort of thing; you get it out of the box right away. There was one other thing, actually, that could slip by really quickly if you weren't paying close attention, but Sean mentioned it in the video: when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just tear the worker down and forget about it. Right?
It's using good Kubernetes best practices to cordon and drain the node, so you aren't going to disrupt your workloads; you're not going to just have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; that's baked right into how Docker Enterprise Container Cloud handles cluster scaling. >>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques to ensure that it will tell you: wait, you've got a container that actually needs three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two, and we can't allow that. >>Okay, very cool. Further thoughts on this video, or should we go to the questions? >>Let's go to the questions >>that people have. There's one good one here, down near the bottom, regarding whether an API is available to do this. In all these demos we were clicking through the web UI; yes, this is all API-driven. You could do all of this and automate it all away as part of your CI/CD chain. Absolutely; that's kind of the point, right? We want you to be able to spin up, well, I keep calling them commodity clusters, and what I mean by that is clusters you can create and throw away easily and automatically. So everything you see in these demos is exposed through the API. >>Yeah, and in addition through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to set things up and deploy your applications, you can use the standard tool sets that are available to accomplish that. >>There's a good question on scale here: just how many clusters, and what sort of scale of deployments, can this support? Our engineers report that in practice we've done as many as two hundred clusters, and we've deployed this with two hundred fifty nodes in a cluster. So, like I said, hundreds of nodes and hundreds of clusters managed by Docker Enterprise Container Cloud, and those downstream clusters are of course subject to the usual constraints for Kubernetes, like the default constraint of roughly one hundred pods per node. There are a few limitations on how many pods you can run on a given cluster that come not from Docker Enterprise Container Cloud but from the underlying Kubernetes distribution. >>Yeah, I don't think we constrain any of the capabilities that are available in the infrastructure-as-a-service delivery within the Kubernetes framework, but we are adhering to the standards we would want to set, to make sure we're not overloading a node, those kinds of things. >>Right. Absolutely. Cool. All right, so at this point we've got kind of a two-layered architecture: the management cluster that we deployed in the first video, which we then used to deploy one child cluster for workloads. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. The idea is that you're going to have the single management cluster that we started with in the first video.
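Picking up on the point above that everything in the demos is exposed through the API and the standard kubectl CLI: once you have a child cluster's kubeconfig (downloaded from the UI as shown earlier), the downstream cluster is plain Kubernetes, so ordinary kubectl scripting applies. A minimal sketch, with file names and the workload purely illustrative:

```bash
# Driving a freshly created child cluster with standard kubectl, using the
# kubeconfig downloaded from the Container Cloud UI. Paths and names are placeholders.
export KUBECONFIG=./kubeconfig-child-demo.yaml

# Confirm the managers and workers the UI created are Ready
kubectl get nodes -o wide

# Look at everything the release stack deployed (UCP, StackLight, and so on)
kubectl get pods --all-namespaces

# From here, ordinary CI/CD automation applies: ship a workload like on any other cluster
kubectl create deployment hello --image=nginx:1.21
kubectl scale deployment hello --replicas=3
kubectl get deployment hello
```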
On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. 
Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. 
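A quick note for anyone following along with the regional bootstrap shown in that video: the shell session amounts to exporting a handful of variables and re-running the bootstrap tooling against the existing management cluster. The directory, variable names, and entry point below are approximated from the narration and may differ by release, so check the product documentation:

  # Run on the same seed/bootstrap node used for the original management cluster
  cd kaas-bootstrap                                  # assumed directory name from the download
  export KUBECONFIG=$PWD/kubeconfig                  # kubeconfig of the existing management cluster
  export AWS_ACCESS_KEY_ID=REPLACE_WITH_BOOTSTRAP_KEY
  export AWS_SECRET_ACCESS_KEY=REPLACE_WITH_BOOTSTRAP_SECRET
  export KAAS_AWS_ENABLED=true                       # name approximated from the narration
  export REGION=eu-central-1                         # Frankfurt, as in the demo
  export REGIONAL_CLUSTER_NAME=frankfurt             # short and descriptive
  ./bootstrap.sh deploy_regional                     # entry point approximated; see the docs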
We could get
>>it locally. Yeah. And to that point of aggregating things under a single pane of glass: that's one thing that, again, kind of sailed by in the demo really quickly, but you'll notice all your different clusters were on that same screen, your pane in your Docker Enterprise Container Cloud management console. Right, so both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place. It's just one pane of glass to go look at for all of your clusters. >>Right. And this is kind of an important point I was realizing as we were going through this: all of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything, so you only have managers, you don't have workers; and at the child cluster layer, below the regional or the management cluster itself, that's where you have the worker nodes, and those are the ones that host the application services in that three-tiered architecture we've now defined. >>And another detail for those with sharp eyes: in that video, you'll notice that when deploying a child cluster there's not only a minimum of three managers for a high-availability cluster, you must also have at least two workers. That's just required for workload failover: if one of those goes down or is taken out of service, the other can potentially step in. So your minimum footprint for one of these child clusters is five nodes, highly available and scalable, obviously, from there. >>That's right. >>Let's take a quick peek at the questions here, see if there's anything we want to call out, and then we'll move on to our last video. There's another question here about where these clusters can live. Again, I know these examples are very AWS heavy; honestly, it's just easy to set up demos on AWS. We can do things on bare metal and OpenStack deployments on prem, and all of this still works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the provisioners, right? You establish an AWS provisioner, or you establish a bare metal provisioner, or you establish an OpenStack provisioner, and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where the child cluster is to be hosted. >>Speaking of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with a performance boost of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things and ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme of why Kubernetes and bare metal: again, hypervisor overhead. Well, no virtualization overhead.
Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. 
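To give a flavor of what registering one of those hosts looks like outside the web form, the bare-metal provider works with host objects carrying the IPMI address, credentials, boot MAC, and a machine-type label. The demo doesn't show Container Cloud's exact resource schema, so the sketch below borrows the shape of a Metal3-style BareMetalHost and should be read as illustrative only; the names, namespace, label, and addresses are made up:

  # Illustrative only: save as bm-worker-01.yaml and apply with  kubectl apply -f bm-worker-01.yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: bm-worker-01-bmc
    namespace: demo-project
  type: Opaque
  stringData:
    username: admin
    password: changeme-ipmi-password
  ---
  apiVersion: metal3.io/v1alpha1
  kind: BareMetalHost
  metadata:
    name: bm-worker-01
    namespace: demo-project
    labels:
      hostlabel: worker            # machine-type label picked up later when composing the cluster
  spec:
    online: true
    bootMACAddress: 0c:c4:7a:aa:bb:cc
    bmc:
      address: ipmi://10.0.0.21    # the IPMI address entered in the demo's host form
      credentialsName: bm-worker-01-bmc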
You can see the process is identical to that of building a normal child cluster, and our deployment is complete. >>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess the biggest difference, step-wise, is that registration phase first, right? Rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles: you've got your manager profile, your worker profile, and your storage profile, which get labeled and allocated across the cluster as appropriate. >>Right. And I think the key differentiator here is that you have more physical control over the attributes (that was the cat, by the way) of a physical server. So you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, that the GPUs are on the worker nodes, and that the management layer is going to have sufficient horsepower to spin up and scale up the environments as required. One of the things I wanted to mention, though, if I can get this out without choking: he mentioned the load balancer, and I wanted to make clear that, in defining the load balancer and the load balancer ranges, that is for the top of the cluster itself. That's for the operations of the management layer, integrating with your systems internally so you can access the kubeconfigs and the API endpoint IP addresses in a centralized way. It's not the load balancer that's working within the Kubernetes cluster you are deploying; that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting initial step in building this, and we typically use things like MetalLB or NGINX or that kind of thing to establish that before we deploy this bare metal cluster, so that it can ride on top of it for the VIPs and such. >>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management and regional clusters to our clusters on AWS and bare metal, and of course OpenStack is also available. Closing thoughts before we take just a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you. I'm really looking forward to doing it the second time, because we've got a good rhythm going about this kind of thing. But I think the key element of what we're trying to convey to the folks out there in the audience, and what I hope you've gotten out of it, is that this is an easy enough process that if you follow the step-by-step guides in the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancy use cases, say you need Hadoop or cloud-oriented AI work, then providing a bare metal service helps you get there very fast. So, thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out.
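A footnote on the load balancer point Bruce raised above, before the break: MetalLB is one common way to provide that management-endpoint VIP range on bare metal. The sketch below uses recent MetalLB resources in layer-2 mode; the namespace is MetalLB's usual default, and the address range is illustrative rather than anything from the product docs:

  # Illustrative only: save as mgmt-vip-pool.yaml and apply with  kubectl apply -f mgmt-vip-pool.yaml
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: mgmt-endpoint-pool
    namespace: metallb-system
  spec:
    addresses:
    - 10.0.50.240-10.0.50.250      # range set aside for management/VIP addresses
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: mgmt-endpoint-l2
    namespace: metallb-system
  spec:
    ipAddressPools:
    - mgmt-endpoint-pool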
So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. 
Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. 
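For anyone mapping that narration onto a terminal session, the AWS credential portion looks roughly like the sketch below. The policy and bootstrap-user creation is done by the downloaded tooling, and its exact command name varies by release, so that line is a placeholder; the exported variables are the standard AWS ones:

  # 1. Export an admin-level AWS credential so the tooling can create the
  #    dedicated bootstrap IAM user and policy (used once, then set aside)
  export AWS_ACCESS_KEY_ID=REPLACE_WITH_ADMIN_KEY
  export AWS_SECRET_ACCESS_KEY=REPLACE_WITH_ADMIN_SECRET
  export AWS_DEFAULT_REGION=us-west-2                # whichever region you intend to deploy into

  # 2. Run the vendor-supplied step that creates the bootstrap user and policies
  #    via CloudFormation (placeholder command name; see the bootstrap folder's docs)
  ./bootstrap.sh aws_policy_setup

  # 3. Switch to the newly created bootstrap user's keys for the actual deployment
  export AWS_ACCESS_KEY_ID=REPLACE_WITH_BOOTSTRAP_USER_KEY
  export AWS_SECRET_ACCESS_KEY=REPLACE_WITH_BOOTSTRAP_USER_SECRET

  # 4. Kick off the management cluster bootstrap (roughly 30 to 45 minutes end to end)
  ./bootstrap.sh all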
So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. 
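If you do want to override those defaults before bootstrapping, it is an ordinary edit of the templates shipped in the bootstrap folder. The directory, file name, and keys below are approximated from the demo narration rather than taken from the docs, and the AMI ID is a placeholder:

  cd kaas-bootstrap/templates/aws                    # path approximated from the narration
  grep -n "ami\|instanceType\|rootDeviceSize" machines.yaml.template
  # Point the AMI at an Ubuntu image that exists in your chosen region; the ID below is a placeholder
  sed -i 's/ami-[0-9a-f]\{8,17\}/ami-0123456789abcdef0/' machines.yaml.template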
>>One thing that people often ask about is the cluster footprint. And so that example you saw they were spitting up a three manager, um, managing cluster as mandatory, right? No single manager set up at all. We want high availability for doctrine Enterprise Container Cloud management. Like so again, just to make sure everyone sort of on board with the life cycle stage that we're at right now. That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Man. Yes, this H keys, Associate ID for Mary into her team on the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some ssh keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name. We copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, we got the clusters tab we had to create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one released version five point seven is the current release Onda Attach. Mary's Key is necessary key. 
We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. 
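Once that kubeconfig is downloaded (the narration walks through the download next), using it is plain kubectl. A small sketch, with an illustrative file path and a throwaway workload:

  export KUBECONFIG=$HOME/Downloads/kubeconfig-demo-cluster.yaml   # wherever the UI download landed
  kubectl get nodes            # the three managers and two workers should report Ready
  kubectl create deployment hello --image=nginx --replicas=2
  kubectl rollout status deployment/hello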
It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. 
Just leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches; they resolve themselves and leave you with a functioning workload cluster within minutes. >>And if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development team A over here? It would literally take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud, but you could do exactly the same thing with resources on prem, physical resources, and we'll be going through that later in the process. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. When Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. The video didn't really explain what that means. Well, in Docker Enterprise Container Cloud we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine: all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes-adopting enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale, test it for CVEs, test it for all of the various things that would result in issues with running your application services. And I've got to tell you, from having managed Kubernetes deployments and things like that, if you're the one doing it yourself, it can get rather messy. So this makes it easy. >>Bruce, to what you were saying a second ago: someone might say it'll only take them fifteen minutes to install a Kubernetes cluster themselves. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's about making the right decisions on which components work well, or are best tested to be successful, working together as a stack. Absolutely. With this release mechanism in Docker Enterprise Container Cloud, we just kind of package up that expert knowledge and make it available in a really straightforward fashion as these pre-configured release numbers. And, Bruce, as you were pointing out earlier, it's delivered to us as updates in a pretty transparent way: when Sean wanted to update that cluster, a little update cluster button appeared when an update was available. All you've got to do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a little header at the top of the screen that says you've got an update available, do you want me to apply it? >>Absolutely. Another couple of cool things that I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they're fairly high level in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think, is a great value there.
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us. >>Yeah, that was really the joining of the Mirantis and Docker teams together; it let us take the best of what Mirantis had in the OpenStack environment for monitoring, logging, and alerting and do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >>One other thing I want to point out about that demo, because I think there were some questions about it our last go-around: that demo was all about creating a managed workload cluster. So Docker Enterprise Container Cloud's managers were using those AWS credentials we provisioned to actually create new EC2 instances, install Docker Engine, install Docker Enterprise, all that stuff, on top of those fresh new VMs, created and managed by Docker Enterprise Container Cloud. Nothing unique about AWS there; deployments do that on OpenStack and on bare metal as well. There's another flavor here, though, and a way to do this for all of our long-time Docker Enterprise customers who have been running Docker Enterprise for years and years: if you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use Docker Enterprise Container Cloud to manage those pre-existing working clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud; you can plug in external clusters as well. >>Yep, the kubeconfig elements of the UCP environment, the client bundle capability, actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. So it makes it very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. More thoughts on this, or should we jump on to the next video? >>I think we should press on. >>Time marches on here, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster. That's what we're going to use to create all our downstream workload clusters, which is what we did in this video. That's maybe the simplest architecture, because it's doing everything in one region on AWS, which is a pretty common use case, but we want to be able to spin up workload clusters across many regions. And to do that, we're going to add a third layer in between the management and workload cluster layers: that's going to be our regional cluster managers. This is going to be a regional management cluster that exists per region, and those regional managers will be the ones responsible for spinning up child clusters across all these different regions. Let's see it in action in our next video.
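On the import path Bruce mentions, the starting point on the existing cluster's side is the standard UCP client bundle, which packages the certificates, an environment script, and a kubeconfig. The actual attach steps into Container Cloud are in the product documentation; this is only a rough sketch of unpacking a bundle and confirming access, with an illustrative file name:

  # Download an admin client bundle from the existing UCP web UI, then:
  unzip ucp-bundle-admin.zip -d ucp-bundle
  cd ucp-bundle
  eval "$(<env.sh)"            # sets DOCKER_HOST, DOCKER_CERT_PATH and KUBECONFIG for that cluster
  kubectl get nodes            # confirm the existing UCP Kubernetes endpoint is reachable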
ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. 
Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. 
>>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? 
Yeah, they were having workers to the same. It's a process. Just making sure that the worker label host like you are so yes, on Duin wait for the machines to deploy. Let's go through the process of putting the operating system on the notes, validating that operating system. Deploying Docker enterprise on making sure that the cluster is up and running ready to go. Okay, let's review the bold events. We can see the machine info now populated with more information about the specifics of things like storage. Yeah, of course. Details of a cluster, etcetera. Yeah, Yeah. Okay. Well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build, and that brings us to the end of this particular do my as you can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>Here we have a child cluster on bare metal for folks that wanted to play the stuff on Prem. >>It's ah been an interesting journey taken from the mothership as we started out building ah management cluster and then populating it with a child cluster and then finally creating a regional cluster to spread the geographically the management of our clusters and finally to provide a platform for supporting, you know, ai needs and and big Data needs, uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal thio in containers were pretty exciting. >>Yeah, absolutely. So with this Doctor Enterprise container cloud platform. Hopefully this commoditized scooping clusters, doctor enterprise clusters that could be spun up and use quickly taking provisioning times. You know, from however many months to get new clusters spun up for our teams. Two minutes, right. We saw those clusters gets better. Just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Dr Enterprise Container Cloud. Of course, there's many many more things to discuss about this and all of Miranda's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, police see us a training don Miranda's dot com, where we can offer you workshops and a number of different formats on our entire line of products and hands on interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad of that >>thank you all enjoy.

Published Date : Sep 17 2020



Reliance Jio: OpenStack for Mobile Telecom Services


 

>>Hi, everyone. My name is Mayank Kapoor; I work with Reliance Jio in India. We call ourselves Jio Platforms now, and we've been in the news recently: we've raised a lot of funding from some of the largest tech companies in the world. I'm here to talk about Jio's cloud journey and the Mirantis partnership. I've titled it the story of an underdog becoming the largest telecom company in India within four years, which is really special, and we were of course helped by the cloud. A quick disclaimer: the content shared here is only for informational purposes and only for this event, and if you want to share it outside, especially on social media platforms, we need permission from Jio Platforms Limited. A quick intro about myself: I am a VP of Engineering at Jio, I lead the Cloud Services and Platforms team within Jio, and I've been at Jio since the beginning, since it started, and I've seen our cloud footprint grow from a handful of bare metals to now eight large application data centers across three regions in India. We'll talk about how we got here. All right, let me give you an introduction to Jio, what Jio is and how we became the largest telecom company in India within four years, going from 0 to 400 million subscribers. I think there are a lot of events that defined Jio, and they will give you an understanding of how Jio thinks and what we did to overcome massive problems in India. So the slide I want to talk to is this one, and the headline I've given is that Jio is the fastest growing tech company in the world, which is not an overstatement; it's actually quite literally true, because very few companies in the world have grown from zero to 400 million paying subscribers within four years. I consider Jio's growth in three phases, which I have shown on top. The first phase we'll talk about is how Jio grew in the smartphone market in India and what we did to really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase and how Jio grew in the feature phone market in India, and then we'll talk about what we're doing now, which we call the Jio Platforms phase. So Jio is a default 4G LTE network; there are no 2G or 3G networks at Jio. It's a state-of-the-art 4G LTE, voice-over-LTE network, and because it was designed fresh, without any 2G and 3G legacy technologies, there were also a lot of challenges for Jio when we were starting up. One of the main challenges was that the smartphones being sold in India when Jio was launching, in 2016, did not have the voice-over-LTE chipset embedded, because that chipset was far costlier to embed in smartphones and India is a very price-sensitive market. So none of the manufacturers were embedding the 4G VoLTE chipset in their smartphones. But Jio is an all-VoLTE network, so we faced a massive problem: there were no smartphones that could support Jio, so how would we grow Jio? In order to solve that problem, we launched our own brand of smartphones, called the LYF smartphones, and those phones were really high-value devices. They were $50, and for $50 you got, at that time, 4 GB of storage space, a nice big four-inch display, dual cameras, and most importantly, they had VoLTE chipsets embedded in them.
Right, and that got us our initial customers, the initial launch customers, when we launched. But more importantly, what that forced the OEMs to do is that they also had to launch similar, competing smartphones with VoLTE chipsets embedded in the same price range. So within a few months, 3 to 4 months, all the other OEMs, all the other smartphone manufacturers, the Samsungs, the Micromaxes (Micromax is in India), they all had VoLTE smartphones out in the market. And I think that was one key step we took: launching our own brand of smartphone, LYF, helped us overcome the problem that no smartphone had VoLTE chipsets in India. Then, when we were launching, there were about 13 telecom companies in India; it was a very crowded space, and in order to gain a foothold in that market we made a few decisions, a few key product announcements, that really disrupted this entire industry. Jio is a default 4G LTE network, an all-IP network, Internet Protocol in everything, all data. It's an all-data network, and everything from voice to data to Internet traffic goes over Internet Protocol, and the cost to carry voice on our network is very low; the bandwidth voice consumes is very low in the entire LTE band. So what we did, in order to gain a foothold in the market, was make voice completely free. We said you will not pay anything for voice, and across India we will not charge any roaming charges. So we made voice completely free, and we offered the lowest data rates in the world. We could do that because we had the largest capacity to carry data in India of all the other telecom operators, and these data rates were unheard of in the world. When we launched, we offered a $2-per-month or $3-per-month plan with unlimited data; you could consume 10 gigabytes of data a day if you wanted to, and some of our subscribers did. So that's the first phase of our growth, in smartphones, and that really disrupted the market: we hit 100 million subscribers in 170 days, which was very, very fast. And then, after the smartphone phase, we found that India still had 500 million feature phones, and in order to grow in that market we launched our own phone, the JioPhone, and we made it free. If you took a Jio subscription and stayed with us for three years, we would make this phone free for you: we refund the initial deposit that you paid for the phone. This phone also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon, and, for example, you could plug a cable, an RCA or HDMI cable, into the JioPhone and watch TV on your big-screen TV from the JioPhone; you didn't need a separate cable subscription to watch TV. So that really helped us grow, and the JioPhone is now the largest-selling feature phone in India; there are 100 million of these feature phones in India now. So now we're in what I call the Jio Platforms phase. We're growing with Jio Fiber, fiber to the home and fiber to the office, we've also launched our new commerce initiatives, our e-commerce initiatives, and we're steadily building platforms that other companies can leverage, that other companies can use, in the Jio cloud.
So this is how a startup, not a small startup but a startup nonetheless, reached 400 million subscribers within four years, the fastest growing tech company in the world. Next, Jio also helped a systemic change in India, and this is massive. A lot of startups are building on this India Stack, as people call it, and I consider the India Stack to be made up of three things; the acronym I use is the JAM trinity. In India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians: there were no service charges to store money in bank accounts. These are the Jan Dhan bank accounts, the J of the JAM. Then, India is one of the few countries in the world to have a digital biometric identity, which can be used to verify anyone online, which is huge. So I can simply go online and say, I am Mayank Kapoor, and verify that it is indeed me who is doing this transaction. This is the A in the JAM, Aadhaar. And the last M stands for mobiles, which were helped by Jio mobile Internet. In addition, there is also something called UPI, the Unified Payments Interface. This was launched by the Indian government, and with it you can carry out digital transactions for free; you can transfer money from one person to another essentially for free, for no fee, so I can transfer even one Indian rupee to my friend without paying any charges. That is huge. So you now have a country with a billion people who have bank accounts, with money in the bank, who you can verify online, and who can pay online without any problems through their mobile connections, helped by Jio. Suddenly our market, our Internet market, exploded from a few million users to now 500 to 600 million mobile Internet users. That, I think, was a massive systemic change that happened in India. There are some really large numbers for this India Stack: there were 1.6 billion UPI transactions in the last month alone, which is phenomenal. Next, what is the impact of Jio in India? Before Jio started, we were 155th in the world in terms of mobile broadband data consumption. But after Jio, India went from 155th to first in the world in terms of broadband data, largely consumed on mobile devices. We're a mobile-first country: we have a habit of skipping technology generations, so we skipped fixed-line broadband and basically consume the Internet on our mobile phones. On average, Jio subscribers consume 12 gigabytes of data per month, which is one of the highest rates in the world. So Jio has had a huge role to play in making India the number one country in terms of broadband data consumption, and Jio is responsible for quite a few industry firsts in the telecom space, and in fact in India overall, I would say. Before Jio, to get a SIM card you had to fill out a physical paper form. It went to a local distributor, and that local distributor had to check that you had filled in the form correctly for your SIM card; then it went to the head office, and everything took about 48 hours or so to get your SIM card, and sometimes there were problems there too, with the hard biometric authentication. We enabled, India enabled, something called eKYC, electronic Know Your Customer.
We took a fingerprint scan at our point of sale, at Reliance Digital stores, and within a few minutes, within a few seconds really, we could verify that the person is indeed who they claim to be, buying that SIM card, electronically, and we activated the SIM card in 15 minutes. That was a massive deal for our growth initially, to onboard 100 million customers within 170 days; we couldn't have done it without eKYC. That was a massive deal for us, and it is huge for any company starting a business or a startup in India. We also made voice free, with no roaming charges and the lowest data rates in the world. Plus, we gave a full suite of cloud services for free to all Jio customers. For example, we give JioTV essentially for free, which people, when we were launching, told us no one would use, because Indians like watching TV in the living room, with the family, on a big-screen television. But when we actually launched, we found that JioTV is one of our most used apps, with 70 to 80 million monthly active users, and now we've basically been changing culture in India, where culture is on demand: you can watch TV on the go, you can pause it, and you can resume whenever you have some free time. So we've really changed culture in India, and we help people live their digital life online. That was massive. I'd now like to talk about our cloud journey and our Mirantis partnership. We've been partners since 2014, since the beginning. Jio has been using OpenStack since 2014, when we started with a 14-node cluster, our first production environment, cloud number one. That is what I call the first wave of our cloud, where we were just understanding OpenStack, understanding the capabilities, understanding what it could do. Now we're in our second wave, where we have about 4,000 bare metal servers in our OpenStack cloud, across multiple regions, with around 100,000 CPU cores. It is one of the bigger clouds in the world, I would say, almost all teams within Jio are leveraging the cloud, and soon I think we're going to hit about 10,000 bare metals in our cloud, which is massive. Just to give you a sense of the scale of our network, our infrastructure, our data center footprint: our network infrastructure is about 30 network data centers that carry just network traffic across India, and we have about eight application data centers across three regions. A data center is like a five-story building filled with servers, so we're talking really significant scale in India. And we had to do this because, when we were launching, government regulation from TRAI, the Telecom Regulatory Authority of India, mandated that any telecom company has to store customer data inside India, and none of the other cloud providers were big enough to host our cloud. So we built all this infrastructure ourselves, and we're still growing. Next, I'd love to show you how we've grown together with Mirantis. We started in 2014 with the Fuel deployment pipelines, and then we went on to the MK deployment pipelines as our cloud started growing. We kept learning about the clouds, and we picked up MCP, which has really been a game changer for us in automation, and now we are on the latest release, MCP 2019.2, on OpenStack Queens; we've just upgraded all of our clouds over the last couple of months, 2 to 3 months.
So we've done about nine production clouds, and there are about 50 internal teams consuming the cloud, which we call our tenants. We have OpenStack clouds, and we have Kubernetes clusters running on top of OpenStack. There are several production-grade workloads that run on this cloud. The JioPhone, for example, runs on our private cloud; JioCloud, which is a backup and collaboration service like Google Drive, runs out of this cloud; Jio GST, which is a tax filing system for small and medium enterprises; and our retail POS service. All these production services run on our private clouds. We're also empaneled with the Government of India to provide cloud services to the government, to any state department that needs cloud services; we were empaneled by MeitY in their e-governance initiative. Our clouds are also ISO 20000-1 certified for software processes and ISO 27001 and ISO 27017/18 certified for security processes, and our data centers are TIA-942 certified. So significant effort and investment have gone into these data centers. Next, this is where I think we've really valued the partnership with Mirantis. Mirantis has trained us on using the concepts of GitOps and infrastructure as code, automated deployments, and the toolchain that comes with the MCP product. One of the key things that has changed from a couple of years ago to today is that the time to deploy a new 100-node production cloud has decreased for us from about 55 days in 2015 to about five days now, once the bare metals are racked and stacked and the physical network is configured. After that, our automated pipelines can deploy a 100-node cloud in five days flat, which is a massive deal for a company that is adding bare metals to its infrastructure this fast; it helps us utilize our investment, our assets, really well. The time it takes to deploy a cloud control plane for us is about 19 hours; it takes us two hours to deploy a compute rack and three hours to deploy a storage rack. And we really leverage the reclass model of MCP. We've configured the reclass model to suit almost every type of cloud that we have, and we've kept it fairly generic; it can be tailored to deploy any type of cloud, any type of storage node, any type of compute node. It helps us automate our deployments by putting every configuration, everything that we have, into Git, using infrastructure as code. Plus, MCP also comes with pipelines that help us run automated tests, automated validation pipelines, on our clouds. We also have Tempest pipelines running every few hours, every three hours if I recall correctly, which run integration tests on our clouds to make sure the clouds are running properly; that is also automated. The reclass model and the pipelines help us automate day-two operations and changes as well. There are very few severe incidents now compared to a few years ago; they're very rare, actually the exception, and when they do happen it is usually because of some user error as opposed to a cloud problem. We have also contributed auto-healing with Prometheus and Alertmanager, and we integrate Prometheus and Alertmanager with our event-driven automation framework.
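To make that auto-healing loop concrete, here is a minimal sketch of the pattern (not Jio's actual code; the alert names and remediation actions are hypothetical): Alertmanager is pointed at a small webhook receiver, and firing alerts that match a known signature trigger a remediation action instead of paging an operator first.

```python
# Minimal sketch of event-driven auto-healing: Prometheus Alertmanager POSTs
# firing alerts to this webhook, and known alerts are mapped to remediations.
# Alert names and remediation actions below are illustrative placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REMEDIATIONS = {
    "NovaComputeServiceDown": "restart_nova_compute",
    "DiskAlmostFull": "cleanup_old_logs",
}

def remediate(action, labels):
    # Placeholder: in practice this would trigger StackStorm, Salt, Ansible, etc.
    print(f"running remediation '{action}' on {labels.get('host', 'unknown host')}")

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Alertmanager webhook payloads carry a list of alerts with status and labels.
        for alert in payload.get("alerts", []):
            name = alert.get("labels", {}).get("alertname", "")
            action = REMEDIATIONS.get(name)
            if alert.get("status") == "firing" and action:
                remediate(action, alert.get("labels", {}))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9099), AlertHandler).serve_forever()
```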
Currently we're using StackStorm, but you could use any event-driven automation framework out there, as long as it integrates well. It helps us step away from constantly monitoring our cloud control planes and clouds. So this has been very fruitful for us, and it has actually upskilled our engineers to use these best-in-class practices like GitOps and infrastructure as code. Just to give you a flavor of what stacks our internal teams are running on these clouds: we have a multi-data-center OpenStack cloud, and on top of that, teams use automation tools like Terraform to create their environments. They also create their own Kubernetes clusters, and you'll see in the next slide that we have our own Kubernetes-as-a-service platform that we built on top of OpenStack to give development teams in Jio easy-to-create and easy-to-destroy Kubernetes environments, and they sometimes leverage the Murano application catalog, using Heat templates, to deploy their own stacks. Jio is largely a microservices-driven company, so all of our applications are microservices, multiple microservices talking to each other, and they leverage DevOps toolsets like Ansible, Prometheus and StackStorm for auto-healing and event-driven automation. Big data stacks are already there: Kafka, Apache Spark, Cassandra and other tools as well. We're also now using service meshes; almost everything now uses a service mesh, sometimes Linkerd, and sometimes we're experimenting with Istio. So this is where we are, and we have multiple clients with Jio: our products and services are available on Android, iOS, our own JioPhone, Windows, Macs, the web, and the mobile web. You can use our services from any client and there's no lock-in; it's always open with Jio, so our services have to be really good to compete on the open Internet. And last but not least, I'd love to talk to you about our container journey. A couple of years ago, almost every team started experimenting with containers and Kubernetes, and there was demand on us as a platform team: they were demanding Kubernetes as a service from us, a managed service. For us, it was much more comfortable, much easier, to build this on top of OpenStack with cloud APIs, as opposed to doing it on bare metal. So we built a fully managed Kubernetes as a service, with a self-service portal where you could click a button and get a Kubernetes cluster deployed in your own tenant. The things that we did are quite interesting, and we also handle some Jio-specific use cases. Because it was a managed service, we deployed the etcd nodes in our own management tenant, and we didn't give the customer access to the etcd nodes. We deployed the master control plane nodes in the tenant's, our customer's, tenant, but we didn't give them access to the masters; we didn't give them the SSH key. The workers, though, our customers had full access to. And because people in Jio were learning and experimenting, we gave them full admin rights inside their Kubernetes clusters as well. That really helped onboard Kubernetes within Jio, and now we have about 15 different teams running multiple Kubernetes clusters on top of our OpenStack clouds. We even handle the fact that there are separate non-production IP pools and separate production IP pools in Jio.
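As a rough illustration of that model (the field names below are hypothetical, not the portal's actual API), a self-service cluster request and the placement and access rules it triggers might look roughly like this:

```python
# Hypothetical sketch of a self-service cluster request and the tenancy and
# access rules described above. Field names and values are illustrative only.
cluster_request = {
    "team": "payments-platform",
    "cluster_name": "payments-dev-01",
    "environment": "non-prod",     # "non-prod" (open access) or "prod" (restricted)
    "workers": 5,
}

def placement_and_access(req):
    """Approximate placement and access policy for the managed Kubernetes service."""
    return {
        # etcd nodes stay in the platform team's management tenant, hidden from the customer
        "etcd_tenant": "k8s-management",
        # master (control plane) nodes land in the customer's tenant, but without SSH access
        "master_tenant": req["team"],
        "customer_ssh_to_masters": False,
        # worker nodes are fully owned by the customer team
        "worker_tenant": req["team"],
        "customer_ssh_to_workers": True,
        # inside the cluster, the team gets full admin rights
        "kubernetes_role": "cluster-admin",
        # prod clusters draw from the production IP pool, everything else from non-prod
        "ip_pool": "prod" if req["environment"] == "prod" else "non-prod",
    }

print(placement_and_access(cluster_request))
```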
So you could create these clusters in whatever environment you need: a non-prod environment with more open access, or a prod environment with more limited access. We had to handle these Jio-specific cases as well in this Kubernetes as a service. On the whole, I think that because of the isolation OpenStack provides, it made a lot of sense for us to do Kubernetes as a service on top of OpenStack. We even did it on bare metal, but not many people use the bare metal Kubernetes-as-a-service environment, because it is just so much easier to work with cloud APIs to provision virtual machines and create these clusters. That's it from me; I think I've said a mouthful, and now I'd love to take your questions. If you want to reach out to me, my email is mayank.kapoor@ril.com, and you can also message me on Twitter at @mayankkapoor. So thank you, it was a pleasure talking to you, and now let me hear your questions.

Published Date : Sep 14 2020

