
Kamal Shah, Red Hat & Kirsten Newcomer, Red Hat | Red Hat Summit 2021 Virtual Experience


 

>>Hey, welcome to theCUBE's coverage of Red Hat Summit 2021, the virtual experience. I'm Lisa Martin, and I have two guests joining me. One is a Cube alum: Kamal Shah is back, and he's now the VP of Cloud Platforms at Red Hat. Kamal, it's great to have you back on the program. You're in a new role; we're going to talk about that. Thank you. And Kirsten Newcomer is here as well. She's the Director of Cloud and DevSecOps Strategy at Red Hat. Kirsten, welcome, and thank you for bringing the Red Hat vibe to the segment.

>>Absolutely, very happy to be here.

>>I'm looking forward to the conversation we're going to be having over the next 20 minutes or so. The last time you were on, Kamal, you were the CEO of StackRox. In January of 2021 came the announcement that Red Hat plans to acquire StackRox, and we'll be talking all about that. But I'd like to start with Kirsten. Give us your perspective from the Red Hat side: why is Red Hat a good fit for StackRox?

>>You know, there are so many reasons. First of all, as you know, Red Hat has been productizing Kubernetes since Kubernetes 1.0; OpenShift 3.0 shipped with Kubernetes 1.0, so we've been working with Kubernetes for a long time. StackRox is Kubernetes-native security: it embraces the declarative nature of Kubernetes and brings that to security. And Red Hat's enterprise customers, a great set across different verticals, are very security conscious; during my five years at Red Hat, that's where I've spent the majority of my time, talking with our customers about container and Kubernetes security. While there's a great deal of security built into OpenShift as it goes to market out of the box, customers need the additional capabilities that StackRox brings. Historically, we've met those needs with our security partners, and we have a great ecosystem of security partners. With the StackRox acquisition, we're now in a position to offer additional choice: if a customer wants those capabilities from Red Hat, tightly integrated with OpenShift, we'll have those available, and we continue to support and work with our broad ecosystem of security partners.

>>Excellent, customers always want choice. Kamal, give me your perspective. You were at the helm, the CEO of StackRox, the last time you were on theCUBE. Talk to me about the Red Hat acquisition from your seat.

>>Yeah, so as Kirsten mentioned, we were partners of Red Hat, part of the Red Hat partner ecosystem, and what we found is that there was both a great strategic fit and a great cultural fit between our two companies. So the discussions we had were about how we go and quickly enable our customers to accelerate their digital transformation initiatives: to move workloads to the cloud, to containerize them, to manage them through Kubernetes, and to make sure that we seamlessly address their security concerns, because security continues to be the number one concern for large enterprises, medium-sized enterprises, and frankly any enterprise operating today. So that was the impetus behind it. And I must say that so far the acquisition has been going very smoothly. We're roughly two months in, and everybody has been very welcoming, very collaborative, very supportive.
And we are already working hand in hand to integrate our companies and to make sure that we are working closely together to make our customers successful.

>>Excellent. We're going to talk about that integration in a second, but I can imagine it's challenging going through an acquisition during a global pandemic. That is one of the things, though, that I think lends itself to the cultural alignment, Kamal, that you talked about. Kirsten, I want to get your perspective. We talk about corporate culture, and corporate culture has changed a lot in the last year with so many of us being remote. Talk to me about the core values that Red Hat and StackRox share.

>>Actually, you know, that's been one of the great joys during the acquisition process. In particular, Kamal and Ali shared their key values and how they talk with their team, and some of the overlap just resonated so much for all of us: in particular, the sense of transparency that the StackRox executive team brings. That's a clear value for Red Hat, strongly maintained. That was one of the key things, along with the interest in containers and Kubernetes; the technology alignment was very clear, and we probably wouldn't have proceeded without that. But it's also the investment in people, the independence and the strong drive of the individuals, and supporting those individuals as they contribute to the offering, so that it really creates that sense of community and collaboration that is key. It's just a really strong overlap in cultural values, and we so appreciated that.

>>Community and collaboration couldn't be more important these days, and ultimately the winner is the customers. So let's dig in. Let's talk about what StackRox brings to OpenShift. Kirsten, take it away.

>>So as I said earlier, we really believe in continuous security at Red Hat, and in defense in depth. When we look at an enterprise Kubernetes distribution, that involves security at the RHEL CoreOS layer, security in Kubernetes, and adding things into the distribution and making sure they're there by default: the hardening any distribution needs, auditing, logging, identity and access management, just a wealth of things. Red Hat has historically focused on infrastructure and platform security, building those capabilities into what we bring to market. StackRox enhances what we already have and adds workload protection, which is really what changes when it comes down to it, especially if you're looking at hybrid cloud and multi-cloud: how you secure not just the platform but your workloads. We're moving from a world where you deploy antivirus or malware scanners on your VMs and your host operating system to a world where those workloads may be very short-lived, and if they aren't secured from the get-go, you miss your opportunity to secure them. You can't rely only on controls in the infrastructure; you do need them, but they need to be Kubernetes-native controls, and you need to shift that security left. You never patch a running container; you always have to rebuild and redeploy. If you patch the running container, the next time that container image is deployed, you've lost that patch. And so the whole ethos, the whole shift left, the DevSecOps capabilities that StackRox brings really add such value.
You can't just do DevSec or SecOps; you need the full infinity loop to really have DevSecOps, and StackRox, well, I'm going to let Kamal tell you about it, but they have so many capabilities that really drive that shift left and enable that closed loop. We're just so excited that they're part of our offerings.

>>So can you take us through that? How does StackRox facilitate the shift left?

>>Yeah, absolutely. StackRox, which we announced at Summit is now being rebranded as Red Hat Advanced Cluster Security, was really purpose-built to help our customers address use cases across the entire application lifecycle, from build to deploy to runtime. This is the infinity loop that Kirsten mentioned earlier, and one of our foundations was to be Kubernetes-native, to ensure that security is really built into the application as opposed to bolted on. Specifically, we help our customers shift left by securing the supply chain, making sure that we identify vulnerabilities early during the build process, before they make it to a production environment. We help them secure the infrastructure by preventing misconfigurations, again early in the process, because as we all know, misconfigurations often lead to breaches at runtime. We help them address compliance requirements by ensuring we can check for CIS benchmarks and regulatory requirements such as PCI, HIPAA, and NIST. And focusing on shift left doesn't mean that you ignore the right side, or ignore the controls you need when your applications are running in production, so we help them secure that at runtime as well, with threat detection, breach prevention, and incident response.

>>That built-in security, as you both mentioned, built-in versus bolted-on: Kirsten, talk to me about that as really kind of a door opener. We've talked a lot about security issues, especially in the last year; I don't know how many times we've talked about misconfigurations leading to breaches, and we've seen so many security challenges in the last year. Talk to me a little bit, Kirsten, about what customers' appetites are for saying: all right, now I've got cloud-native security, I'm going to feel more comfortable rolling out production deployments.

>>It's a great place to go. There are a number of elements to think about, and I could start by building on the example Kamal gave. When I think about building security into my pipeline so that when I deliver my containerized workloads they're secure, what if I miss a step, or what if a new vulnerability is discovered after the fact? One of the things that StackRox, or Red Hat ACS, offers is built-in policy checks, for example to see whether a container or running image has something like a package manager in it. A package manager can be used to load software that is not delivered with the container. So the idea is ensuring that you are getting built-in workload protection, with policies that are written for you, so you can focus on building your applications. You don't necessarily have to learn everything there is to know about the new attack vectors, because really it's new packaging, it's new technology; it's not so much that there are new attack vectors, but mostly it's a new way of delivering and running your applications.
That requires some changes to how you implement your security policies, so you want to ensure that the tools and the technology you're running on have those capabilities built in. That way, when we have conversations with our security-conscious customers, we can talk with them about the attack vectors they care about and illustrate how we are addressing those particular concerns. One of them being malware in a container: StackRox can look for a package manager that could be used to pull in code that could be exploited, and it can stop a running container. We can also do deeper data collection with StackRox. Again, one of the challenges when you're moving your security capabilities over from a traditional application environment is that containers come and go all the time. In a Kubernetes cluster, nodes, your servers, can come and go too; in a cloud-native Kubernetes cluster running on public cloud infrastructure, the nodes are ephemeral as well, designed to be shut down and brought back up. So you've got a lot more data that you need to collect, analyze, and correlate. I no longer have one application stack running on one or more VMs; things are moving fast, so you want the right type of data collection and the right correlation to have good visibility into your environment.
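To make that package-manager example concrete, here is a minimal, hypothetical sketch of the kind of check being described. It is not Red Hat ACS's implementation, just an illustration of flagging an image that still ships a package manager; the image name and the list of binaries checked are assumptions for the example.

```python
import io
import subprocess
import tarfile

# Binaries whose presence suggests a package manager shipped in the image.
# This list is an assumption for the example, not an official policy.
PACKAGE_MANAGERS = {"usr/bin/apt", "usr/bin/apt-get", "usr/bin/dnf",
                    "usr/bin/yum", "sbin/apk"}

def image_has_package_manager(image: str) -> bool:
    """Export the image's filesystem and look for package-manager binaries."""
    container_id = subprocess.run(
        ["docker", "create", image],
        check=True, capture_output=True, text=True).stdout.strip()
    try:
        exported = subprocess.run(
            ["docker", "export", container_id],
            check=True, capture_output=True).stdout
    finally:
        subprocess.run(["docker", "rm", container_id],
                       check=True, capture_output=True)
    with tarfile.open(fileobj=io.BytesIO(exported)) as tar:
        members = {member.name.lstrip("./") for member in tar.getmembers()}
    return any(binary in members for binary in PACKAGE_MANAGERS)

if __name__ == "__main__":
    image = "registry.example.com/myapp:latest"  # hypothetical image name
    if image_has_package_manager(image):
        print(f"policy violation: {image} still contains a package manager")
```

In the product, checks like this are expressed as built-in policies that can be evaluated at build, deploy, or run time and used to gate a pipeline, which is the guardrail behavior Kamal describes next.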
>>And if I can just build on that a little bit: the whole idea here is that these policies really serve as guardrails for the developers. It allows developers to move quickly and accelerate the speed of development without having to worry about hundreds of potential security issues, because there are guardrails that will notify them, with concrete recommendations, early in the process. The analogy I often use is that the reason we have brakes in our cars is not to slow us down but to allow us to go faster, because we know we can slow down when we need to. Similarly, these policies are really designed to accelerate the speed of development and accelerate the digital transformation initiatives our customers are embarking on.

>>And Kamal, I want to stick with you on the digital transformation front. We've talked so much about how accelerated that has been in the last year, with everything going on in such a dynamic market. Talk to me, Kamal, about some of the feedback you've gotten from StackRox customers about the acquisition, and how it has maybe been a facilitator of the many pivots businesses have had to make in the last year to go from survival mode to thriving business.

>>Yes, absolutely. The feedback from all of our customers, bar none, has been very, very positive. The acquisition has allowed us to invest more in the business: we've publicly stated that we are going to invest in adding more capabilities, and we are more than doubling the size of our teams, as an example, and really working hand in hand with the broader team at Red Hat to further accelerate the speed of development and our customers' digital transformation initiatives. So it's been extremely positive, because we're adding more resources, we're investing more, and we're accelerating the product roadmap compared to what we could do as a startup, as you can imagine. The feedback has been nothing but positive, so that's where we are today.

And what we're doing at Summit is rolling out a new bundle called OpenShift Platform Plus, which includes not just Red Hat ACS, which used to be StackRox, but also the Red Hat OpenShift hybrid cloud platform, Red Hat Advanced Cluster Management (ACM) capabilities, and Quay, the container registry. So we're making it easier for our customers to get all the capabilities they need to drive their digital transformation initiatives. It goes back to the customer centricity that Red Hat has, which was also a core value of StackRox, and the winner in all of this, we believe, is ultimately our customers, because we exist to serve them.

>>Right. And I really like that; if I could chime in on top of that a little bit: one of the things we've seen with the pandemic is that more of Red Hat's customers are accelerating their move to public cloud and away from on-premises data centers, partly because of so many people working remotely; it has really pushed things. So hybrid cloud is becoming even more key to our joint customer base, and by hybrid cloud I mean that they have some environments that are on premises as they're making this transition. Some of that footprint may stay on premises, but it might be smaller, and they may not have settled on a single public cloud. In fact, they often pick a public cloud based on where their development focus is: Google is very popular for AI and ML workloads, Amazon of course is used by pretty much everybody, and Azure is popular with a subset of customers as well. So we see our customers investing in all of these environments, and StackRox, Red Hat ACS, like OpenShift, runs in all of these environments. With OpenShift Platform Plus you get a complete solution that helps with multi-cluster management through ACM and with security across all of these environments. You can take one approach to how you secure your cluster, how you secure your workloads, and how you manage configurations: one approach no matter where you're running your containers and Kubernetes platform when you're doing this with OpenShift Platform Plus. You also get portability: if today you want to be running in Amazon and maybe tomorrow you need to spin up a cluster in Google, you can do that, and if you're working with EKS or GKE or AKS, you can do that with Red Hat ACS as well. So we really give you everything you need to be successful in this move, and, back to that word choice, we give you the opportunity to choose and to migrate at the speed that works for you.

>>So that's simplicity, that's streamlining. I've got to ask you the last question here in our last couple of minutes. Kamal, what has the integration process been like? As we said, the acquisition is just a couple of months in, but talk to me about that integration process. What has that been like?

>>Yeah, absolutely. As I mentioned earlier, the process has been very smooth so far, two months in, and it's largely driven by the common set of culture and core values that exist between our two companies.
From a product standpoint, we've been working hand in hand, because as I mentioned earlier we were already partners, on accelerating the joint roadmap that we have here. From a go-to-market perspective, the teams are well integrated: we are rolling out the bundle, and we're going to be rolling out additional options for our customers. We've also publicly announced that we will be open sourcing Red Hat ACS, formerly known as StackRox, so stay tuned for further news on that announcement. And again, two months in, everybody has been super collaborative, super helpful, super welcoming. The team is well settled, and we're looking forward to focusing on our primary objective, which is just to make sure that our customers are successful.

>>Absolutely, that customer focus is absolutely critical, but so is the employee experience, and it sounds like the ethos and the core value alignment that you both talked about were probably pretty critical to doing an integration during a very challenging time globally. I appreciate both of you joining me on the program today, sharing what's going on with StackRox, now ACS, and the opportunities for customers to have that built-in Kubernetes security. Thanks so much for your time.

>>Thank you.

>>For Kamal Shah and Kirsten Newcomer, I'm Lisa Martin. You're watching theCUBE's coverage of Red Hat Summit, the virtual experience.

Published Date : Apr 28 2021


Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>All right, we're five minutes after the hour, so all aboard, whoever's coming aboard. Welcome, everyone, to the tutorial track for our Launchpad event. For the next couple of hours we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Mills; I run curriculum development for Mirantis.

>>And I'm Bruce Basil Matthews. I'm the Western regional Solutions Architect for Mirantis, and welcome, everyone, to this lovely Launchpad event.

>>We're lucky to have you with us, Bruce; at least somebody on the call knows something about Docker Enterprise Container Cloud. And speaking of people who know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess. As for Docker Enterprise Container Cloud: this is Mirantis's brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce?

>>No, just that I think that we're trying to, uh, let's see, hold on: I think that we're trying to give you a foundation against which to give this stuff a go yourself. That's really the key to this thing, to provide some mini training and education in a very condensed period.

>>Yeah, that's exactly what you're going to see. The series of videos we have today is going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today you're going to be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. Just a little bit of logistics for the session: we're going to run through these tutorials twice. We're going to do one run-through starting seven minutes ago, up until, I guess, ten fifteen Pacific time; then we're going to run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, at ten fifteen Pacific time we're going to do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues you want to pull in for a second chance to see this stuff, we're going to do it all twice in this session. Any logistics I should add, Bruce?

>>No, I think that's pretty much what we had to nail down here. But let's dash into those, uh, feature films.

>>Let's do it. And like I said, don't be shy: feel free to ask questions in the chat; our engineers and Bruce and myself are standing by to answer your questions. So let me just tee up the first video here, and here we go. Our first video is going to be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as your mothership, right? This is what you're going to use to deploy all those little child clusters that you're going to use as, like, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there.
Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. 
Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Yes. Okay, that's the key. Secret X key. Right on. Let's kick it off. Yeah, So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the west side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Okay. Local clusters boat just waiting for the various objects to get ready. Standard communities objects here Okay, so we speed up this process a little bit just for demonstration purposes. Yeah. There we go. So first note is being built the best in host. Just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for a W s to create the instance. Okay. Yes. Here, beauty there. Okay. Mhm. Okay. Yeah, yeah. Okay. On there. We got question. Host has been built on three instances for the management clusters have now been created. We're going through the process of preparing. Those nodes were now copying everything over. See that? The scaling up of controllers in the big Strap cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Yeah. Yeah, just waiting for key. Clark. Uh huh. Start to finish up. Yeah. No. What? Now we're shutting down control this on the local bootstrap node on preparing our I. D. C. Configuration. Fourth indication, soon as this is completed. Last phase will be to deploy stack light into the new cluster the last time Monitoring tool set way Go stack like to plan It has started. Mhm coming to the end of the deployment Mountain. Yeah, America. Final phase of the deployment. Onda, We are done. Okay, You'll see. At the end they're providing us the details of you. I log in so there's a keeper clogging. You can modify that initial default password is part of the configuration set up with one documentation way. Go Councils up way can log in. Yeah, yeah, thank you very much for watching. >>Excellent. So in that video are wonderful field CTO Shauna Vera bootstrapped up management costume for Dr Enterprise Container Cloud Bruce, where exactly does that leave us? So now we've got this management costume installed like what's next? >>So primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Uh, comes into play the next piece of what we're going to show, I think with Sean O'Mara doing this is the child cluster capability, which allows you to then deploy your application services on the local cluster. That's being managed by the ah ah management cluster that we just created with the bootstrap. >>Right? So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those or what we're gonna use for workings. >>Exactly. Yeah. 
And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions, that I could listen to that guy read the phone book and it would be interesting. But anyway, you can tell him I said that.

>>He's watching right now.

>>Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrap node that you ran Docker Enterprise Container Cloud from to begin with is actually creating a kind deployment, a Kubernetes-in-Docker deployment, locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes, like, a three-manager Kubernetes cluster there, and then it, like, copies itself over to those Kubernetes managers.

>>Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says it's pivoting: I'm pivoting from my local kind deployment of the Cluster API to the, uh, cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or on bare metal. The targeting is, uh, abstracted.

>>Yeah, and those are the three environments that we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need that afterwards; it is just temporary to get things bootstrapped, and then you manage things from the management cluster, on AWS in this example?

>>Yeah, yeah. The seed cloud that did the bootstrap is not required anymore, and there's no interplay between them after that, so there are no dependencies on any of the clouds that get created thereafter.

>>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. So it's a similar temporary, transient bootstrapping model. Cool. Excellent. What about config there? It looked like there wasn't a ton, right? It looked like you had to set up some AWS parameters like credentials and region and stuff like that, but other than that, it looked heavily scriptable, like there wasn't a ton of point-and-click there.

>>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted towards a small, medium, or large deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, it makes it fairly easy to script this.

>>And if I understood correctly as well, that three-manager footprint for your management cluster is the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to see...

>>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer.

>>No, and I think that's a theme that will come back to throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults, just the best practices of how you should be managing these clusters, and we'll see more examples of that as the day goes on.
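As an aside, here is a small, hypothetical sketch of the kind of pre-flight checks Sean did by hand in the video: confirming which AWS identity is active and looking for EC2 instances already running in the target region before kicking off the bootstrap. It uses the standard boto3 library; the region value and the simple running-state filter are choices made for the example, not part of the product.

```python
import boto3

REGION = "us-west-1"  # assumed target region for the management cluster

def preflight_check(region: str) -> None:
    """Show the active AWS identity and any EC2 instances already running."""
    identity = boto3.client("sts").get_caller_identity()
    print(f"Bootstrapping as {identity['Arn']} in account {identity['Account']}")

    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
    running = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if running:
        print(f"Warning: {len(running)} instance(s) already running: {running}")
    else:
        print("No running instances found; environment looks clean.")

if __name__ == "__main__":
    preflight_check(REGION)
```

Running something like this from the bootstrap node gives you the same "clean and tidy" confirmation Sean looked for in the AWS console.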
Any interesting questions you want to call out from the chat, Bruce?

>>Well, there was, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that can then do either a regional cluster or a local child cluster. The child clusters, in each case, host the application services.

>>Right. So at this point we've got, in some sense, the simplest architecture for our Docker Enterprise Container Cloud: we've got the management cluster, and we're going to go straight to a child cluster. In the next video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two, regional clusters, if you need to manage regions, like across AWS regions, with Docker Enterprise Container Cloud.

>>Yeah, that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them.

>>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves; I strongly encourage you to do so. That should be in the docs, which I think Dale has helpfully, thank you, Dale, provided links for; that's all publicly available right now. So just head on into the docs via the links Dale provided here, and you can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials. There was also a question here about deploying this to Azure: not at GA, not at this time.

>>Yeah, although that is coming. That's going to be in a very near-term release.

>>I didn't want to make promises for product, but I'm not too surprised that it's going to be targeted. Very exciting. Cool. Okay, any other thoughts on this one, Bruce?

>>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that, uh, the gentleman put into the chat, giving you the step-by-step, it makes it fairly straightforward to try this yourselves.

>>I strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of where we are in the lifecycle here. Creating this management cluster, and just stop me if I'm wrong, Bruce, is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to start seeing next is creating child clusters, and this is what you're going to be doing over and over and over again: when you need to create a cluster for this dev team, or whatever other team it is that needs commodity Docker Enterprise clusters, you create these easily on demand. So this was once, to set up Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're going to do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster with Docker Enterprise Container Cloud.
When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the U I so you can switch. Project Mary only has access to development. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Nan Yes, this H Keys Associate ID for Mary into her team on the cloud credentials that allow you to create access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences, Right? Let's now set up semester search keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name, we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, you go to the clusters tab. We hit the create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one release version five point seven is the current release Onda Attach. Mary's Key is necessary Key. We can then check the rest of the settings, confirming the provider Any kubernetes c r D r I p address information. We can change this. Should we wish to? We'll leave it default for now on. Then what components? A stack light I would like to deploy into my Custer. For this. I'm enabling stack light on logging on Aiken. Sit up the retention sizes Attention times on. Even at this stage, at any customer alerts for the watchdogs. E consider email alerting which I will need my smart host details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I commend side on the route device size. There we go, my three machines obviously creating. I now need to add some workers to this custom. So I go through the same process this time once again, just selecting worker. I'll just add to once again, the AM is extremely important. Will fail if we don't pick the right, Am I for a boon to machine in this case and the deployment has started. We can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here we've created the VPC. We've created the sub nets on We've created the Internet gateway. It's unnecessary made of us and we have no warnings of the stage. Yeah, this will then run for a while. We have one minute past waken click through. We can check the status of the machine bulls as individuals so we can check the machine info, details of the machines that we've assigned, right? Mhm Onda. 
See any events pertaining to the machine areas like this one on normal? Yeah. Just watch asked. The community's components are waiting for the machines to start. Go back to Custer's. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway on the stage. The machines have been built on assigned. I pick up the U. S. Thank you. Yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. Mhm. No speeding things up a little bit. This whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured. Mhm, mhm. And then we go. Cluster has been deployed. So once the classes deployed, we can now never get around our environment. Okay, Are cooking into configure cluster We could modify their cluster. We could get the end points for alert alert manager on See here The griffon occupying and Prometheus are still building in the background but the cluster is available on you would be able to put workloads on it the stretch to download the cube conflict so that I can put workloads on it. It's again three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster Mhm all right Now that the build is fully completed, we can check out cluster info on. We can see that Allow the satellite components have been built. All the storage is there, and we have access to the CPU. I So if we click into the cluster, we can access the UCP dashboard, right? Shit. Click the signing with Detroit button to use the SSO on. We give Mary's possible to use the name once again. Thing is, an unlicensed cluster way could license at this point. Or just skip it on. There. We have the UCP dashboard. You can see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon, a data just being automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Mhm. So we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah. General dashboard of Cuba navies cluster one of this is configurable. You can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster, all right to scale the cluster on to add a notice. A simple is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger disks and you'll see that worker has been added from the provisioning state on shortly. We will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node would like to remove. 
Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workouts are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button, validating which release you would like to update to. In this case, the next available releases five point seven point one. Here I'm kicking the update by in the background We will coordinate. Drain each node slowly go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Girl, we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already on In a few minutes we'll see that there are great has been completed. There we go. Great. Done. Yeah. If you work loads of both using proper cloud native community standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our communities of workloads. He started playing or APs to that costume. So watching that video, the thing that jumped out to me at first Waas like the inputs that go into defining this workload cost of it. All right, so we have to make sure we were using on appropriate am I for that kind of defines the substrate about what we're gonna be deploying our cluster on top of. But there's very little requirements. A so far as I could tell on top of that, am I? Because Docker enterprise Container Cloud is gonna bootstrap all the components that you need. That s all we have is kind of kind of really simple bunch box that we were deploying these things on top of so one thing that didn't get dug into too much in the video. But it's just sort of implied. Bruce, maybe you can comment on this is that release that Shawn had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch. Wanted to upgrade part cluster. So you have really sharp eyes. You could see at the end there that when you're doing the release upgrade enlisted out a stack of components docker, engine, kubernetes, calico, aled, different bits and pieces that go into, uh, go into one of these commodity clusters that deploy. And so, as far as I can tell in that case, that's what we mean by a release. In this sense, right? It's the validated stack off container ization and orchestration components that you know we've tested out and make sure it works well, introduction environments. >>Yeah, and and And that's really the focus of our effort is to ensure that any CVS in any of the stack are taken care of that there is a fixes air documented and up streamed to the open stack community source community, um, and and that, you know, then we test for the scaling ability and the reliability in high availability configuration for the clusters themselves. The hosts of your containers. Right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, high. We've got an update for you, and it's fixes something that maybe you had asked us to fix. Uh, that all comes to you online as your managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on Yes. Please give me that update. Uh, not just the individual components, but again. 
It's that it's that validated stack, right? Not just, you know, component X, y and Z work. But they all work together effectively Scalable security, reliably cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old universal control plane. Doctor Enterprise. On top of that, Sean had the classic comment there, you know? Yeah. Yeah. You'll see a little warnings and errors or whatever. When you're setting up, UCP don't handle, right, Just let it do its job, and it will converge all its components, you know, after just just a minute or two. But we saw in that video, we sped things up a little bit there just we didn't wait for, you know, progress fighters to complete. But really, in real life, that whole process is that anything so spend up one of those one of those fosters so quite quite quick. >>Yeah, and and I think the the thoroughness with which it goes through its process and re tries and re tries, uh, as you know, and it was evident when we went through the initial ah video of the bootstrapping as well that the processes themselves are self healing, as they are going through. So they will try and retry and wait for the event to complete properly on. And once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things that don't don't do that. Just don't let it let it heal. Let take care of itself. And that's the beauty of these manage solutions is that they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers is they're bootstrapping themselves, reflect the expertise of the Mirant ISS crew that has been developing this content in these two is free for years and years now, over recognizing humanities. One cool thing there that I really appreciate it actually that it adds on top of Dr Enterprise is that automatic griffon a deployment as well. So, Dr Enterprises, I think everyone knows has had, like, some very high level of statistics baked into its dashboard for years and years now. But you know our customers always wanted a double click on that right to be able to go a little bit deeper. And Griffon are really addresses that it's built in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that you have access to so that you are allowed to add new alerts that then go out to touch slack and say hi, You need to watch your disk space on this machine or those kinds of things. Um, and and this is especially helpful for folks who you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set that they can easily say here, Can you watch these for us? And Miran tas can actually help do that with you, So >>yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without tons and tons of a long ah, long runway of learning about how to do that sort of thing. Just get out of the box right away. There was the other thing, actually, that you could sleep by really quickly if you weren't paying close attention. But Sean mentioned it on the video. And that was how When you use dark enterprise container cloud to scale your cluster, particularly pulling a worker out, it doesn't just like Territo worker down and forget about it. Right? 
Is using good communities best practices to cordon and drain the No. So you aren't gonna disrupt your workloads? You're going to just have a bunch of containers instantly. Excellent crash. You could really carefully manage the migration of workloads off that cluster has baked right in tow. How? How? Document? The brass container cloud is his handling cluster scale. >>Right? And And the kubernetes, uh, scaling methodology is is he adhered to with all of the proper techniques that ensure that it will tell you. Wait, you've got a container that actually needs three, uh, three, uh, instances of itself. And you don't want to take that out, because that node, it means you'll only be able to have to. And we can't do that. We can't allow that. >>Okay, Very cool. Further thoughts on this video. So should we go to the questions. >>Let's let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom regarding whether an a p I is available to do this. So in all these demos were clicking through this web. You I Yes, this is all a p. I driven. You could do all of this. You know, automate all this away is part of the CSC change. Absolutely. Um, that's kind of the point, right? We want you to be ableto spin up. Come on. I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. 
On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. 
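The kubeconfig sanity check described in that narration might look roughly like this on the bootstrap node; the folder path is approximated from the demo:

```bash
cd ~/releases/kaas-bootstrap          # folder name approximated from the narration
export KUBECONFIG=$PWD/kubeconfig     # written when the management cluster was bootstrapped

kubectl config current-context        # should point at the management cluster, not a child
kubectl get nodes -o wide             # expect three manager nodes and no workers
```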
Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. 
We can get >>it locally, yeah. And to that point of aggregating things under a single pane of glass: that's one thing that again kind of sailed by in the demo really quickly, but you'll notice all your different clusters were on that same screen in your Docker Enterprise Container Cloud management console — both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place. So it's just one pane of glass to look at for all of your clusters. >>Right. And this is kind of an important point I was realizing as we went through this: the mechanics are actually identical between the bootstrapped cluster for the original services and the bootstrapped cluster for the regional services. It's the management layer of everything, so you only have managers, no workers; it's at the child-cluster layer, below the regional or management cluster itself, that you have the worker nodes. Those are the ones that host the application services in the three-tiered architecture we've now defined. >>And another detail for those with sharp eyes: in that video you'll notice that when deploying a child cluster there's not only a minimum of three managers for a highly available management plane — you must also have at least two workers. That's required for workload failover: if one of them goes down, the other can potentially step in. So the minimum footprint of one of these child clusters is five nodes, and it's scalable from there, obviously. >>That's right. >>Let's take a quick peek at the questions here and see if there's anything we want to call out before we move on to our last video. There's a question about where these clusters can live. Again, I know these examples are very AWS-heavy — honestly, it's just easy to set demos up in AWS — but we can do things on bare metal, and OpenStack deployments on-prem, and all of this works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the provisioners, right? You establish an AWS provisioner, or a bare metal provisioner, or an OpenStack provisioner — and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where the child cluster is going to be hosted. >>Speaking of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, which is a priority for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme of why Kubernetes on bare metal: again, no hypervisor overhead, no virtualization overhead.
Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. 
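A hedged aside on the bare metal registration step above: before typing IPMI details into the UI, it can be worth confirming the BMC credentials and addresses out of band. The host, user, and password below are placeholders, and `ipmitool` is a generic utility, not part of Docker Enterprise Container Cloud:

```bash
# Placeholders for one of the hosts being registered.
IPMI_HOST=192.168.100.11
IPMI_USER=admin
read -r -s -p "IPMI password: " IPMI_PASS; echo

# Does the BMC answer, and do the credentials work?
ipmitool -I lanplus -H "$IPMI_HOST" -U "$IPMI_USER" -P "$IPMI_PASS" chassis status

# Cross-check the management MAC and IP you are about to type into the form.
ipmitool -I lanplus -H "$IPMI_HOST" -U "$IPMI_USER" -P "$IPMI_PASS" lan print 1 \
  | grep -iE 'ip address|mac address'
```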
You can see the process is identical to that of building a normal child cluster — and our deployment is complete. >>All right, so there we have it: deploying a cluster to bare metal, much the same as we did for AWS. I guess the biggest difference, step-wise, is that registration phase at the start, right? Rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles: you've got your manager profile, your worker profile, and your storage profile, which get labeled and allocated across the cluster as appropriate. >>Right. And I think the key differentiator here is that you have more physical control over the attributes — (love your cat, by the way) — the attributes of a physical server. So you can ensure that the SSD configuration on the storage nodes is taken advantage of in the best way, and the GPUs on the worker nodes, and that the management layer has sufficient horsepower to spin up and scale up the environments as required. One of the things I wanted to mention, though — if I can get this out without choking — is that Sean mentioned the load balancer, and I wanted to make sure it's clear that in defining the load balancer and the load balancer ranges, that is for the top of the cluster itself. That's for the operations of the management layer, integrating with your systems internally, so you can access the kubeconfigs and the API IP address in a centralized way. It's not the load balancer working within the Kubernetes cluster you're deploying — that's still kube-proxy, or a service mesh, or however you intend to do it. So it's an interesting initial step in building this, and we typically use things like MetalLB or NGINX or that kind of thing, established before we deploy this bare metal cluster, so that it can ride on top of that for the virtual IPs and such. [A rough MetalLB sketch follows at the end of this exchange.] >>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management cluster to our regional clusters to our child clusters on AWS and bare metal — and of course OpenStack is also available. Closing thoughts before we take a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you, and I'm really looking forward to doing it a second time, because we've got a good rhythm going with this kind of thing. But I think the key element we're trying to convey to the folks out there in the audience — and I hope you've gotten this out of it — is that this is an easy enough process that if you follow the step-by-step documentation that's been put out in the chat, you'll be able to give it a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancier use cases — like you need Hadoop, or cloud-oriented AI work — providing a bare metal service helps you get there very fast. So, right — thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out.
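Following up on the MetalLB remark in that exchange, here is a minimal sketch of what a layer-2 address pool looked like in the older, ConfigMap-based MetalLB releases. Install MetalLB itself per its own documentation; the namespace, ConfigMap name, and address range here are illustrative, and newer MetalLB versions configure this with CRDs instead:

```bash
# After installing MetalLB per its docs, an older ConfigMap-style layer-2 pool
# looked roughly like this; the address range is a placeholder for free IPs on
# the bare metal network.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.10.200-10.0.10.220
EOF
```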
So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. 
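The demo narration that follows does its pre-flight checks in the AWS console; roughly equivalent command-line checks on the bootstrap node might look like this. These are standard AWS CLI and Docker calls, not product-specific tooling:

```bash
env | grep -i '^AWS_' || echo "no AWS credentials in the environment yet"
aws sts get-caller-identity          # which account/role the CLI would use
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].InstanceId' --output text   # any stray instances?
docker info --format '{{.ServerVersion}}'   # Docker must be running for the kind-based bootstrap cluster
```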
Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. 
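Pulling the credential and kick-off steps from that narration together, a sketch of the command-line side might look like the following. The folder, variable, and script names are approximations of what the demo describes — confirm the exact invocation against the Docker Enterprise Container Cloud documentation before relying on it:

```bash
cd ~/kaas-bootstrap                         # unpacked bootstrap folder, license file already in place
export AWS_ACCESS_KEY_ID=AKIA............   # the bootstrap user created above, not the root account
export AWS_SECRET_ACCESS_KEY=...........
export AWS_DEFAULT_REGION=us-west-2         # must match the AMI referenced in the template

./bootstrap.sh all                          # assumed entry point; expect roughly 30-45 minutes end to end
```

While it runs, the kind-based bootstrap cluster on this node does the work and then pivots itself into the new management cluster on AWS, as described in the narration.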
So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. 
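To make the "YAML file with defaults" point concrete: the narration mentions a templates/aws directory in the bootstrap folder, so inspecting the values you might override could look roughly like this. Exact paths and key names will differ by release:

```bash
cd ~/kaas-bootstrap
ls templates/aws/                            # the directory called out in the demo
grep -rniE 'ami|instancetype|region|rootdevicesize' templates/aws/ | head -20
# Typical values you might override before bootstrapping (names illustrative):
#   region:         eu-central-1
#   instanceType:   c5.2xlarge
#   rootDeviceSize: 120
```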
>>One thing that people often ask about is the cluster footprint. And so that example you saw they were spitting up a three manager, um, managing cluster as mandatory, right? No single manager set up at all. We want high availability for doctrine Enterprise Container Cloud management. Like so again, just to make sure everyone sort of on board with the life cycle stage that we're at right now. That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Man. Yes, this H keys, Associate ID for Mary into her team on the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some ssh keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name. We copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, we got the clusters tab we had to create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one released version five point seven is the current release Onda Attach. Mary's Key is necessary key. 
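The SSH key Mary uploads in that walkthrough can be generated with stock OpenSSH; the key type, file name, and comment below are arbitrary choices, not requirements:

```bash
# Generate a key pair for node access; keep the private key on your workstation.
ssh-keygen -t ed25519 -C "mary@example.com" -f ~/.ssh/container-cloud-demo -N ''
cat ~/.ssh/container-cloud-demo.pub   # paste into the "add SSH key" dialog, or upload the file
```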
We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. 
It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. 
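Once the kubeconfig is downloaded as shown, using it is plain kubectl. A quick smoke test of the new child cluster might look like this; the file name and the sample workload are placeholders:

```bash
export KUBECONFIG=~/Downloads/kubeconfig-demo-cluster.yml   # the downloaded file name will differ
kubectl get nodes -o wide                  # three managers plus the workers that were added

# Throwaway smoke test:
kubectl create deployment hello --image=nginx:alpine
kubectl scale deployment hello --replicas=2
kubectl rollout status deployment/hello
kubectl expose deployment hello --port=80 --type=NodePort
kubectl get svc hello
```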
Just leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches; they resolve themselves and leave you with a functioning workload cluster within minutes. >>And if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, "Hey, can you spin up a Kubernetes cluster for development team A over here?" It would literally take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud, but you could do exactly the same thing with resources on-prem, or physical resources, and we'll be going through that later in the session. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little more because it just kind of glides by, is this notion of a cluster release. When Sean was creating that cluster, and also when he was upgrading it, he had to choose a release. What does that mean? Well, in Docker Enterprise Container Cloud we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine — all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes in enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale, test it for CVEs, test it for all the various things that would result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments and the like myself: if you're the one doing it yourself, it can get rather messy. So this makes it easy. >>Bruce, you were saying a second ago that it'll take you at least fifteen minutes to install your cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decisions about which components work well together and are best tested to be successful together as a stack. This release mechanism in Docker Enterprise Container Cloud lets us package up that expert knowledge and make it available in a really straightforward fashion as pre-configured release numbers. And, as Bruce was pointing out earlier, it gets delivered to us as updates in a pretty transparent way: when Sean wanted to update that cluster, a little "Update Cluster" button appeared once an update was available. All you've got to do is click it; it tells you here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a little header at the top of the screen that says you've got an update available — do you want me to apply it? >>Absolutely. Another couple of cool things that I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they're fairly high level in previous versions of Docker Enterprise; having the detailed dashboards that Grafana provides, I think, is a great value there.
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us. >>Yeah, that was really the joining of the Mirantis and Docker teams together — it let us take the best of what Mirantis had in the OpenStack environment for monitoring, logging, and alerting, and do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same toolsets. >>One other thing I want to point out about that demo, since I think there were some questions about it in our last go-around: that demo was all about creating a managed workload cluster. So Docker Enterprise Container Cloud was using those AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, install Docker Enterprise — remember, all that stuff — on top of those fresh new VMs created and managed by Docker Enterprise Container Cloud. There's nothing unique to AWS about that; it'll do the same on OpenStack, and on bare metal as well. But there's another flavor here, a way to do this for all of our long-time Docker Enterprise customers who have been running Docker Enterprise for years. If you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use it to manage those pre-existing workload clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud — plugging in external clusters is supported too. >>Yep. The kubeconfig elements of the UCP environment, the bundling capability, actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. So it's very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. More thoughts on this one, or shall we jump to the next video? >>I think we should press on. >>Time marches on, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster, which is what we use to create all our downstream workload clusters — and that's what we did in this video. That's maybe the simplest architecture, because it does everything in one region on AWS. A pretty common use case, though, is wanting to spin up workload clusters across many regions, and to do that we're going to add a third layer in between the management and workload cluster layers: our regional cluster managers. This is a regional management cluster that exists per region, and those regional managers will then be the ones responsible for spinning up child clusters across all those different regions. Let's see it in action in our next video. >>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, how to prepare for the deployment, a deployment overview, and then, just to prove it, we'll deploy a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory, and release version
ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. 
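For reference, the environment the narration just walked through might be expressed as the following shell exports before running the bootstrap script. The variable names mirror what is said in the demo but should be treated as approximations and checked against the product documentation for your release:

```bash
# Names mirror the narration; confirm the exact variables for your release in the docs.
export AWS_ACCESS_KEY_ID=AKIA............
export AWS_SECRET_ACCESS_KEY=...........
export KAAS_AWS_ENABLED=true
export REGION=eu-central-1                           # Frankfurt
export REGIONAL_CLUSTER_NAME=kaas-region-frankfurt   # descriptive, easy to identify
export KUBECONFIG=$PWD/kubeconfig                    # the management cluster's kubeconfig
```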
Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. 
>>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? 
Yeah, they were having workers to the same. It's a process. Just making sure that the worker label host like you are so yes, on Duin wait for the machines to deploy. Let's go through the process of putting the operating system on the notes, validating that operating system. Deploying Docker enterprise on making sure that the cluster is up and running ready to go. Okay, let's review the bold events. We can see the machine info now populated with more information about the specifics of things like storage. Yeah, of course. Details of a cluster, etcetera. Yeah, Yeah. Okay. Well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build, and that brings us to the end of this particular do my as you can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>Here we have a child cluster on bare metal for folks that wanted to play the stuff on Prem. >>It's ah been an interesting journey taken from the mothership as we started out building ah management cluster and then populating it with a child cluster and then finally creating a regional cluster to spread the geographically the management of our clusters and finally to provide a platform for supporting, you know, ai needs and and big Data needs, uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal thio in containers were pretty exciting. >>Yeah, absolutely. So with this Doctor Enterprise container cloud platform. Hopefully this commoditized scooping clusters, doctor enterprise clusters that could be spun up and use quickly taking provisioning times. You know, from however many months to get new clusters spun up for our teams. Two minutes, right. We saw those clusters gets better. Just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Dr Enterprise Container Cloud. Of course, there's many many more things to discuss about this and all of Miranda's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, police see us a training don Miranda's dot com, where we can offer you workshops and a number of different formats on our entire line of products and hands on interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad of that >>thank you all enjoy.

Published Date : Sep 17 2020

SUMMARY :

This launchpad session, presented by Mirantis's Western regional solutions architect with field CTO Shaun O'Mara driving the demos, walks through Docker Enterprise Container Cloud end to end: bootstrapping the management cluster, standing up the first regional cluster to support AWS deployments, creating child clusters for workloads, and finally deploying a bare metal child cluster for on-prem, GPU- and big-data-oriented workloads. Along the way the presenters cover the bootstrap configuration template, high availability for the management cluster, self-healing deployments, Kubernetes scaling guidance, built-in monitoring and logging with StackLight and Grafana, and audience questions between the videos.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mary | PERSON | 0.99+
Sean | PERSON | 0.99+
Sean O'Mara | PERSON | 0.99+
Bruce | PERSON | 0.99+
Frankfurt | LOCATION | 0.99+
three machines | QUANTITY | 0.99+
Bill Milks | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
first video | QUANTITY | 0.99+
second phase | QUANTITY | 0.99+
Shawn | PERSON | 0.99+
first phase | QUANTITY | 0.99+
Three | QUANTITY | 0.99+
Two minutes | QUANTITY | 0.99+
three managers | QUANTITY | 0.99+
fifth phase | QUANTITY | 0.99+
Clark | PERSON | 0.99+
Bill Mills | PERSON | 0.99+
Dale | PERSON | 0.99+
Five minutes | QUANTITY | 0.99+
Nan | PERSON | 0.99+
second session | QUANTITY | 0.99+
Third phase | QUANTITY | 0.99+
Seymour | PERSON | 0.99+
Bruce Basil Matthews | PERSON | 0.99+
Moran Tous | PERSON | 0.99+
five minutes | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+

Vaughn Stewart, Pure Storage | VMworld 2019


 

>> live from San Francisco, celebrating 10 years of high tech coverage. It's the Cube covering Veum World 2019. Brought to you by VM Wear and its ecosystem partners. >> Welcome back, everyone. Live cube coverage here in Mosconi, north of the Emerald 2019. I'm Javert David launch their 10th year covering the emerald. We here with this team Cube alumni Von Stuart, vice president technology at pier Storage. Great to see you guys another year, another privilege to sit >> down and have a little chat. >> Another. Another year that Vienna where doesn't die of something storage doesn't go away every year. Containers is going to kill the end where this is revealing. The EM wears resiliency as virtualization platform is just second to none has been, well, document. We've been talking about it because the operational efficiencies of what they've done has been great. You guys air kicking butt in storage on again, a sector that doesn't go away. You gotta put the data somewhere. Eso stores continues toe do Well, Congratulations. What's the big What's the big secret? Thanks. >> Well, we just shared our cue to >> financial results last week. 28% year on your growth. We are the by far the fastest growing storage company, and I think there's a lot of disruption for the legacy vendors. Right now. They're getting hit on all angles. Next. Gen. If vendors like us followed by the cloud as well this platforms like H. C, I think it's been it's been a tough sledding for similar legacy vendors. >> Talk about your relationship with the end, where and why that's been so important for pure because again, again, resiliency operations. At the end of the day, that's what the rubber hits the road, making developers happy, but operating it's a key. Yeah, if you look at >> so that's a really good question. If you look at our business, Veum, where is the number one platform deployed on top of pure storage platforms? And that's probably the case for most of the storage vendors because of their dominant position in the infrastructure. That means, as VM were evolves their product platforms right. Well, that's the pivotal acquisition Veum or Claude Foundation via McLaren AWS. And as that'll expand, you have to as a partner continued to jointly innovate, sometimes hand in hand. Sometimes, you know, on parallel paths to drive value into that that market for those customers or you're not gonna make it. And our investments of engineering wise are significant. We've had a large number of new capability that we've ruled out through the years that are specific to VM, where that are either integrations or enhancements to our platform. You know, we believe through external data points, we are the number one V balls vendor, which is, you know, which was something that being were launched about 78 years back. That kind of dip, but has risen back up. Um, and >> we were key, >> I think, um, design partner right now with the cloud platforms, the Via MacLeod Foundation as well as, ah, humor coordinative us. >> So, as you know, this is our 10th year VM world. You go back to 2010. There was what I used to call the storage cartel. And you weren't part of it, right? Had early access to the AP eyes you had. So obviously e m c was in there. Um, you were really the on lee sort of newbie to reach escape velocity. Your storage. Now there's basically two independent storage companies over a billion dollars. You guys a net up. Um, so >> when I was at both, >> you saw you saw >> the opportunity and okay, leaned in hard. 
Yeah, there >> was a time when he's >> paid off. But so why do you think, um, you were able to be one of the rare ones to achieve escape velocity when many people said that will never happen. You'll never see another $1,000,000,000 storage company. And then I'm interested in how you're achieving number one in Viv balls. In a world where it seems like, you know, the ecosystem is getting a little tighter between Dow Wand VM where? But how do you guys thrive in that dynamic? >> I think there's a challenge for all vendors in terms of market and try to get your message through right. If you if you one better does something well, the rest of the market tries too obvious. Get that. We've been fortunate enough that through our channel ecosystem, our system's integrator partners right to actually be able to demonstrate the technology that gain there enthusiasm to drive it into the market and then actually demonstrated to the customers. And so how does that show up? Uh, I think it's fair to say our platforms are more intelligent, they're more automated and they they operated a greater scale. Then then the competitors and you can look at this through one lens and say, Well, it's Veum or a P I says in that Make all the storage the same And it's like it does from a via more operational standpoint, but it doesn't mean how you deliver on that value Prop or what us. A platform deliver above and beyond is at parody, and that's really where we demonstrate a significant difference. Let me give you one example. We have a lot of customers. Ah, a lot customer growth in the last 12 months around Custer's who are deploying eight c i, along with all flash raise. Right? And David Floyd had reached out recently and said, Well, wouldn't one, you know, compete with the other? It's like, Yes, there's overlap. But what we're finding from customers is they're looking to say if my applications need to be more cost effective, easy to manage its scale, we actually want to put it on all flash rain, You say, How could that be? I'll give you one simply example. Do you know what it takes anywhere from 10 x 200 x, less time to upgrade your V and where infrastructure on a shared array. Then if it's on on hyper converged because you don't have to go through the evacuation and rehydration of all your data twice right? And so things like that, they're just really simple that you wouldn't pick up in like a marketing scheme. If you are a customer at scale, you go well. I can't afford 100 man hours. I can afford woman. And so it's It's simple things like that. It's rapid provisioning. It's not having Silas that are optimized for performance or availability or cost. It's about saying, you know your time to implement is one time life cycle on hardware. But it's probably something happens every quarter for the next three years, right? >> So this is your point about >> innovation in the innovative vendors. Your the modernization of storage is planning for these use cases where the old way didn't work. >> Yeah, yeah, you mentioned that you were 10 years now, and one of things that I've said over the last six or seven years being up yours, one of things I think is really interesting about pure is that our founder, John Call Grove, came out of the volume manager and file system space at Veritas, right? He was the founder for those products. 
He understood the intersection between managing a storage array and your application, and that goes through our ethos of our products, where I think a lot of storage platforms, a start up platforms come from George guys who worked on the Harbour side. And so they take a faster, you know, Piper faster from the media, and they make another box that behaves like the other box from an operational perspective. >> So he said, a C I a compliment or competitors. I'm still not sure which. Maybe it's both and then say, Same question for V. San. Yeah, how do you So, >> um, on air that we've put a lot of investment in and started one with via more around the middle of last year was putting V sand with pure storage flash race together, and what you see that materialized now is when you look at via MacLeod Foundation or via MacLeod in eight of us. The management domains must be visa, and that's so that you can have an instant out of the box controlled, um, management plane that Veum where you know, executes on and then you have workload domains and those could be on ah, hyper converge platform. Or they could be on third party storage. And when you put those on pure, then you again, all the advantages that we bring to bear as an infrastructure with all the same simplicity scale in lifecycle management that you get from from just, you know, the VM where std see manager. And so it works very well together. Now, look, I'm sure what I share with you here. They'll be some folks who are on the V sand team that they themselves are to be like, you know, B s. But that's the nature of our business. One >> of these I want to get your thoughts on this side. Vons. You've always >> been kind of on the cutting edge on all the conversations we've had. I gotta ask you about the container revolution, which not new doctor came out many many years ago. Jerry Chen when he funded those guys and we covered that extensively upset there was a small changed kubernetes is all the rage orchestrating the containers is a pivotal role in all the action happening here. It's big part of how things were with the app side. So the question is, how does continues impact the storage world? How do you see that being integrated in? There's talk of putting Cooper names on bare metal, so you start to see HC. I come back. Devices are important, she started. See hardware become important again with that? >> Well, I love you. Drop of pivotal there, right? First off, kudos to Vienna, where for the acquisition pill, little guys are exceptional. What they don't have is a lot of customers, but the customers they do have our large customers, right? So we've got a fair amount of pivotal on pure customers, and they are all at scale. So I think it's a great acquisition for VM, where by by far the most enterprise class form of containers today, >> and they've always kind of been the fold. Now they're officially in the fold. Yes, formalize it. >> And so now that the road map that was shared in terms of what via Moore looks to do to integrate containers into the Essex I platform itself right, it's managing V, EMS and containers next year. That's perfect in terms of not having customers have to pick or choose between which platform and where you're going to play something, allow them to say you can deploy on whichever format you want. It runs in the same ecosystem and management, and then that trickles down to the gun in your storage layer. So we do a lot of object storage within the container ecosystems. 
Today, a lot of high performance objects because you know the file sizes of instances or applications is much larger than you know, a document filed that you or I might create online. So there's a big need around performance in that space, along with again management at scale. It's >> interesting we sent about about Pivotal and I, By the way, I like the acquisition, too, because I think it was cheap. Any time you can pick up $4 billion asset for 800 million in cash, you know gets my attention. But Pivotal was struggling in the marketplace. The stock price never even came close to its I po. You know, it's spending patterns were down. Do you feel as though the integration will VM Where will supercharge Pivotal? >> I absolutely agree that I've had this view that the container ecosystem was really, um uh, segmented you had comes that built their products off a container. So save your twitter or your Facebook, right? The platform that your customers and interact interact with is all ran by containers. Then you have an enterprise. You have containers, which was more kind of classic applications. Right? And that would take time for the applications to be deployed. And so what did you see now for Mike stuff, right? See if you can run as a container. Right? Run is a container. As the enterprise app start to roll over, the enterprise will start to evolve from virtual machines, two containers. And so I think it's the timing's right. That's not to dismiss any of where people I think is built the brand right now, which is helping companies build next gen platforms. You know, after big sure that I don't name drop customers references to pull back there. Yeah, I think the time is right. >> I'm interested in how you guys can further capitalized on containers. And we've been playing around with this notion of of data assurance containers, Fring complexity. And so, you know, complexities oftentimes your friend, because you're all about simplifying complexity. But so how do you capitalize on this container trend in the next 3 to 5 years? So you've got storage >> needs for containers that either tend to be ephemeral or persistent. And I think when containers were virtually created, it was always this notion that would be ephemeral. And it's like, Yeah, but where's the data reside? Ultimately, there's been significant growth around data persistence, and we've driven that in terms of leveraging the flecks of all drivers that have been put into the community, driving that into our pure service orchestrator RPS O'Toole, which supports pivotal in kubernetes derivatives. Today again, we've got proven large scale installs on this. So it's it's, um, it's providing the same class of storage. Service is simplicity and elegance in your integrations that we have for Vienna, where we've been doing that across pivotal already. Pivotals. Interesting, right? They don't validate hardware, the only validate software. So they validate our P S O and having that same value prop for that that infrastructure, because they are scale, you never find a small scale containers ecosystem, and I keep referencing that point when you get to scale considerations around. What does it take to allow that environment to to remain online and holly performance are significant considerations and weak cell >> There. We'll talk about your event coming up. You guys have pierced accelerate September 17th and 18th Coming up Osti the VM where ecosystem that you're part of here. Big part of that. You guys have a lot of customers. 
I know you can't reveal any news, but what's expected at this show? What can people who are interested in attending, or who might peek in on some of the notable things happening, look forward to? >> A lot of orange, we know that. >> Number two, I know theCUBE is going to be there. >> For two days; we'll be there for two days. So hopefully you'll get a load of conversations with both our team, product management, engineering, maybe some of the leadership, but also customers. I think customers are always the best statement you can make about how you're doing in the market. I think you will see from us a number of announcements that I am prohibited from sharing today, but some really big things that we're going to introduce to the market, so you should be excited for that. And just a great showing from our partner and alliance ecosystem: obviously VMware will be there in force, as well as Red Hat, along with a number of the cloud and analytics players; Elastic is going to be there. >> So, you know, you guys like to be first at these shows. I mean, you weren't first with an all-flash array, but probably one of the first if not the first. The Evergreen thing ticked off a lot of people, like, why didn't we think of that? You were first with bundling NVMe in the whole thing. The announcement you guys made with NVIDIA, that was before anybody else. Your whole cloud play. You like to be first, so we expect another first next month. >> Hopefully we will deliver, and you're not going to get me to leak anything. >> Thanks for the insight. Vaughn Stewart, vice president of technology at Pure Storage. David, stay with us for more coverage. Robin Matlock, the CMO, is coming on, and of course tomorrow, Michael Dell, Pat Gelsinger, and more great guests, senior vice presidents from VMware from all different groups. We'll be asking the tough questions here on theCUBE. Thanks for watching.
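To make the persistent-storage point from the container discussion above concrete: once a flexvolume or CSI driver such as the Pure Service Orchestrator is installed, persistence for a containerized workload usually reduces to requesting a PersistentVolumeClaim against the appropriate storage class. The sketch below is a hedged illustration; the storage class name "pure-block", the namespace, and the size are assumptions, not a documented configuration.

```python
# Sketch: request persistent storage for a containerized workload by
# creating a PVC. Assumes a driver has registered a storage class;
# the "pure-block" class name and the 100Gi size are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="pure-block",  # assumed class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; the driver provisions a volume on the backing array")
```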

Published Date : Aug 26 2019

SUMMARY :

Vaughn Stewart, vice president of technology at Pure Storage, joins theCUBE at VMworld 2019. He discusses Pure's 28% year-over-year growth and why it reached escape velocity while legacy storage vendors struggle, the depth of the VMware partnership (vVols, VMware Cloud Foundation, VMware Cloud on AWS), how all-flash arrays complement HCI and vSAN, the container and Kubernetes push with the Pure Service Orchestrator, the Pivotal acquisition, and the upcoming Pure Accelerate event on September 17th and 18th.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Michael Dell | PERSON | 0.99+
Jerry Chen | PERSON | 0.99+
David | PERSON | 0.99+
David Floyd | PERSON | 0.99+
two days | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
$4 billion | QUANTITY | 0.99+
Von Stuart | PERSON | 0.99+
Mosconi | LOCATION | 0.99+
Veum | ORGANIZATION | 0.99+
Robin Madlock | PERSON | 0.99+
$1,000,000,000 | QUANTITY | 0.99+
800 million | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
John Call Grove | PERSON | 0.99+
Veritas | ORGANIZATION | 0.99+
MacLeod Foundation | ORGANIZATION | 0.99+
Mike | PERSON | 0.99+
2010 | DATE | 0.99+
September 17th | DATE | 0.99+
Vaughn Stewart | PERSON | 0.99+
Javert David | PERSON | 0.99+
10 | QUANTITY | 0.99+
MacLeod | ORGANIZATION | 0.99+
Claude Foundation | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Pivotal | ORGANIZATION | 0.99+
Today | DATE | 0.99+
Vienna | LOCATION | 0.99+
first | QUANTITY | 0.99+
18th | DATE | 0.99+
tomorrow | DATE | 0.99+
10th year | QUANTITY | 0.99+
Pat Girl | PERSON | 0.99+
Facebook | ORGANIZATION | 0.99+
28% | QUANTITY | 0.99+
twitter | ORGANIZATION | 0.99+
both | QUANTITY | 0.98+
George | PERSON | 0.98+
one | QUANTITY | 0.98+
two containers | QUANTITY | 0.98+
last week | DATE | 0.98+
Cooper | PERSON | 0.98+
today | DATE | 0.98+
one example | QUANTITY | 0.98+
twice | QUANTITY | 0.98+
McLaren AWS | ORGANIZATION | 0.98+
two independent storage companies | QUANTITY | 0.97+
Custer | ORGANIZATION | 0.96+
VM | ORGANIZATION | 0.96+
over a billion dollars | QUANTITY | 0.95+
VM Wear | ORGANIZATION | 0.94+
eight | QUANTITY | 0.94+
Cube | ORGANIZATION | 0.94+
200 x | QUANTITY | 0.94+
Veum World 2019 | EVENT | 0.93+
100 man | QUANTITY | 0.93+
many years ago | DATE | 0.92+
First | QUANTITY | 0.92+
one time | QUANTITY | 0.92+
Via MacLeod Foundation | ORGANIZATION | 0.91+
Essex | ORGANIZATION | 0.87+
Moore | PERSON | 0.85+
AP | ORGANIZATION | 0.85+
about 78 years back | DATE | 0.85+
seven years | QUANTITY | 0.84+
last year | DATE | 0.84+
last 12 months | DATE | 0.83+
One | QUANTITY | 0.82+
5 years | QUANTITY | 0.8+
next three years | DATE | 0.79+
Silas | TITLE | 0.78+
second | QUANTITY | 0.78+
VMworld 2019 | EVENT | 0.76+
pier Storage | ORGANIZATION | 0.75+
next month | DATE | 0.73+

Fabio Gori, & Kip Compton, Cisco | Cisco Live US 2019


 

>> Live from San Diego, California It's the queue covering Sisqo Live US 2019 Tio by Cisco and its ecosystem. Barker's >> Welcome Back to San Diego. Everybody watching the Cube, the leader in live tech coverage. This day. One of our coverage of Sisqo Live 2019 from San Diego. I'm Dave a lot with my co host to minimum. Lisa Martin is also here. Kip Compton is the senior vice president of Cisco's Cloud Platform and Solutions, and he's joined by Fabio Gori was the senior director of Cloud Solutions Marketing. Gentlemen, thanks so much for coming on the Cube. >> Thanks. Great to be here having us. >> You're very welcome, Fabio. So, Kip, Let's start with you. I want to start with a customer perspective. People are transforming. Cloud is part of that innovation cocktail, if you will. Absolutely. How would you summarize your customers? Cloud strategies? >> Well, I mean, in one word, I'd say Multi cloud, and it's what I've been saying for some time. Is Custer's air really expanding into the cloud and it really expanding into multiple clouds? And what's driving that is the need to take advantage of the innovation in the economics that are offered in the various clouds, and we sit like to say that they're expanding into the cloud because for the vast majority, their coast of our coasters, they have data centers. They're going to continue to have data centers. Nothing's going to keep running in those data centers now. What's happening is they thought it would be easy to start with everyone here. CEO Chuck likes to talk about, however, and thought they just moved to the cloud like moving to another neighborhood. Everything would be great. Well, when they're multiple clouds, you leaving some stuff on him. All of a sudden, what was supposed to be simple and easy becomes quite complex. >> Yeah, I've often said Well, multi club was kind of a symptom of multi vendor. But what you're saying is, essentially, it's it's becoming horses for courses, the workload matching the workload with the best cloud to solve that problem. >> I think it's a feature not above. I think it's here to stay. >> So how is that informing your strategy is Cisco? >> Well, you know, we're very customer responsive, and we see this problem and we look at how we can solve it and what customs have told us is that they want access to the different innovation in these different clouds and the different economic offers in each of these clouds. But they want to do it with less complexity, and they want to do it with less friction. And there's a bunch of areas where they're not looking for innovation. They don't need things work differently in networking. They want one way for networking to work across the multiple clouds and, frankly, to integrate with their own primus. Well. Likewise, for Security. A lot of Custer's air a little freaked out by the idea that there be different security regimes in every cloud that they use and maybe even different than what they already have on him. So they want that to be connected and to work management an application lifecycle. They're worried about that. They're like they don't want it to be different in every single cloud. A map Dynamics is a great example of an asset here. We got strong feedback for our customers that they needed to be able to measure the application performance in a common way across the environments. When imagine going to your CEO and talking about the performance of applications and having different metrics. 2,000,000,000 where it's hosted. 
It doesn't make any sense in terms of getting business insights. So I've dynamics is another example of something that Custer's one across all of that. So we really see Cisco's role is bringing all of those common capabilities and really reducing the complexity and friction of multi Cobb, enabling our customers to really take the most advantage possible. Multiple cloud. >> So Fabio kept talked about how moving to cloud is a little bit more complex than moving house from one neighborhood to the other. What are some of the key challenges that you guys are seeing? And how specifically is Cisco helping to ameliorate some of those challenges? >> Well, there are some challenges that are squarely in the camp where we can help. Others are related, and probably they're the toughest in clouds to fundamentally acquisition of talent. Right way can help with our custom off course with our partner ecosystem in this case, but a lot of that is really the culture of the company needs to change, right? We keep talking about develops way, keep talking about what does he mean operating this infrastructure in the cloud. It's a whole different ballgame, right? It's a continues integration, continues. Development is actually moving toe agile, kind of softer. The album models. And, you know, I very often do the analogy or what we've seen a few years ago in the data center space where we so actually, the end off the super specialization, like people on Lino in storage, all innit, working on ly computing. And then we saw the rise of people fundamentally expert in in the entire stack. We're seeing the same in the cloud with the rise of the Cloud Architect. These guys now are the ones they're behind building Cloud Centre of excellence. The issue. If you want guidance, where's the control remains into the other team's right. But this is very, very important. So it's overcoming, overcoming the talent gap and knowing how to deal with that on the bottom of that on the other side, so you get a free economy is technology challenges. For instance, embracing Q Burnett is becomes an embracing open source is a big, big challenge, right? You've gotta be able to master this kind of science if you want and trusting partners like, for instance, ourselves and others that will give you a curated versions of the softer image in life. Very often do customer meetings, and I ask how many how many tools to use in production for your Cuban Embassy plantation? And the answer ranges from 20 to 25. It's crazy, right? So imagine if 12 or three of these stools go away. What are you going to do? So you know, it's it's a whole different ball game really going to go into this kind of world. So Kip, we understand >> today, customers are multi cloud and future. It's going to be multi cloud. Think So. >> How do we make >> sure that multi cloud doesn't become least Domine, Denominator Cloud? Or, you know, you really say All I have is this combination of a bunch of pieces like the old multi vendor. How does multi cloud become more powerful than just the sum of its components? Is a good question, and we've really, I mean, way support a lot of different ways of accessing a cloud, Francisco, because we have such a broad Custer base and our goal is really to support our customers. However, they want to work. But we have made a bet in terms of avoiding the lowest common denominator on DH. 
Some people look ATT, accessing multiple clouds as sort of laying down one software platform and writing their software to one set of AP eyes that they didn't somehow implement in every cloud. And I think that does tend to get you to lowest common denominator because, you know, if you want to be on the Alexis Smart speaker, you have to be on the Lambda Service at a job. Yes, that's it. It doesn't exist anywhere else. And so if you're trying to create a common layer across so your clouds and that's your approach, you have to give up unique capabilities like that. And almost every consumer brand wants to be our needs to be on that election. Smart speaker. So we actually see it is more taking the functions that are not points of innovation, reducing the friction and leaving our customers with the time and energy to focus on taking advantage of their unique capabilities. And Fabio, you're partnering at Cisco with a number of their providers out there. Where are we with the maturity of all this? We were at the Cube con show and you know you're right. There's a lot of different tools. Simple is not what we're discussing, mostly out that show. So what do we solve today? And what kind of things does Cisco and its partners look to be solving kind of in the next 6 to 12 months? >> Partner? Partnering with this big players is absolutely a company priority for us, for Cisco, and one thing that's important is you, said multi vendor at the beginning. That was an interesting common, because if you think about it, multiple out is really business need, right? You want a hardness, innovation wherever it comes from. But then when you work with a specific provider in your reach, critical mass you want tohave integrations with this with this different providers, and that is the hybrid world. So hybrid is more of a technology need to streamline things like networking or security, or the way you storage because the poor things of this nature so that's three. Liza is a big need, and we'll continue, of course, adding more and more from the standpoint of partnerships every every one of the environments in our customers want to uses of interest for us, right to extend their policies to extend our reach. >> So just following up on that partnership, You guys air cloud agnostic, You don't own your own clouds, right? Not selling that. So you were at Google Cloud next to Europe on stage David Gettler, you've got a relationship with as your you got relationship with a W s. Obviously so talking about the importance of partnerships and specific strategy there in terms of your go to market, >> Well, you know, first, all the partnerships or critical I mean, it's you said we're not trying to move the workload Stark filed. And by the way, a lot of our customers has said that something that they value they see us is one of the biggest, most capable companies on the planet. That still is someone. I got sick and ableto work with them on. What's the right answer for their business? Not trying to move everything to one place and those partnerships a critical. So you're going to see us continue Teo building this partnerships. In fact, it's only day one here. I wouldn't be surprised if you saw some news this week on that. >> We were wondering if we're going to see somebody parachute in, that would be exciting. So why Cisco? Uh, ask each of you guys Maybe maybe, kid, you could You could give us the answer from your perspective and an Aussie. The same question. 
>> Well, from my perspective, it's based on what our customers tell us that again. You know, the things that were very good at things like networking and security are some of the biggest problems that our customs face in taking advantage of clouds and are some of things that they most want common across clouds. So we have a very natural role in this. I actually think back to the founding of Cisco, if you know the story. But it was Sandy Lerner and Limbo zakat Stanford. Their networks couldn't talk each other. You didn't remember back to the days like deck net and apple talk and all these things. It's hard to even recall because this new thing called peace pipe he obviously took over. That was the beginning of Sisko is building the multi protocol router that let those different islands talk each other. In many ways, Custer's see us doing sort of the same thing or want us to do the same thing in a multi cloud world. >> Well, just aside before I ask you, Fabian, a lot of people think that, you know, the microprocessor revolution killed many computers. IPads. Cisco kind of killed many computers to your point. But, Fabio, anything you would add to the sort of wisest >> guy would say, If you want my three seconds elevator peaches, we make multiple easier and more secure. Multiple this complex. So we definitely make it easier through our software. And we have three big buckets if you want there really compelling for for our customers, the 1st 1 is all of our software. Arsenal around weapon on his cloud center work looked a musician manager that helps last summer in building a unified application management kind of soft or sweet across home Prem and any of the public clouds that we've been talking about. The 2nd 1 is, as you said, we build on our DNA, which is, if you want and you heard Gettler today are multi domain kind of architecture, right, which is incredibly relevant in this case, you are not working in security. Fabric really is important there, and the thirties are ability because we don't compete with any other big players to partner with them and solve problems for our customers. So these three buckets are really, really important that deliver. Ah hi business value to >> our customers if I want to come back to something we're talking about is the Customs said the customers don't want a different security regime for each cloud, right? So it's complicated because, first of all, they're trying to struggle with their own security regime anyway, Right? Right? And that's transforming. What is the right right? Sorry security regime in this cloud here. How is it evolving? >> Well, me, What we're doing is we're bringing tools like Te Trae Shen, which now runs on prim and in the clouds. Things like stealth watch what's runs on permanent cloud and simply bringing them security frameworks that are very effective where I think a very capable of well known security vendor, but bringing them the capability to run the same capabilities in there on prem environments in their data centers as well as a multiple public clouds, and that just eliminates the scenes that hackers could maybe get into. It makes common policy possibles. They going to find policy around an application once and have it apply across Balto environments, which not only is easier for them but eliminates potential mistakes that they might make that might leave things open. Joe Hacker. So for us, it's that simple bringing very effective common frameworks for security across all these >> years. 
You certainly see the awareness of the security imperative moving beyond the SEC ops team. There's no question about that. It's now board level lines of business are worried about. For their digital transformation was data, but our organizations at the point where there operationalize ing security practices and the like, you know, to the extent that they should be >> well, I mean, I think when you say they should be, there's always room for improvement. Okay, but we're seeing just about all of our customers. I mean, as you said, securities is a sea level, if not a board level discussion and just about all of our customers. It's routinely top first or second concern on a survey when Custer's saw about what's concerning them with the clouds. And so we're seeing them really view, you know, security's foundational to what they're doing. >> I mean, it used to be. This sort of failure equals fire mentality. You somebody cracks through, you're fired. And so nobody talked about it. Now I think people realize, look, bad guys are going to get through. It's how you respond to them. Don't you think about how you using analytics, but yeah. So >> when we start just the >> way you were moving quickly >> towards, well, more or less quickly to a zero trust kind ofwork thie action assist you in this area every since the acquisition ofthe duo is performing exceptionally well. And if you want at the top of the security ecosystem in a multi polar world, you find identity because if you don't know who the user or the thing is, they're trying to use a certain application, you're in trouble because perimeter, all security off course is important. But you know that you're going to be penetrated, right? So it boils down to understanding who's doing what and re mediating a soon as possible. So it's a whole different paradigm >> of a security huge tail. When Francisco it's a business growing 21% a year, it's three more than three times the growth of the company. Overall, which is actually still pretty good. Five or 6%. So security rocketship? >> Yeah, Fabio, Just I noticed before we did the interview here that everybody is wearing the T shirts. The cloud takeover is happening here at the definite zone. So give those of us that you know aren't among the 28,000 you know here at the show. A little bit of what's happening from you're >> gonna do something unusual going, gonna turn that question to keep because he was actually on stage >> the second single. Why don't you just get that off? You know, I think it links back to it. Bobby. Always talking about what talent I mean, obviously the most important thing we bring our customers is the technology. We are a technology company, but so many of our customers were asking us to help them with this talent cap. And I think the growth of definite I mean, we're actually sitting here in the definite zone. It's got its own area Here. It's Sisk alive. It's gotten bigger every single year. Here it's just go live. The growth of definite is a sign of how important talent issue is as well as the new certifications that we announce we expanded our certification program to include software conjuncture with Dev. Net. So now people be able to get professional certifications Francisco not just on networking but on software capabilities and skills. And this is something both our partners, our customers have told us. They're really looking for now in terms of the takeover, it's something fun that the definite crew does. I think you're doing five of them during this week. 
I was really excited, Suzy. We asked us to be the first Eso es the opportunity. Kick it off. It does include beer. So that's one of the nice things. It includes T shirts, both things that I think are prevalent in the developer community. I'll say, Andi, just have an hour where the focus is on cloud technology. So we got everyone in cloud T shirts, a bunch of the experts for my product enduring teams on hand. We had some special presentations, were just many an hour focused on cloud >> Well, and I love that you're doing that definite zone. We've always been super impressed with this whole notion of infrastructures code. I think I've said many times of all the traditional enterprise cos you know computer companies, if you will hae t companies Cisco has done a better job of anybody than making its infrastructure programmable. We're talking about security before it's critical. If you're still tossing stuff over to the operations team, you're gonna be have exposures. Whereas you guys are in a position now and you talk talent, you're transitioning. You know the role of the C C I. A. And now is becoming essentially a developer of infrastructure is code, and it's a very powerful absolutely. I think we're >> helping our partners and our customers transform. Justus were transforming. I think it's kind of a symbiotic relationship that's super important to us. >> It's also important you think about the balancing act between agility, cost, called security or even data assurance. There. Tradeoffs involved the nobs. You have to turn, but you can. You can you achieve all three, you know, to optimize your business. >> Look, there may always be trade offs, but it's not sort of a zero sum game. All those we sing customers who've automated that through things like C I. D. Move Teo, you know, a different place in a much better place where They're not necessarily making trade offs on security to get better agility if they fully off if they fully automated their deployment chains. So they know that there are no mistakes there. They know that they have the ability to roll out fixes if they need to. They know that they're containers, for instance. They're being scanned from a security perspective, very every time they deploy them. They're actually able to build automated infrastructures that are more agile and more secure so that it's pretty exciting. >> So it involves the automated change management and date assurance talking about containers. That's interesting. Spinning up containers. You want to spend it down frequently. So the bad guys that makes it harder for them to get through. >> You talk about BM sprawling, right? Yeah, right. The Janus sprawling biggest issues out there. And by the way, you know, as you automate this infrastructure, rightly so you mention infrastructures code that you can do the other magic, which is introducing machine learning artificial intelligence. And today they get learn such Gupta gave school. Harold, thank you. Have a terrific demonstration off. You know, finding Rocco's analysis for very, very complex kind of problems that will take forever in the old fashion world. Now, all of a sudden you have the management system. In this case, the nation tells you actually where the problem is, and if you value there that you click a button and instantaneously you deploy, you know, new policies and configuration. That's a dream come true. Literally, you may say, probably we're the last ones to the party in terms of infrastructure players, the industry means. 
But we're getting there very quickly, and this is a whole new set of possibilities now, >> way talking the cube a lot, and I think it's really relevant for what I'm hearing about your strategies. This cloud is about bringing the cloud operating model to your data wherever your data lives. And that seems to be kind of underscore your your strategy. Absolutely. It's so edge cloud on Prem hybrid, you guys, Your strategy is really to enable customers to bring that operating model wherever they need to. Absolutely right >> that transparency is a big deal. I mean, application anywhere, eating. Did I anywhere? That's a world where we're going to >> guys thoughts. Final thoughts on Sisqo live this year. No, it's only day one gets a customer meetings tonight, but initial impression San Diego >> Well, it's It's a well, it's always great to be in San Diego on DH. It's a great facility, and we know our customers really enjoy San Diego is Well, I think we'll have a great customer appreciation event on Wednesday night. Um, but, you know, I was struck. Uh, you just have to the keynote. I mean, the world solutions was buzzing, and there seems to be is always a lot of energy. It's just go live. But somehow so far this season, maybe even a little bit more energy. I know we've got a number of announcements coming this week across a bunch different areas, including clouds. So we're excited for next few days. >> Well, you got the double whammy first half. We were in February when Barcelona guys don't waste any time. You come right back. And June, your final thoughts value. >> Oh, it's just so exciting to speak with customers and partners. Over here, you can touch their excitement. People love to come together and get old. The news, you know, in one place it's this tremendous amount of energy here. >> Keep copter Fabio Gori. Thanks so much for coming on The Cube. Appreciate it. Thank you for having your walkabout, keeper. Right, everybody. We'll be back with our next guest. David Out. A student of Aunt Lisa Martin. We're live from Cisco Live 2019 in San Diego, right back.

Published Date : Jun 11 2019

SUMMARY :

Kip Compton, senior vice president of Cisco's Cloud Platform and Solutions group, and Fabio Gori, senior director of cloud solutions marketing, join theCUBE at Cisco Live 2019 in San Diego. They discuss why customers are expanding into multiple clouds, how Cisco reduces multicloud complexity across networking, security, and application management with tools like AppDynamics, CloudCenter, Tetration, and Stealthwatch, partnerships with AWS, Azure, and Google Cloud, the talent shift toward cloud architects and Kubernetes, and the growth of DevNet and its new software certifications.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
David Gettler | PERSON | 0.99+
Fabio Gori | PERSON | 0.99+
Kip Compton | PERSON | 0.99+
Fabio | PERSON | 0.99+
Fabian | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Five | QUANTITY | 0.99+
San Diego | LOCATION | 0.99+
June | DATE | 0.99+
February | DATE | 0.99+
Wednesday night | DATE | 0.99+
Europe | LOCATION | 0.99+
Harold | PERSON | 0.99+
Bobby | PERSON | 0.99+
12 | QUANTITY | 0.99+
6% | QUANTITY | 0.99+
Kip | PERSON | 0.99+
Joe Hacker | PERSON | 0.99+
San Diego, California | LOCATION | 0.99+
Chuck | PERSON | 0.99+
first | QUANTITY | 0.99+
2,000,000,000 | QUANTITY | 0.99+
five | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
three | QUANTITY | 0.99+
both | QUANTITY | 0.99+
each cloud | QUANTITY | 0.99+
Dave | PERSON | 0.99+
second single | QUANTITY | 0.99+
Barcelona | ORGANIZATION | 0.99+
25 | QUANTITY | 0.99+
28,000 | QUANTITY | 0.99+
Suzy | PERSON | 0.98+
this week | DATE | 0.98+
thirties | QUANTITY | 0.98+
Andi | PERSON | 0.98+
today | DATE | 0.98+
this year | DATE | 0.98+
first half | QUANTITY | 0.98+
Sandy Lerner | PERSON | 0.98+
one way | QUANTITY | 0.98+
each | QUANTITY | 0.97+
an hour | QUANTITY | 0.97+
one word | QUANTITY | 0.97+
21% a year | QUANTITY | 0.97+
1st 1 | QUANTITY | 0.97+
tonight | DATE | 0.97+
one | QUANTITY | 0.96+
C I. D. Move Teo | TITLE | 0.96+
Gettler | ORGANIZATION | 0.95+
Custer | ORGANIZATION | 0.95+
Arsenal | ORGANIZATION | 0.95+
2nd 1 | QUANTITY | 0.95+
SEC | ORGANIZATION | 0.95+
three buckets | QUANTITY | 0.95+
Cisco Live 2019 | EVENT | 0.94+
three seconds | QUANTITY | 0.94+
both things | QUANTITY | 0.93+
last summer | DATE | 0.93+
one software platform | QUANTITY | 0.92+
three more than three times | QUANTITY | 0.92+
One | QUANTITY | 0.91+

Gunnar Hellekson & Andrew Hecox, Red Hat | Red Hat Summit 2019


 

>> live from Boston, Massachusetts. It's the queue covering your red hat some twenty nineteen lots. You buy bread hat. >> We'LL come back. Live here on the Cube as we continue with the coverage here in Boston, Massachusetts at the Boston Convention and Exposition Center had Summit two thousand nineteen stew Minimum. John Wall's a big keynote night, By the way, we're looking forward to that. We have a preview of that coming up in our next segment. Also walled wall interviews tomorrow morning from a number of our keynote presenters tonight. But right now we're joined by Gunnar Hellickson, whose director product management for rela Red hat. Gunnar. Nice to see you, sir. Good to see you And Andrew. He cocks Whose director Product Management of insights at Red Hat. Andrew, how are you doing today? >> Doing great. Happy to be here. >> Show off to a good start for you guys. Everything good to go? >> Yeah, it's been great. Uh, I got a great response from customers. Great response from analysts. There was real excited about the really >> Andrew. Yeah, we've had overflow it. All of our sessions on its insights, the hosted service. It's also nice to go alive and not get any >> pages that it's all good there, right? Yeah. So on the rail laid side. Big announcement today, right? It's gone public now available. Ah, lot of excitement. A lot of buzz around that, and insights has been added to that. So what is that doing now for your kind of your your suite of services and what you are now concerned? Sure. Absolute more about than you were yesterday. Well, >> I think one of the benefits we've had and making this changes it can create a virtuous loop. So insights as a service works by looking at the data that we have from running environment and seeing what is successful in what is not successful. So by having a smaller group of customers were would deliver the service using a good experience, but has a number of customers increases. That means we can deliver more value because we have a better understanding of what the world looks so for us, even though we've had a really great growth rate, being able to accelerate that by putting it inside of the rail subscription means we're gonna have access even more opportunities. Teo, look. Att Customer data find new insights and deliver even more value to them. >> So, Gunnar, you know, analytics is a piece that I'm hoping you can explain to our audience some of the some of the new pieces. Yeah, that that should be looking at. >> Yeah, sure. So So, with the insights tool down available to rent enterprise, the next customers they are getting a sentry said, there's there's a virtuous loop right where the more people that use it, the smarter the system gets and the benefit for the end user is now they get. I like to think of it is coaching so often there are security fixes, their opportunities for performance tuning. There's configuration fixes you could make, which may not be immediately obvious unless you've read through all the manuals right on DSO. How much better is it that Andrew Service can now come into a real a real customer and say, Hey, have you noticed that you might want to make this performance fix or hey, you might have forgotten this. So security fixed and it really makes the day to day life for the administrator much easier on also allows them to scale and manage many more systems much more efficiently. >> Yeah, I'm curious. You know, there's certain people. Was like, Wait, no, I understand my environment. 
You know, am I up for sharing what I'm doing versus everyone else? What's the feedback been? What are some of the misperceptions you want to make sure people understand about >> what it is and what it isn't? >> A customer talked to me about it in a very funny way. He said, Well, >> I don't need this for my team — you know, those guys are right out >> of my league. Actually, I think our customers feel the scale they have to operate at. They're managing a lot more stuff. But the real pressure is that the line of business is expecting things faster, and if they can't turn things around, the line of business is going to go get technology somewhere else. So for our customers, the ability to automate pieces of their workflow, including ensuring it's in a safe, optimized configuration, is a really key thing. I've never actually heard someone say they know what they're doing and don't need our help — well, I did once have one person say that — but everyone else gets the value of analytics. >> You brought up the word scale. I worked in operations for six years, and the question in the group I had was always: next quarter, next year, are you going to have more to do or less to do, and more or fewer resources? We understand what the answer is for most of those. So if I can't have automation, if I can't have smart tooling today, I'm not going to be able to keep up. We talk about how, at the core of digital transformation, data needs to drive what we're doing; otherwise you're going to be left behind. >> Yeah, that's right. And so how gratifying it is to finally have this. For fifteen years we've been getting support tickets and writing knowledge base articles. We've got all this technical expertise and architectural expertise, and that's not always easy to deliver to customers. We're a software company, so we can deliver them software, but that additional coaching, that additional expertise, is the kind of thing that's difficult to deliver without having a vehicle like Insights available. >> So as you roll out the new product and it's being used right now, you start seeing hiccups in the system, some speed bumps along the way. What are you seeing holistically that an individual user is not? What's the value of gathering this consensus and providing me, as maybe just a single user, with an insight into my situation? >> The way I like to think about it is, if you're a customer and you have a critical issue that causes downtime and impacts your business, that's really terrible, and you're probably going to learn from it. You're not going to do the same thing again, at least hopefully. But the customer next door, or your competitor next door, or your partner next door — they don't get that experience or learn from that experience. So I think of Insights as a way of knowledge recapture. Something happens once in one place, and the system acts as a hub for that information. Once we see it, we can capture what was discovered at one customer site and proactively alert all of our customers to avoid that scenario. It really lets us reuse the knowledge we're generating. As Gunnar said,
this expertise we're generating inside the company — we're already doing all these activities, but Insights lets us recapture that energy and send it back out to the rest of our customers much more efficiently than we ever could before. >> And you can deal with it one on one. So if I have a unique problem, you can help me identify it, and then you keep it in a reservoir that can be tapped into when other instances occur. You can say, this particular situation occurred in this context, and boom — here's the cause, here's the fix. >> Everything we do with Insights works that way. We learn from different experiences, but it's totally tailored to each environment. It's not just a whole bunch of knowledge base articles. It looks at the exact configuration for each customer and not only verifies that they're really going to hit the issue — not just that they might — but also generates automation to fix the issue. So we generate custom Ansible playbooks, an automation language that Red Hat obviously is invested in and that our customers and community love, specific to their environment, so they can go from discovery to fix in the safest and fastest way possible. (A sketch of this kind of check-and-remediate flow appears after this interview.) >> I'm hearing automation, and of course I immediately think about Ansible. So it seems there is tight integration there; they play across each other. How does that dynamic >> work? Sure. Insights is tightly integrated in the sense that you can think of Ansible as arms and legs — it can go do things for you — but it doesn't come with a brain, necessarily. The brain is our customers. Ansible is so easy to use that you can put it in the hands of knowledge experts inside different companies, and they can automate part of their job, and that's fantastic. What we're doing with Insights is bringing the Red Hat brain in as well. So we're using tools like Ansible to help collect the information we need to analyze an environment, and then tools like Ansible to go resolve the issues once we've identified what's there. We see these as totally complementary pieces of the portfolio. >> So, Gunnar, we've been talking about customers; let's talk about you on the inside. What are you getting out of this, ultimately, in terms of product improvement and whatever iterations you're going to bring on because of the insights you're gathering? I'd hope you don't get much from Andrew, but it's inevitable that there's going to be something that needs attention. >> Well, this is just part and parcel of regular product management practice, right? You look at your support tickets, you look at what customers are worried about, you look at what the escalations are, and that helps you. I think one change we have gone through is that the analysis of all that activity has been largely anecdotal — you always remember the last and loudest person who was yelling at you. Tools like Insights allow us to be much more data-driven as we make product management decisions. >> All right. So what should we be looking forward to? Give us a little bit of where things go from here.
>> Sure. You know, I think we'll see the service itself increase in quality as we get more people connected — in terms of recommendations and the breadth of recommendations. We've also started to do some interesting work to open it up to partners. So far it's really been Red Hat oriented — here's Red Hat's knowledge — but it turns out our partners want their stuff to run successfully on top of our platforms, and that's a huge value for them. So, for example, we have nine new recommendations for SQL Server running on RHEL that we generated in partnership with Microsoft. That's certainly the type of thing we want to keep investing in, and I think it's really impactful for customers, because they see vendors actually working together to create a solution for them instead of each of us just doing our own thing in different ways. So that's one change we're really excited about going forward. >> Yeah. I think focusing on the coaching for specific workloads is going to be really important. Optimizing the operating system is great — that's our job with RHEL, fixing the operating system — but for customers, the operating system is really an instrumental step toward operating something that is critical to their business. So to the extent that we can connect infrastructure providers, ISVs, and the entire partner ecosystem together with the operating system rules, we can give customers a very nice view of, and a very nice set of coaching on, their full stack. >> And that's the insight they're all looking for — literally what they're looking for. Gentlemen, thank you both for your time here today, and good luck with your continued packed sessions; hope that goes well for you. Back with more from Red Hat Summit here in Boston. You are watching the Cube, >> live from Boston, Massachusetts. It's the Cube, covering Red Hat Summit twenty nineteen, brought to you by Red Hat.
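The interview above describes Insights as checking each system's exact configuration against knowledge captured across the fleet, firing a recommendation only when a host will really hit the issue, and pointing at remediation automation. The Python below is a minimal, illustrative sketch of that check-and-remediate loop only; the rule, the host facts, and the playbook name are hypothetical and invented for the example — this is not the real Insights rule API or data model.

```python
# Illustrative only: a toy "knowledge recapture" rule in the spirit of the
# discussion above. All names here are hypothetical, not the Insights API.

from dataclasses import dataclass


@dataclass
class Recommendation:
    host: str
    summary: str
    remediation: str  # e.g. a reference to a playbook or fix procedure


def vulnerable_kernel_rule(host: str, facts: dict) -> "Recommendation | None":
    """Fire only when this host's exact configuration will actually hit the
    issue (kernel in the affected range AND the mitigation is disabled)."""
    kernel = facts.get("kernel_version", "")
    mitigation_on = facts.get("spec_mitigation_enabled", True)
    if kernel.startswith("4.18.") and not mitigation_on:
        return Recommendation(
            host=host,
            summary="Kernel mitigation disabled on an affected kernel",
            remediation="enable_mitigation.yml",  # hypothetical playbook name
        )
    return None  # host is not affected; stay quiet


def scan_fleet(fleet: dict) -> list:
    """Apply a rule learned from one customer's incident across the whole fleet."""
    rules = [vulnerable_kernel_rule]
    results = []
    for host, facts in fleet.items():
        for rule in rules:
            rec = rule(host, facts)
            if rec is not None:
                results.append(rec)
    return results


if __name__ == "__main__":
    fleet = {
        "web-01": {"kernel_version": "4.18.7", "spec_mitigation_enabled": False},
        "db-01": {"kernel_version": "5.4.2", "spec_mitigation_enabled": True},
    }
    for rec in scan_fleet(fleet):
        print(f"{rec.host}: {rec.summary} -> run {rec.remediation}")
```

The point of the sketch is the shape of the flow — per-host facts in, a targeted recommendation plus a remediation reference out — which is the "discovery to fix" loop the guests describe.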

Published Date : May 7 2019

SUMMARY :

Gunnar Hellekson and Andrew Hecox of Red Hat join the Cube at Red Hat Summit 2019 in Boston with Stu Miniman and John Walls to discuss the RHEL announcement, now generally available, and the addition of Red Hat Insights to the RHEL subscription. They describe Insights as a hosted service that analyzes each customer's exact configuration, recaptures knowledge discovered at one customer site to proactively coach every customer, and generates tailored Ansible playbooks so administrators can go from discovery to fix quickly and safely. They also preview opening the service to partners, including nine new recommendations for SQL Server on RHEL developed with Microsoft.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Gunnar | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Gunnar Hellickson | PERSON | 0.99+
Andrew | PERSON | 0.99+
Boston | LOCATION | 0.99+
six years | QUANTITY | 0.99+
John Wall | PERSON | 0.99+
Teo | PERSON | 0.99+
fifteen years | QUANTITY | 0.99+
Andrew Hecox | PERSON | 0.99+
tomorrow morning | DATE | 0.99+
Gunnar Hellekson | PERSON | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
next year | DATE | 0.99+
next quarter | DATE | 0.99+
today | DATE | 0.99+
yesterday | DATE | 0.99+
Red Hat | ORGANIZATION | 0.99+
tonight | DATE | 0.99+
each customer | QUANTITY | 0.99+
Ben | PERSON | 0.99+
one person | QUANTITY | 0.98+
Red hat | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
one place | QUANTITY | 0.98+
Both | QUANTITY | 0.98+
Ansel | ORGANIZATION | 0.97+
walled | PERSON | 0.97+
single user | QUANTITY | 0.96+
each environment | QUANTITY | 0.96+
Custer | ORGANIZATION | 0.96+
each | QUANTITY | 0.96+
twenty nineteen lots | QUANTITY | 0.96+
Red Hat Summit 2019 | EVENT | 0.9+
Adela | PERSON | 0.9+
nine new recommendations | QUANTITY | 0.89+
Andrew Service | ORGANIZATION | 0.85+
Boston Convention and Exposition Center | LOCATION | 0.84+
red hat | ORGANIZATION | 0.83+
twenty nineteen | QUANTITY | 0.83+
once | QUANTITY | 0.8+
one customer site | QUANTITY | 0.79+
playbooks | COMMERCIAL_ITEM | 0.68+
Mia | PERSON | 0.68+
God | PERSON | 0.67+
two thousand nineteen | QUANTITY | 0.64+
wall | PERSON | 0.64+
DSO | ORGANIZATION | 0.6+
Maura | ORGANIZATION | 0.57+
red | ORGANIZATION | 0.49+
Cube | PERSON | 0.49+

Eric Brewer, Google Cloud | Google Cloud Next 2019


 

>> fly from San Francisco. It's the Cube covering Google Cloud next nineteen, brought to you by Google Cloud and its ecosystem partners. >> Welcome back. This is Day three of Google Cloud. Next, you're watching the Cube, the leader in live tech coverage. The cube goes out to the events. We extract the signal from the noise. My name is Dave Volante. I'm here with my co host to minimum. John Farrier has been here >> all week. Wall to wall >> coverage, three days. Check out cube dot net for all the videos. Silicon angle dot com For all the news, Eric Brewer is here is the vice president of Infrastructure and a Google fellow. Dr Breuer, Thanks for coming on The Cube. >> Happy to be here to see >> you. So tell us the story of sort of infrastructure and the evolution at Google. And then we'll talk about how you're you're taking what you've learned inside a googol and helping customers apply it. >> Yeah, one or two things about Google is it essentially makes no use of virtual machines internally. That's because Google started in nineteen ninety eight, which is the same year that VM where started it was kind of brought the modern virtual machine to bear. And so good infrastructure tends to be built really on kind of classic Unix processes on communication. And so scaling that up, you get a system that works a lot with just prophecies and containers. So kind of when I saw containers come along with Doctor who said, Well, that's a good model for us and we could take what we know internally, which was called Boring a big scheduler and we could turn that into Cooper Netease and we'LL open source it. And suddenly we have kind of a a cloud version of Google that works the way we would like it to work a bit more about the containers and AP eyes and services rather than kind of the low level infrastructure. >> Would you refer from from that comment that you essentially had a cleaner sheet of paper when when containers started to ascend, I >> kind of feel like it's not an accident. But Google influenced Lena Lennox's use of containers right, which influenced doctors use of containers, and we kind of merged the two concepts on. It became a good way to deploy applications that separates the application from the underlying machine instead of playing a machine and OS and application together, we'd actually like to separate those and say we'LL manage the Western machine and let's just deploy applications independent of machines. Now we can have lots of applications for machine improved realization. Improve your productivity. That's kind of way we're already doing internally what was not common in the traditional cloud. But it's actually a more productive way to work, >> Eric. My backgrounds and infrastructure. And, you know, I was actually at the first doctor. Calm back in twenty fourteen, only a few hundred of us, you know, right across the street from where we were here. And I saw the Google presentation. I was like, Oh, my gosh, I lived through that wave of virtual ization, and the nirvana we want is I want to just be able to build my application, not worry about all of those underlying pieces of infrastructure we're making progress for. We're not there. How are we doing as an industry as a whole? And, you know, get Teo, say it's where are we? And what Google looking that Cooper, Netease and all these other pieces to improve that. What do you still see is the the the room for growth. 
>> Well, it's pretty clear that you Burnett is one in the sense that if you're building new applications for enterprise, that's currently the way you would build them now. But it doesn't help you move your legacy stuff on it for, say, help you move to the cloud. It may be that you have worth loads on Crim that you would like to modernize their on V EMS or bare metal, their traditional kind of eighties APS in Java or whatever. And how does Cooper Netease affect those? That's that's actually still place where I think things are evolving. The good news now is much easier to mix kind of additional services and new services using SDO and other things on GC people contain arising workloads. But actually it would say most people are actually just do the new stuff in Cooper Netease and and wrapped the old stuff to make it look like a service that gets you pretty far. And then over time you khun containerized workloads that you really care about. You want to invest in and what's new with an so so you can kind of make some of those transitions on fram. Ifyou'd like separate from moving to the cloud and then you can decide. Oh, this workload goes in the cloud. This work load. I need to keep on priming for awhile, but I still want to modernize it of a lot more flexibility. >> Can you just parts that a little bit for us? You're talking about the migration service that that's that's coming out? Or is it part of >> the way the Val Estrada work, which is kind of can take a V M A. Converted to a container? It's a newer version of that which really kind of gives you a A manifest, essentially for the container. So you know what's inside it. You can actually use it as in the modern way. That's migration tool, and it's super useful. But I kind of feel like even just being able to run high call the Communities on Crim is a pretty useful step because you get to developer velocity, you get released frequency. You get more the coupling of operations and development, so you get a lot of benefits on treme. But also, when you move to cloud, you could go too geeky and get a you know, a great community experience whenever you're ready to make that transition. >> So it sounds like that what you described with Santos is particularly on from pieces like an elixir to help people you know more easily get to a cloud native environment and then, ultimately, Brigitte to the >> class. That's kind of like we're helping people get cloud native benefits where they are right now. On a day on their own time. Khun decide. You know not only when to move a workload, but even frankly, which cloud to move it to right. We prefer, obviously moved to Google Cloud, and we'LL take our chances because I think these cattle native applications were particularly good at. But it's more important that they are moving to this kind of modern platform but helps them, and it increases our impact on the Indus. Sory to have this happen. >> Help us understand the nuance there because there's obvious benefits of being in the public cloud. You know, being able to rent infrastructure op X versus cap packs and manage services, etcetera. But to the extent that you could bring that cloud experience, Tio, you're on premises to your data. That's what many people want to have that hybrid experience for sure. But but other than that, the obvious benefits that I get from a public cloud, what are the other nuances of actually moving into the public cloud from experience standpoint in the business value perspective? 
>> Well, one question is, how much rewriting do you have to do because it's a big transition? Moved a cloud that's also big transition to rewrite some of your applications. So in this model, we're actually separating those two steps, and you can do them in either order. You can lift and shift to move to cloud and then modernize it, but it's also perfectly fine. I'm gonna modernize on Graham, read my do my rewrites in a safe controlled environment that I understand this low risk for me. And then I'm going to move it to the cloud because now I have something that's really ready for the cloud and has been thought through carefully that way on that having those two options is actually an important change. With Anthony >> Wavered some stats. I think Thomas mentioned them that eighty percent of the workloads are still on prams way here. That all the time. And some portion of those workloads are mission critical workloads with a lot of custom code that people really don't want to necessarily freeze. Ah, and a lot of times, if you gonna migrate, you have to free. So my question is, can I bring some of those Antos on other Google benefits to on Prem and not have to freeze the code, not have to rewrite just kind of permanently essentially, uh, leave those there and it take my other stuff and move it into the cloud? Is that what people are doing? And can I >> work? Things mix. But I would say the beachhead is having well managed Cooper and his clusters on Prem. Okay, you can use for new development or a place to do your read rights or partial read writes. You convicts V EMS and mainframes and Cooper Netease. They're all mix herbal. It's not a big problem, especially this to where it could make him look like they're part of the same service >> on framework, Right? >> S o. I think it's more about having the ability to execute modern development on prim and feel like you're really being able to change those acts the way you want and on a good timeline. >> Okay, so I've heard several times this week that Santos is a game changer. That's how Google I think is looking at this. You guys are super excited about it. So one would presume then that that eighty percent on Prem is gonna just gonna really start to move. What your thoughts on that? >> I think the way to think about it is all the customs you talked to actually do want to move there were close to cloud. That's not really the discussion point anymore. It's more about reasons they can't, which could be. They already have a data center. They fully paid for two. There's regulatory issues they have to get resolved to. This workload is too messy. They don't want to touch it at all. The people that wrote it are here anymore. There's all kinds of reasons and so it's gone. I feel like the essence of it is let's just interacted the customer right now before they make a decision about their cloud on DH, help them and in exchange for that, I believe we have a much better chance to be their future clown, right? Right, Because we're helping them. But also, they're starting to use frameworks that were really good at all. Right, if they're betting on coordinates containers, I like our chances for winning their business down the road. >> You're earning their trust by providing those those capabilities. >> That's really the difference. We can interact with those eighty percent of workloads right now and make them better. >> Alright. 
So, Eric, with you, the term we've heard a bunch this meat, we because we're listening customers where we're meeting them where they are now. David Iran analyst. So we could tell customers they suck out a lot stuff. You should listen to Google. They're really smart, and they know how to do these things, right? Hopes up. Tell us some of those gaps there is to the learnings you've had. And we understand. You know, migrations and modernization is a really challenging thing, you know? What are some of those things that customers can do toe >> that's on the the basic issues. I would say one thing you get you noticed when using geeky, is that huh? The os has been passed for me magically. All right, We had these huge security issues in the past year, and no one on G had to do anything right. They didn't restart their servers. We didn't tell them. Oh, you get down time because we have to deal with these massive security tax All that was magically handled. Uh, then you say, Oh, I want to upgrade Cooper Netease. Well, you could do that yourself. Guess what? It's not that easy to do. Who Burnett is is a beast, and it's changing quickly every quarter. That's good in terms of velocity and trajectory, and it's the reason that so many people can participate at the same time. If you're a group trying to run communities on Prem, it's not that easy to do right, So there's a lot of benefit Justin saying We update Custer's all the time. Wear experts at this way will update your clusters, including the S and the Cuban A's version, and we can give you modern ing data and tell you how your clusters doing. Just stuff. It honestly is not core to these customers, right? They want to focus on there advertising campaign or their Their oil and gas were close. They don't want to focus on cluster management. So that's really the second thing >> they got that operating model. If I do Antos in my own data center of the same kind of environment, how do we deal with things like, Well, I need to worry about change management testing at all my other pieces Most of the >> way. The general answer to that is, you use many clusters. You could have a thousand clusters on time. If you want that, there's good reason to do that. But one reason is, well, upgrade the clusters individually so you could say, Let's make this cluster a test cluster We'LL upgrade it first and we'LL tell you what broke. If anything, if you give us tests we can run the test on then once we're comfortable that the upgrade is working, we'LL roll it out to all your clusters. Automatic thing with policy changes. You want to change your quota management or access control. We can roll up that change in a progressive way so that we do it first on clusters that are not so critical. >> So I gotta ask a question. You software guy, Uh and you're approaching this problem from a real software perspective. There are no box. I don't see a box on DH there. Three examples in the marketplace as your stack er, Oracle Clouded customer and Amazon Outpost Where there's a box. A box from Google. Pure software. Why no box? Do you need a box? The box Guys say you gotta have that. You have a box? Yes, you don't have a box, >> There's it's more like I would say, You don't have to have a box >> that's ever box. Okay, that's >> because again all these customers sorting the data center because they already have the hardware, right. If they're going to buy new hardware, they might as well move to cloud the police for some of the customers. And it turns out we can run on. 
Most of their hardware were leveraging VM wear for that with the partnership we announced here. So that's generally works. But that being said, we also now partnerships with Dell and others about if you want a box Cisco, Dell, HP. You can Actually, we'LL have offerings that way as well, and there's certainly good reason to do that. You can get up that infrastructure will know it works well. It's been tested, but the bottom line is, uh, we're going to do both models. >> Yeah, okay. So I could get a full stack from hardware through software. Yet through the partnerships on there's Your stack, >> Right And it'll always come from Partners were really working with a partner model for a lot of these things because we honestly don't have enough people to do all the things we would like to do with these customers. >> And how important is it that that on Prem Stack is identical from homogeneous with what's in the public cloud? Is it really? It sounds like you're cooking growing, but their philosophies well, the software components have to be >> really at least the core pieces to be the same, like Uber Netease studio on a policy management. If youse open source things like my sequel or Kafka or elastic, those auto operate the same way as well, right? So that when you're in different environments, you really kind of get the feeling of one environment one stroll plane used. Now that being said, if you want to use a special feature like I want to use big query that's only available on Google Cloud right, you can call it but that stuff won't be portable. Likewise is something you want to use on Amazon. You can use it, and that part will be portable. But at least you'LL get the most. Your infrastructure will be consistent across the platforms. >> How should we think about the future? You guys, I mean, just without giving away, you know, confidential information, obviously not going to do that, but just philosophically, Were you going when you talk to customers? What should their mindset be? How should they repeat preparing for the future? >> Well, I think it's a few bets were making. So you know, we're happy to work on kind of traditional cloud things with Bush machines and discs and lots of classic stuff that's still important. It's still needed. But I would say a few things that are interesting that we're pushing on pretty hard won in general. This move to a higher level stack about containers and AP eyes and services, and that's Cuba nowadays and SDO and its genre. But then the other thing I think interesting is we're making a pretty fundamental bit on open source, and it's a it's a deeper bad, then others air making right with partnerships with open source companies where they're helping us build the manage version of there of their product on. So I think that's that's really going to lead to the best experience for each of those packages, because the people that developed that package are working on it right, and we will share revenue with them. So it's it's, uh, Cooper. What is open source? Tension flows open. Source. This is kind of the way we're going to approach this thing, especially for a hybrid and mostly cloud where they're really in my mind is no other way to do multi cloud other than open source because it's the space is too fast moving. You're not going to say, Oh, here's a standard FBI for multi cloud because whatever a pair you define is going to be obsolete in a quarter or two, right? What we're saying is, the standard is not particular standard per se. 
It's the collection of open source software that evolves together, and that's how you get consistency across environments — because the code is the same. In fact there is a standard, but we don't even know what it is exactly; it's implicit in the code. >> Okay, but other competitors say, okay, we love open source too, we'll embrace open source. What's different about Google's philosophy? >> Well, first of all, you can just look at the very high level of contribution back into the open source packages, not just the ones that we're doing. You can see we've contributed things like the Kubernetes trademark, so it's actually not a Google thing anymore — it belongs to the Cloud Native Computing Foundation. But also, the way we're trying to partner with open source projects is really to give them a path to revenue and a long-term future. The expectation is that makes the products better, and it also means we're implicitly the preferred partner, because we're the ones helping them. >> All right, Eric, one of the things that caught our attention this week is really extending containers with things like Cloud Code and Cloud Run. Can you speak a little bit to that, and to where that's going directionally? >> Yeah, Cloud Run is one of my favorite releases of this week. Cloud Code is great also, especially its VS Code integration, which is really nice for developers. But I would say Cloud Run kind of says we can take any container that has a stateless thing inside and an HTTP interface and make it something we can run for you in a very clean way. What I mean by that is you pay per call. In particular, we'll listen twenty-four seven in case a call comes, but if no call comes, we're going to charge you zero. So we'll eat the cost of listening for your packet to arrive, but if a packet arrives for you, we will magically make sure you're there in time to execute on it, and if you get a ton of connections, we'll scale you up — we could have a thousand servers running your Cloud Run containers. So what you get is a very easy deployment model that is, frankly, a generalization of functions: you can run a function, but you can also run not only a container with a managed runtime, App Engine style, but any arbitrary container with your own custom Python and image processing libraries, whatever you want. (A minimal sketch of this kind of stateless HTTP container appears after this interview.) >> You are our last guest at Google Cloud Next twenty nineteen, so thank you. Put a bow on the show this year. Obviously a bigger, better, shinier Moscone Center — it's awesome — and definitely a bigger crowd; you can see the growth here. But tie a bow on it, tell us what you think, take us home. >> I have to say it's been really gratifying to see the reception that Anthos is getting. I do think it is a big shift for Google and a big shift for the industry, and we actually have people using it, so I feel like we're at the starting line of this change. But it's really resonated well this week, and it's been great to watch the reaction. >> Everybody wants their infrastructure to be like Google's, and this is one of the people who made it happen. Eric, thanks very much for coming on the Cube. All right, keep it right there, everybody. We'll be back to wrap up Google Cloud Next twenty nineteen. My name is Dave Vellante, with Stu Miniman; John Furrier will be back on set. You're watching the Cube. We will be right back.
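Eric describes Cloud Run as taking any container that wraps a stateless workload behind an HTTP interface, scaling it to zero when no calls arrive and scaling it out under load. As a rough sketch of what such a workload looks like, the standard-library Python service below listens on the port supplied in the `PORT` environment variable (the convention these platforms use to tell the container where to serve) and keeps no state between requests; the handler logic itself is just a placeholder.

```python
# A minimal stateless HTTP service of the kind described above: it reads the
# serving port from the PORT environment variable and holds no state between
# requests, so a platform is free to scale instances from zero to many.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder work: a real service would call its own libraries here
        # (image processing, custom Python, etc.).
        body = f"hello from path {self.path}\n".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # injected by the platform
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Packaged into a container image, a service shaped like this fits the pay-per-call model described in the interview: nothing about it assumes a particular machine, and because no request depends on the previous one, the platform can add or remove instances freely.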

Published Date : Apr 11 2019

SUMMARY :

Eric Brewer, VP of infrastructure and Google Fellow, joins Dave Vellante and Stu Miniman on day three of Google Cloud Next 2019. He traces Google's container-first infrastructure from its internal scheduler to Kubernetes, and explains Anthos as a way to bring a consistent, open-source-based platform to on-premises and multi-cloud environments so customers can modernize workloads where they are and move them on their own timeline. He also covers managed Kubernetes and Istio removing operational burden, Google's bet on revenue-sharing open source partnerships, hardware partnerships with Dell, HP, and Cisco, and new releases such as Cloud Code and Cloud Run for serverless containers.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Volante | PERSON | 0.99+
Eric Brewer | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
John Farrier | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
David | PERSON | 0.99+
Eric | PERSON | 0.99+
Native Reading Foundation | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
HP | ORGANIZATION | 0.99+
Anthony | PERSON | 0.99+
Breuer | PERSON | 0.99+
Justin | PERSON | 0.99+
Thomas | PERSON | 0.99+
one | QUANTITY | 0.99+
Lena Lennox | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
zero | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
David Iran | PERSON | 0.99+
two options | QUANTITY | 0.99+
eighty percent | QUANTITY | 0.99+
one question | QUANTITY | 0.99+
Three examples | QUANTITY | 0.99+
python | TITLE | 0.99+
Dante | PERSON | 0.99+
John Furry | PERSON | 0.99+
two concepts | QUANTITY | 0.99+
both models | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
two steps | QUANTITY | 0.98+
this year | DATE | 0.98+
each | QUANTITY | 0.98+
second thing | QUANTITY | 0.98+
Uber | ORGANIZATION | 0.98+
this week | DATE | 0.98+
one reason | QUANTITY | 0.98+
SDO | TITLE | 0.98+
twenty fourteen | QUANTITY | 0.97+
Kafka | TITLE | 0.97+
twenty four seven | QUANTITY | 0.97+
two things | QUANTITY | 0.97+
Graham | PERSON | 0.97+
first doctor | QUANTITY | 0.97+
next nineteen | DATE | 0.96+
Day three | QUANTITY | 0.96+
FBI | ORGANIZATION | 0.96+
Cooper | PERSON | 0.96+
Google Cloud | TITLE | 0.96+
past year | DATE | 0.95+
Cuba | LOCATION | 0.95+
Cooper Netease | ORGANIZATION | 0.95+
Both | QUANTITY | 0.95+
Java | TITLE | 0.94+
Teo | PERSON | 0.94+
next twenty nineteen | DATE | 0.93+
first | QUANTITY | 0.92+
Cooper | ORGANIZATION | 0.91+
Oracle | ORGANIZATION | 0.9+
Prem Stack | TITLE | 0.9+
Amazon Outpost | ORGANIZATION | 0.85+
Brigitte | PERSON | 0.82+
Cloud | TITLE | 0.82+

Jason McGee, IBM | IBM Think 2019


 

>> Live from San Francisco. It's the cube covering IBM thing twenty nineteen brought to you by IBM. >> Welcome back to the Cube here in Mosconi North at IBM. Think twenty nineteen. I'm stupid. And my CLO host for the segment is Day Volante. We have four days, a water wall. Coverage of this big show happened. Welcome back to the program. Jason McGee, who is an IBM fellow, and he's the vice president. CTO of Cloud Platform at IBM. Jason, Great to see a >> guy to have fair. >> All right, So, Jason, we spoke with you at Que Con Way. We're saying it's a slightly different audience. A little bit bigger here. Not as many hoodies and jeans and T shirts a little bit more of a business crowd were still talking about clouds. So let's talk about your kind of your role here at the show. What's gonna keep you busy all week? >> S o? I mean, obviously, cloud is a huge part of what's going on. I think talking a lot about both public and private, about hybrid and some are multi called management capabilities. You know, my role as the leader called Platform. I'm talking a lot about platform as a service and communities and containers in the studio and kind of all the new technologies that people are using to help build the next generation of applications. >> All right, so we've had a few interviews today already talk about some of the multi cloud pieces. We had Sandberg on alien talk about eternity. So first you're gonna help correct the things that he got >> anything. Gang >> and service measures have been a really hot conversation the last year or so SDO envoy and the like t talk to us about where IBM fits into this discussion of service meshes. >> Yeah, so you know, I think >> we've been on this kind of journey as an industry of last year's to build anew at platform on DH service meshes kind of fit the part of the problem, which is, How does everything talk to each other and how to actually control that and get visibility into it? You know, IBM has had a founding role in that project. My team at IBM and Google got together with the guys, a lift to create it. Theo, what I'm most excited about, I think a twenty nineteen is that's that technology is really transitioning into something people are using in production and their applications. It's becoming more of kind of the default stack that people are using Really helping them do security invisibility control over their applications? >> Yeah. What? One thing that I heard just from the community and wonder if you could tell me is, you know, is dio itself. The governance model is still not fully into CNC s. Yeah, I heard a little bit, hasn't he? On some envoy? Of course. Out there in the like. So, you know, where are we? What needs to happen to kind of >> move forward? Yeah, you're right. So we're not there quite yet. We're pushing hard to make that happen. Certainly. From an IBM perspective, we absolutely believe that CNC F is the right home for Osteo as you mentioned some of the pieces like Envoy or they're ready. You know, C N c f has done such a tremendous job over the last eighteen months. Really rallying all the core technologies that make up this new coordinate A platform that we're building on costo is no out there's one. Oh, it's been sure people are using it. You know, that last step needs to happen to get into the community. >> So I have to ask you So things move so fast in this world, you go back to the open stack days, and that was going to change the world. And then Dakar Containers. 
And then Cooper netease, usto I can't help but thinking, Okay, This isn't the end of the line. What's Jason? What's the underlying trend here that's going on in the coding world? Yeah, sure. I'll put it in, maybe in >> my own lens. Given my history, you nominal WebSphere app server guy. You know that in the first half of my career I built that Andi, >> I think the fundamental >> problem solving is actually exactly the same. It's like, how do you build a platform that's app developers focus on building their APS, and I'll focus on all the plumbing and the infrastructure for running those aps. We did that twenty years ago in Java with APP servers, and we're doing it now with cloud, and we're doing it on top of containers. Things like usto like, while they're important in their own right there really actually Mohr important because they're just part of this bigger puzzle that we're putting together. And I think for the average suffer developer, they shouldn't really have to care about. What part of this deal will part is is Cuban eighties. And which part is K native like all that needs to come together into a single platform that they can use to build their APS and run them security. Right? And and I think it's Seo is just recognizing that next piece. You know, I think we've all agreed on containers and communities. We all talk about it all the time, and it's tio Is that next layer I catalyze securing >> control things. Yeah. So you teed it up nicely because we want out. Developers just be able to worry about the application. So you mentioned K native. The whole server list trend is one where you know the idea, of course, is I shouldn't have to worry about the infrastructure layer it just be taking care of me. We've talked about it for pass for a number of years. There are various ways to do it. So at, uh, Cube Colin and we've been looking for about the last year. Now you know, Where does you No, Crew, Burnett, ease and surveillance. How do they fit together? And K Native looks to be a pieces. Toe bridge. Some of those barrels? Absolutely. Where are we and what? What? What's? What's IBM doing there? >> So I think >> you rightly say that they should fit together like they're all part of this continuum of how developers build APS. And, you know, if you look at server, less applications, you know, there's the servos to mention I'm personally not a big service terminology fan. I think they're Maura about event oriented computing. And how do you have a good model for event oriented systems today? With Cuba Netease, anise Teo, I think we've built the base platform, I think, with a native what we're doing is bringing server lists and also just kind of twelve factor applications into the fold in a more formal way on when we get all those pieces together and we integrate them. I think then developers really unleashed to just build their application, whatever way it makes the most sense for what they're doing. And some things like server lists of Anna Marie. And it's going to be easier. And some problems. Straight containers will be an easier way to do >> it. You know, you say you don't like survivalists you like event better a function. So so explain that to the audience, like Why? Why should we care? And why is that different? How is that different? Yeah, I think, for >> a couple things. First off, the idea of server lists applies much more broadly than just what we think of this kind of function based program. 
You know, like any system that does a good job of managing and masking the infrastructure below me, you could consider a surveillance system, right? So when you just say server Lis, it's kind of like secondhand for functions. I'd rather we just kind of say, functions because that's actually a different programming model where you kind of trigger off of events and you write a functional piece of code and the system takes care of those details. You could argue that caught foundries, a server list system in the sense that you just as a developer anyway, you just see if push your code and it just runs and its scales and it does whatever you need, right? So part of my mission, you know, part of what I look at a lot is how do we bring all these things together in a way that is easy for the developer to stay focused. It steals a great example. You know, one of things were announcing this week is managed osteo support as part of our community service. What does that really mean? It means the developer can use the capability Viste without worrying about How do I install in Rennes D'oh, which they don't really care about? They just really care about how they get value out of its capability. >> Yeah, that's one of the things that having watched all these crew Benetti system and the like is how many companies really need to understand how to build this and run that because can I just get it delivered to me as a service? And therefore that you know that whole you know what I want out of cloud? I want a simple model to be able to consume, Not necessarily. I want to build the stuff that's important to me and not the rest of you. >> And I think if you look at the industry, there's really, I think, kind of two dominant consumption models that have actually emerged for people really using these things, there's public cloud platforms you're delivering things as a service. And then there's kind of platform software stacks like open shifts like I've been called private, which take all of these pieces and bring them together. And I think for most developers, they'll consume in one of those two ways because they don't really want the task of how to assemble all these pieces together. >> Tio, go back to the service piece like what? One distinction I heard made is okay. If I can really scale it down to zero, if I don't need to make it, then that can be serve a list. But there there's alternatives coming out there like what K native has. If I want to run this in my own environment, it's not turbulence because I do need toe. It might be functions, but I need to manage this environment. The infrastructure is my responsibility, not some >> service provider, right? And I think if you'll get server list to me, I was personally, I always think of it in kind of two scenarios. There's like surveillance as ah program remodel in a technology and surveillance as a business model, right? As a consumption model for payment. I think this programming model parts applicable in lots of cases, including private clouds. And in Custer, the business model parties, I think, frankly, unique to public. I'll thing that says I can just pay for the milliseconds of CPU, Compute that amusing and nothing more. >> That's a good thing for consumers. For >> the consumer, it's actually good thing for cloud providers because it gives us a way Tio reuse our infrastructure and creative ways, Right? 
But I think first and foremost, we have to get Mohr adoption of it as a programming model that developers used to build their applications and do it combined with other things. Because I think most realistic APs aren't gonna all be cirrhosis or all B Cooper nineties. They're going to be something. >> Yeah, right. It's like everything else. It's it's you know, what percent into the applications? Will this takeover? We had this discussion with virtual ization. We've been having this discussion with cloud and certain list, of course, is is pretty early in that environment. K native did I hear is there's some announcement this week that IBM >> so Soak a native, obviously is a project is kind of much earlier in its maturation and something like Castillo is. But we're making that available as part of our Republican private cards as well, Really? So people can get started with the ideas of K native. They can have an easy way to get that environment stood up, and they can start building those applications on DSO. That's now something that, you know, we're kind of bringing out as we work in the community to actually mature the project itself. >> Excellent. One of the things everybody's, of course, keeping an eye on. I saw Arvin Christian talking about the clouds. Tragedy is how red hat fits into all this. So we know you can't talk about kind of post acquisition. But red hats involved in K native. They're involved in a lot of the >> services and developers you gotta be exciting for. Yeah, >> it is. And obviously, like, Look, we've been partners for many years, you know, in on the open source side of things. We've worked closely with Red Hat for a long time. We actually view the world in very similar ways. You know, like you said, we're working on a native together. We've been working on Open West Feather. We obviously work in Cuban eighties together. So personally, I'm pretty excited about them coming in IBM. Assuming that acquisition goes through, they, you know, they fit into our strategy really well. And I think we'll just kind of enhance what we've all been working to build. >> All right, Jason, what else? What's looking? You talk about the maturity of these solutions, give us, um, guide post for the people watching the industry that we should be looking at as twenty nineteen rolls through >> us. So I think there's a >> couple things that, you know, I think this unified application platform notion that we've been kind of touching on here, I think will really come into its own in twenty nineteen. And and I would really love to see people kind of embraced that idea that we don't need. Three container stacks were not tryingto build these seven things. You know, one of things I'm kind of excited about with a native is by bringing server lists and twelve factor into Cuba Netease. It allows each of those frameworks to be kind of the best they can be at their part of the problem space and not solved unrelated problems. You know, I looked at the kind of server less versus coop camps, you know, the purest. And both think all problems will be solved in their camp. Which means they tried to solve all problems. Like, how do I do state full systems and server, Wes. And how do I bring in storage and solve all these things that maybe containers is better at. So I think this unification that I see happening will allow us to have really high efficiency, twelve factor and surveillance in the context of Koob and will change how people are able to use these platforms. 
I think twenty nineteen is really about adoption of all of this stuff. We're still really early, frankly, in the container adoption landscape, and I think most people in the broader industry are just getting their feet wet. They all agree, they're all trying, but they're just starting, and there's a lot of interesting work going on. >> Jason, is there anything holding people back? What do you see as some of the things that might help accelerate this adoption? >> Yeah, I think one of the things holding people back is just the diversity of options that exists in the cloud-native space. You've all probably seen the CNCF landscape chart — I've never seen so many icons on one picture in my life. That's really frightening for the average enterprise: to look at a picture like that and wonder which of these things are going to be useful, which are going to exist in a year, and how to make those bets. I think that's actually held people back a lot. The agreement around Kubernetes that happened in the last eighteen months or so was really liberating for a lot of people and helped them move forward. If we can all agree on a few more pieces, around Istio and Knative, it'll really help unlock people and get them actually doing it. And I don't think it's anything more than picking a project and starting. A lot of enterprises over-analyze everything; they just need to pick something, go, and learn. >> So pick some narrow use case, pick an app. >> Pick a use case and go do it, and you'll learn and figure out how it works for you. Then you do the second one and the fourth and the tenth, and before you know it you're on your way. That's what we did at IBM ourselves, and now we're running our entire public cloud on top of Kubernetes. >> Jason, any warnings from that experience that you'd share with users as they look forward? >> Yeah, we had a lot of learnings from it. One is we could run a heck of a lot more diverse workloads than we thought when we started. We're running databases, data warehouses, machine learning, blockchain — we're running every kind of application you didn't think could ever work on containers, on containers. So one of the lessons was that it's much more flexible than you think it is. The other thing is you really have to rethink everything: the way you do compliance, the way you do security, the way you monitor the system. All of those things need to change, because the underlying container system enables you to solve them in such a powerful way. So if you go into it thinking, oh, I'm only going to change this one part of how I do apps and the rest will stay the same, I think you'll find in a year that you're changing the whole operating model around your environment. >> Well, Jason — rethink everything. We're here at IBM Think twenty nineteen. Thanks, as always, for catching up with us. For Dave Vellante, I'm Stu Miniman. We've got three more days of live coverage here from Moscone North. If you're here, stop by and say hi, or reach out to us on the interwebs. Thanks so much for watching the Cube.
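Earlier in the conversation Jason frames serverless less as a billing model than as event-oriented computing: you write a small function that is triggered by an event and the platform takes care of the plumbing. As a generic illustration of that shape — not tied to Knative, OpenWhisk, or any specific IBM offering — the sketch below shows a handler that receives an event payload and returns a result, plus a tiny dispatcher standing in for the platform; the event fields and handler names are invented for the example.

```python
# Generic sketch of event-oriented ("functions") programming: the developer
# writes a handler; a platform-like dispatcher decides when to call it.
# Event fields and handler names here are made up for illustration.

import json
from typing import Any, Callable, Dict

# Registry mapping event types to handler functions.
HANDLERS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {}


def on(event_type: str):
    """Decorator that registers a function as the handler for one event type."""
    def register(fn: Callable[[Dict[str, Any]], Dict[str, Any]]):
        HANDLERS[event_type] = fn
        return fn
    return register


@on("order.created")
def handle_order_created(event: Dict[str, Any]) -> Dict[str, Any]:
    # The function only sees the event; scaling, retries, and wiring are the
    # platform's problem in the model described above.
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": total}


def dispatch(raw_event: str) -> Dict[str, Any]:
    """Stand-in for the platform: route a JSON event to its registered handler."""
    event = json.loads(raw_event)
    return HANDLERS[event["type"]](event)


if __name__ == "__main__":
    sample = json.dumps({
        "type": "order.created",
        "order_id": "A-17",
        "items": [{"price": 9.5, "qty": 2}, {"price": 3.0, "qty": 1}],
    })
    print(dispatch(sample))  # {'order_id': 'A-17', 'total': 22.0}
```

The handler stays focused on business logic, which is the "developers just build their apps while the platform handles the plumbing" goal Jason describes for the unified platform.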

Published Date : Feb 12 2019

SUMMARY :

Jason McGee, IBM Fellow and VP/CTO of IBM Cloud Platform, joins Stu Miniman and Dave Vellante at IBM Think 2019 in Moscone North to talk about building a unified application platform on containers, Kubernetes, Istio, and Knative. He discusses IBM's role in creating Istio with Google and Lyft, why event-oriented (serverless) computing and containers are converging into one platform, managed Istio support in IBM's Kubernetes service, the pending Red Hat acquisition, and the lessons IBM learned from running its own public cloud entirely on Kubernetes.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jason | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Jason McGee | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
David | PERSON | 0.99+
two ways | QUANTITY | 0.99+
Arvin Christian | PERSON | 0.99+
four days | QUANTITY | 0.99+
Mosconi North | LOCATION | 0.99+
fourth | QUANTITY | 0.99+
tenth | QUANTITY | 0.99+
twenty years ago | DATE | 0.99+
both | QUANTITY | 0.99+
last year | DATE | 0.99+
each | QUANTITY | 0.99+
Java | TITLE | 0.99+
one | QUANTITY | 0.99+
zero | QUANTITY | 0.99+
Theo | PERSON | 0.99+
first | QUANTITY | 0.99+
second | QUANTITY | 0.98+
seven things | QUANTITY | 0.98+
two scenarios | QUANTITY | 0.98+
today | DATE | 0.98+
three more days | QUANTITY | 0.98+
Sandberg | PERSON | 0.97+
anise Teo | PERSON | 0.97+
First | QUANTITY | 0.97+
One | QUANTITY | 0.97+
WebSphere | TITLE | 0.97+
Three container | QUANTITY | 0.96+
twenty | QUANTITY | 0.96+
this week | DATE | 0.96+
Red Hat | ORGANIZATION | 0.96+
twelve | QUANTITY | 0.95+
single platform | QUANTITY | 0.95+
first half | QUANTITY | 0.94+
Day Volante | TITLE | 0.94+
Republican | ORGANIZATION | 0.94+
Cooper | PERSON | 0.94+
Dakar Containers | ORGANIZATION | 0.93+
2019 | DATE | 0.93+
twelve factor | QUANTITY | 0.93+
Cuba Netease | TITLE | 0.92+
twenty nineteen | QUANTITY | 0.91+
twenty nineteen | QUANTITY | 0.91+
One thing | QUANTITY | 0.91+
last eighteen months | DATE | 0.91+
Maura | PERSON | 0.9+
two dominant consumption models | QUANTITY | 0.9+
One distinction | QUANTITY | 0.9+
Castillo | PERSON | 0.89+
K native | ORGANIZATION | 0.89+
couple | QUANTITY | 0.84+
CTO | PERSON | 0.84+
nineties | DATE | 0.84+
Cube Colin | PERSON | 0.83+
Rennes D'oh | ORGANIZATION | 0.79+