Dipak Prasad, Dell Technologies Cloud | Dell Technologies World 2020
>>From around the globe, it's theCUBE, with digital coverage of Dell Technologies World, the digital experience, brought to you by Dell Technologies. Hey, welcome back, everybody. Jeff Frick here with theCUBE. Welcome back to our ongoing coverage of Dell Technologies World 2020, the digital experience. Uh, not in person like other years — nothing is, this year, 2020 — but the digital experience allows us to do a lot of things that you couldn't do in person. And we're excited to have our next guest. He is Dipak Prasad, the director of product management for Dell Technologies Cloud. Dipak, uh, great to see you. >>Hello, Jeff. Nice to meet you as well. >>You too. So let's back up, like, 10,000 feet, because, you know, cloud came in with a big giant rage. I guess it's been a while now, with AWS and public cloud, and people putting their dev and test on there. And, you know, we've seen this explosion of public cloud, and then we have hybrid cloud and multi-cloud. And then, you know, basically people figured out that not everything can go to a public cloud. A lot of stuff shouldn't; some stuff's gonna stay in data centers for all different reasons, >>but >>basically it's horses for courses. So we're a little ways into this. How are you guys at Dell really thinking about cloud and helping your customers think about what cloud is beyond, you know, kind of the hype? >>Well, that's a great question, Jeff. At Dell, we think of cloud really as an operating model and as an operating experience rather than a destination. So it's interesting that you bring up public cloud and private cloud, but we take a step back and think of what does that experience really represent? So if you think of, uh, you know, what defines that cloud operating model, it's, ah, democratization of technology: access to resources through APIs, through self-service portals, the ability to pay as you go in a very simplified commerce experience, and the agility of cloud, you know, the promise of instant availability, of infinite scalability. Now, if you look at, you know, the landscape around this, until now that has only been delivered in a consistent way by public cloud vendors, which leads people to believe that really cloud is a destination, not an operating model. But we think that we are capable of bringing those experiences, those tenets of the cloud operating model, to the on-premises experience and really taking location out of the conversation. So this really allows our customers to focus more on their workloads and the visions they want to drive, and then they can fit their requirements, their application requirements, to the location where those resources are, regardless of having to worry about whether this is public or private. They will get the same operating experience, they will get the same scalability, the same simplified commerce, the same access to resources. >>Right. Well, let's talk about some of those things, because, as you said, there's a lot of behaviors that are involved in cloud and cloud operating. You know, one of the behaviors that I think gave the public cloud an early leg up was just simply provisioning, right? Simply, if somebody needs some capacity, they need some horsepower to go test something interesting — in the early days, you know, they didn't have to provision. They didn't have to put in an order with IT and wait for so long to get a box assigned to them or purchased or whatever, right?
They just swiped the credit card and went. How have you kind of helped people have that kind of ease of use, ease of, uh, ease of spin-up, ease of creation, or whatever the right verb is? Because I think that's a really core piece of what enabled early cloud adoption. >>No, absolutely, you're spot on. And that was a big part of it, that if somebody needed resources, instead of waiting for weeks and months, they could go on and sign up for those resources and get almost instantaneous access. And we believe that what we're doing in this area is really transforming the business. Today we can deliver resources to customers in their data center in 14 days, and we really are aggressively looking to cut that down further. So what this really means is not just shipping resources in 14 days, but actually delivering a cloud experience in the customer's data center, or a colocation, whatever, you know, location of their choice, in 14 days and making that available to the customers, not just through the traditional procurement process. We're actually very proud to announce the Dell Technologies Cloud Console, through which customers can, in a self-service way, order those resources and have them show up and be operational in their environment in 14 days. So we're really bringing that speed of cloud to the on-premise experience. >>Right. So how does it actually work? Do you pre-ship some amount of capacity beyond what you believe is currently needed, just to kind of forward-deploy, if you will, capacity? How does it work, from both the implementation strategy in terms of the actual compute and storage capacity, as well as on kind of the purchasing piece? Because those are two kind of very >>different workflows? No, that's a great question. So for us, our strengths are really in supply chain management, which allows us to build capabilities across the world, in areas from where we can ship to the customers almost on an on-demand basis. So as soon as we get an order that the customer needs a private cloud deployment in a certain location, we're able to mobilize those resources from those locations and have it instantiated in the customer's environment. So it's really built on a strength, over the years, of optimizing the supply chain, if you will, and just taking that to the next level. >>Okay, so we don't, uh — >>Yeah. >>No problem. I was gonna say that another great characteristic of cloud, right, is spinning up, which we hear about all the time, versus spinning down, right? The easiest example I always use: if you're running, you know, some promotion — if you're Pizza Hut, you're running a promotion for the Super Bowl, obviously, right, your demand for that thing is gonna be huge. You want to spin up to be able to take advantage of all the people cashing in their coupon, and then when the Super Bowl's over, >>you >>want to spin those resources down, because you're not going to necessarily need that capacity. How do you guys accomplish that type of flexibility in your solution? >>So in our subscription model, we have different ways to address customer environments. We allow customers to start very small and then grow the subscription as their requirements grow, and the key thing of our subscription, which is really unique, is the ability to co-terminate.
So, for example, if a customer started off on a three-year subscription with, uh, resources for, say, 100 virtual machines, and somewhere along the way they needed to add resources for 50 more virtual machines, they will pay for the 150 virtual machines, but that extra 50 virtual machines does not create an orphan or a child subscription. At the end of three years, everything terminates together, so it really gives them flexibility, with, you know, the ability to start small and not have to worry about vendor lock-in. Now, we started off with sort of a reserved-instance type of subscription model, but we're definitely bringing usage-based models as well, which allows even more flexibility with respect to spinning up and spinning down. >>Right. And then what are some of the real specific reasons that people go for this type of solution versus a public cloud? What are some of the real inherent advantages of doing this within my own infrastructure, my own data center, my own, you know, kind of virtual four walls, if you will? >>Yeah, you know, we strongly believe that the decision should really be guided by workload requirements. There are certain workloads that work really well in an on-premises environment. For example, you could take virtual desktop environments, VDI; that works really well from a performance standpoint in an on-premise environment versus a public cloud environment. Similarly, there are other workloads — we're not public cloud deniers — that are best suited for public cloud. But it really should be something that comes from understanding your application, understanding the latency requirements, understanding the data requirements for those applications. You know, what are your egress, uh, issues, or, you know, the profile of the workload that you're trying to implement — that should really be the driving force in where the workload is placed. >>And then, uh, tell us a little bit about the partnership with VMware, because that's a huge asset that you have, you know, now, basically side by side, and you can leverage the technology as well as a lot of the assets that are in VMware. How does that change the way you guys have taken the Dell Technologies Cloud Platform to market? >>It really is a differentiating factor for us. From a technology standpoint, it allows us to bring the best of both worlds — the best of the hardware infrastructure as well as the best of the cloud stack, the cloud software infrastructure — together in one cohesive and well-developed package. So, uh, the Dell Technologies Cloud Platform, from a technology standpoint, is implemented with our VxRail appliances, which is a hyperconverged infrastructure, as well as VMware Cloud Foundation from a software standpoint. Now, the co-developed and jointly engineered capabilities allow for a unique feature of VMware Cloud Foundation, where it can do lifecycle management of the entire stack, both the hardware and the software, from a single interface. So it understands VxRail, it understands the different firmware levels and the VxRail Manager software versions, etcetera, and then it will automatically select what is the best, well-tested, and supported software bundle that can be deployed without causing, you know, the typical issues with version mismatches and trying to chase down different hardware compatibility matrices, etcetera. All of those are eliminated, so it's an integrated lifecycle management experience. >>That's great. I'm sorry, I have
I'm sorry I have >>a little bit, a little bit of a lot of here, so I I apologize. >>I >>was just gonna say you've been at this for a while. Your product, you know, product management. So you're really thinking about speeds and feeds and you're thinking about roadmap and futures? I wonder if you can share your perspective on this evolution from kind of this race of to pure public cloud to this. This big discussion I think we had packed Elson. You're talking about a hybrid cloud back at being where 2013. So then, you know kind of this hybrid cloud and multi cloud and really kind of this maturation of this space as we as we've progressed for Ah, while now probably 10 years. >>Yeah. Yeah. And, uh, majority of our customers live in a multi cloud world. They have resource is that they consumed from one or more multi hyper sorry, uh, public cloud vendors and they have one or more on premise vendors as well, For their resource is and managing that complex environment across multiple providers with different skill set different tools, different sls. While it sounds really interesting to, you know, have workload drive your your deployment and place the workloads where they're best suited. It does prevent. It does present a challenge off managing a complex and and getting even more complex by the day, multi cloud environment. And that's where we think we have an advantage. Uh, based on some of the work that we're doing with the Dell Technologies Cloud console to bring a true multi cloud experience to our customers. Not one of the benefits of not being a, you know, a public cloud provider is that we are agnostic toe. All public cloud providers were fully accepting that certain workloads need to live in those environments. And through our cloud council, we will make it easy for customers to manage not only their on premises, assets and on premises. Cloud resource is, but also cloud resource is that reside in multiple public cloud vendors? >>That's good. Yeah, because it helps, right, because they've got stuff everywhere. It's like that, you know, there is no del technology, right? There's a lot of there's a lot of people that work there. There's a lot of project. There's a lot of, you know, kind of pieces to that puzzle. I wonder too. If you could share your perspective on kind of application modernization, right, That's always another big, you know, kind of topic. You should You should you take those old legacy APS. And could you should you try to rebuild them in, um, or cloud native way using containers and and all this flexibility and deploy them or, you know, which one. Should you just leave alone right there, running fine. They've been running fine for a while. They've got some basic core functionality that may be do or don't need toe to kind of modernize if you will. And maybe those resources should be spent on building in a new applications and new kind of areas of competitive differentiation. When you're working with their clients, how do you tell them to think about at modernization? >>Yeah, we looked at it from a business requirement standpoint. Off how what end goals. A customer trying to achieve through that application. And in some cases, you know, on you cover the spectrum, right there. Some cases modernization just means swapping out the hardware and putting it, putting that application on a more modern, more powerful hardware. At the other end, it z you know, going toe assassin model off, you know, everything available through through a cloud application. 
And in between those two extremes there's, you know, virtualization, there is refactoring, there's containerization and microservices-based implementation. But it comes down to understanding what that application is meant to deliver, for whom, and what business requirements and business objectives it fulfills. That's what we use as a guiding principle on how to position application modernization to customers. >>All right, that's super helpful, because I'm sure that's a big topic. And, you know, there are probably certain apps that you just shouldn't touch. You should probably just leave them alone — they're running just fine, let them do their thing. >>Right, right. I'm sorry. No, it is interesting — I was in a conversation with a customer just earlier today where they have a portion of their infrastructure, some applications, that they absolutely wanted to leave alone and just change out the underlying hardware. But there are other applications where they really want to adopt containerization and refactor those, rewrite those applications, so that they can have more scalability and more flexibility around that. So it really is determined by the needs. Yeah. >>Um, so last question. Dell Tech World this year was a digital experience, like all the other shows that we've seen here in 2020. But it's a huge event, right, a big, big show, and we're excited to be back to cover it again. But I'm curious if there are some special announcements — within such a big show, sometimes things get lost a little bit here and there — any special announcements you want to make sure get highlighted that people may have missed within this kind of sea of content over the last several days? >>Two major things that I'm very excited to share with you. One is Dell Technologies Cloud Platform. We're actually discussing and talking about Dell Technologies Cloud Platform in the concept of instance capacity blocks. So in the past we talked about it with respect to nodes — uh, you know, a Dell Technologies Cloud Platform, you can have, you know, so many nodes in it to power your on-premises cloud resources — but we really have changed the conversation and looked at how cloud customers are consuming those resources, and we really want to drive focus to that and introduce the concept of instance capacity blocks. Instances are — think of it as a workload profile, you know, CPU and memory put together, and then, uh, in different combinations in a predefined way to address different workload needs. So this really changes the conversation for our customers, in that they don't have to worry about designing or speccing out the hardware platforms, but really understand how many resources they need — how much, you know, processing power, how much memory, how much storage they need — and they define their requirements in those terms, and we will deliver those instance capacity blocks to them in their data centers. So behind the scenes it's built with best-in-class, uh, you know, hardware from VxRail and best-in-class software from VMware, but it's really delivered in terms of instance capacity blocks. The second interesting thing that I want to share with you, and I've referenced it a few times, is Dell Technologies Cloud Console. We're building this single pane of glass to manage our customers' entire journey from on-premises to multi-cloud, hybrid cloud, with consistency of how you can discover services, how you can order services, and how you can grow your managed footprint.
So those are a couple of things, from a Dell Technologies standpoint, that we're really excited to share with people. >>Well, congratulations. I know you've been busting your tail for quite a while on these types of projects, and it's nice to be able to finally release them out to the world. >>Well, it's just my pleasure. All right. Thank you very much. >>Well, thank you for stopping by again. Congratulations. And we'll continue the ongoing coverage of Dell Technologies World 2020, the digital experience. I'm Jeff Frick, he's Dipak Prasad, you're watching theCUBE. See you next time. Thanks for watching.
Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud
>>All right, we're five minutes after the hour, so all aboard who's coming aboard. Welcome, everyone, to the tutorial track for our Launchpad event. So for the next couple of hours we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks, I run curriculum development for Mirantis. And >>I'm Bruce Basil Matthews. I'm the Western regional solutions architect for Mirantis, and welcome, everyone, to this lovely Launchpad event. >>We're lucky to have you with us, Bruce. At least somebody on the call knows something about Docker Enterprise Container Cloud. Um, speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess, for Docker Enterprise Container Cloud. This is Mirantis's brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >>No, just that I think that we're trying to, uh — let's see, hold on — I think that we're trying to give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some, you know, mini training and education in a very condensed period. >>Yeah, that's exactly what you're going to see in the series of videos we have today. We're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today you're gonna be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So just a little bit of logistics for the session: we're going to run through these tutorials twice. We're gonna do one run-through, starting seven minutes ago, up until, I guess it will be ten fifteen Pacific time, then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, at ten fifteen Pacific time we're gonna do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues that, you know, you wanna pull in for a second chance to see this stuff, we're gonna do it all twice. Yeah, this session — any logistics I should add, Bruce? >>No, I think that's pretty much what we had to nail down here. But let's zoom dash into those, uh, feature films. >>Let's do it. And like I said, don't be shy — feel free to ask questions in the chat; our engineers, and Bruce and myself, are standing by to answer your questions. So let me just tee up the first video here and we'll walk through it. And here we go. So our first video here is gonna be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as, like, your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use as, like, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there. >>Now, we have to give this a brief little pause. >>Good day.
The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture — provided in this case by AWS — and the LCM components, and the UCP cluster, the child cluster, is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. The third phase: prepare the AWS credentials and set up the AWS environment. The fourth: configuring the deployment, defining things like the machine types. And the fifth phase: run the bootstrap script and wait for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go — no credentials already set up on that particular node. Now we're just checking through AWS to make sure that for the account we want to use we have the correct credentials and the correct roles set up, and validating that there are no instances currently set up in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right, so next step, we're just going to check that we can, from the bootstrap node, reach Mirantis — get to the repositories where the various components of the system are available. There we go, no errors here. Right, now we're going to start setting up the bootstrap node itself. So we're downloading the KaaS release — the get-KaaS script — and then next we're going to run it and deploy it, changing into that bootstrap folder, just having a look to see what's there. Right now we have no license file, so we're gonna get the license file. Okay — we get the license file through the Mirantis downloads site, signing up here, downloading that license file and putting it into the KaaS bootstrap folder. Okay, once we've done that, we can now go ahead with the rest of the deployment. See that the file is there. Uh-huh. We're again checking that we can now reach EC2, which is extremely important for the deployment — just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials. So the first thing is, we need those root credentials, which we're going to export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running an AWS policy create — as part of that it's creating our bootstrap user, creating the necessary policy files on top of AWS, just generally preparing the environment, using a CloudFormation script, as you'll see in a second. We'll get a new policy confirmation; just waiting for it to complete. Yeah, and there, it's done. We're gonna have a look at the AWS console — you can see that our CloudFormation has completed. Now we can go and get the credentials that we created. In the IAM console, go to that new user that's been created, go to the section on security credentials, create new keys, and download that information — the access key ID and the secret access key — which we'll then export on the command line. Okay, a couple of things to note:
ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region — you'll see we have it together in a second. Okay, that's the key and the secret access key. Right, let's kick it off. So this process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process happening in the background is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down — essentially moving itself over. Okay, the local cluster's been built; just waiting for the various objects to get ready — standard Kubernetes objects here. Okay, so we'll speed up this process a little bit just for demonstration purposes. There we go. So the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances here in the AWS console on the right. Um, the failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay. Yes, there's the bastion. Okay. And there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes; we're now copying everything over. See that — the scaling up of controllers in the bootstrap cluster? It's indicating that we're starting all of the controllers in the new cluster. Almost there. Yeah, just waiting for Keycloak, uh-huh, to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight into the new cluster — that's the monitoring tool set. There we go, the StackLight deployment has started. We're coming to the end of the deployment now — the final phase of the deployment — and we are done. Okay, you'll see at the end they're providing us the details for the UI login, so there's a Keycloak login. You can modify that initial default password; it's part of the configuration set up, within the documentation. There we go, the console's up and we can log in. Yeah, thank you very much for watching. >>Excellent. So in that video our wonderful field CTO Sean O'Mara bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? So now we've got this management cluster installed — like, what's next? >>So primarily it's the foundation for being able to deploy either regional clusters that will then allow you to support child clusters, or — coming into play in the next piece of what we're going to show, I think, with Sean O'Mara doing this — the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the, ah, management cluster that we just created with the bootstrap. >>Right. So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters — those are what we're gonna use for workloads. >>Exactly. Yeah.
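To make that concrete: once the bootstrap completes, it writes out a kubeconfig for the new management cluster, and you can inspect the cluster with standard kubectl. A minimal sketch, assuming the kubeconfig is a file called `kubeconfig` in the bootstrap working folder (the actual name and path may differ in your environment):

```bash
# Point kubectl at the kubeconfig produced by the bootstrap
# (file name and location are assumptions based on the demo narration).
export KUBECONFIG=~/kaas-bootstrap/kubeconfig

# Expect three manager nodes and no workers -- this cluster only hosts
# the Container Cloud management components, not application workloads.
kubectl get nodes -o wide

# A quick health check: the management components run as ordinary pods,
# so they should all be Running or Completed shortly after bootstrap.
kubectl get pods --all-namespaces
```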
And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions — I could listen to that guy read the phone book and it would be interesting — but anyway, you can tell him I said that. >>He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrapper node that you, like, ran Docker Enterprise Container Cloud from to begin with — that's actually creating a kind Kubernetes deployment, a Kubernetes and Docker deployment, locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes, like, a three-manager Kubernetes cluster there, and then it, like, copies itself over to those Kubernetes managers. >>Yeah, and that's sort of where the transition happens. You can actually see it in the output, when it says "I'm pivoting" — I'm pivoting from my local kind deployment of the cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or inside of bare metal or inside of whatever the target is. The targeting is, uh, abstracted. Yeah, but >>those are the three environments that we're looking at right now, right — AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need that afterwards, yeah — that is just temporary, to get things bootstrapped, and then you manage things from the management cluster on AWS in this example? >>Yeah, yeah. The seed, uh, cloud that does the bootstrap is not required anymore, and there's no, uh, interplay between them after that. So there are no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's, uh, sort of a similar temporary, transient bootstrapping model. Cool. Excellent. What did we have to configure there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that, but other than that, it looked heavily scriptable — like there wasn't a ton of point and click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted towards a small, medium, or large, um, deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, um, it makes it fairly easy to script this. >>And if I understood correctly as well, that three-manager footprint for your management cluster — that's the minimum, right? We always insist on high availability for this management cluster, because, boy, you do not wanna lose it. >>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer. >>No, and I think that's a theme that will come back to throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults — just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on.
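Since the flow is, as Bruce says, heavily scriptable, here is the sequence Sean walked through, condensed into a rough shell sketch. The script names, folder names, and variables below are assumptions reconstructed from the narration (the downloader script, the kaas-bootstrap working folder, the mirantis.lic license file, the aws_policy step), not a verbatim copy of the product docs, so check the documentation linked in the chat for the exact spelling in your release.

```bash
# 1. Fetch the bootstrap tooling onto a clean bootstrap node.
#    Script name and URL are assumptions based on the demo narration.
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod +x get_container_cloud.sh
./get_container_cloud.sh          # unpacks a kaas-bootstrap/ working folder
cd kaas-bootstrap

# 2. Drop in the license file downloaded from the Mirantis portal.
cp ~/Downloads/mirantis.lic .

# 3. Export the root AWS credentials and create the dedicated bootstrap
#    IAM user and policies (the CloudFormation step shown in the demo).
export AWS_ACCESS_KEY_ID=<root-access-key>
export AWS_SECRET_ACCESS_KEY=<root-secret-key>
./bootstrap.sh aws_policy

# 4. Switch to the new bootstrap user's keys, pick the region, and set a
#    matching AMI in the machine templates (templates/aws in the demo).
export AWS_ACCESS_KEY_ID=<bootstrap-user-access-key>
export AWS_SECRET_ACCESS_KEY=<bootstrap-user-secret-key>
export AWS_DEFAULT_REGION=us-west-2

# 5. Kick off the full bootstrap; expect roughly 30-45 minutes end to end.
./bootstrap.sh all
```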
Any interesting questions you want to call out from the chat, Bruce? >>Well, there was — yeah, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that can then deploy either the regional cluster or a local child cluster. The child clusters, in each case, host the application services. >>Right. So at this point we've got, in some sense, like, the simplest architecture for our Docker Enterprise Container Cloud. We've got the management cluster, and we're gonna go straight to a child cluster. In the next video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, if you need to manage regions, like across AWS regions, across availability zones, that sort of thing. >>Yeah, that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them. >>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves — I strongly encourage you to do so. That should be in the docs, which I think Dale helpfully — thank you, Dale — provided links for. That's all publicly available right now, so just head on into the docs, like the ones Dale provided here. You can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials. There was a question from one attendee here about deploying this to Azure — not at GA, not at this time. >>Yeah, although that is coming. That's going to be in a very near-term release. >>I didn't wanna make promises for product, but I'm not too surprised that Azure's gonna be targeted. Very promising. Cool. Okay. Any other thoughts on this one, Bruce? >>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step by step, um, it makes it fairly straightforward to try this yourselves. >>I strongly encourage that, right — that's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in their mind of, like, where we are in the life cycle here, creating this management cluster. Just stop me if I'm wrong: creating this management cluster is like — you do that once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to start seeing next is creating child clusters, and this is what you're gonna be doing over and over and over again, when you need to create a cluster for this dev team or, you know, this other team, whoever it is that needs commodity Docker Enterprise clusters — you create these easily, at will. So this was once, to set up Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster under Docker Enterprise Container Cloud. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster
when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI so you can see it. You can switch projects — Mary only has access to Development. You get a list of the available projects that you have access to, and what clusters have been deployed — at the moment there are none. There are the SSH keys associated with Mary and her team, and the cloud credentials that allow you to create and access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste the public key into the upload key block, or upload the key if we have the file available on our local machine. A simple process. So, to create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Yeah, again very simply: you go to the Clusters tab, hit the Create Cluster button, give the cluster a name, and select the provider — we only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1; release version 5.7 is the current release; and attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information — we can change this should we wish to; we'll leave it default for now — and then which StackLight components I would like to deploy into my cluster. For this I'm enabling StackLight with logging, and I can set up the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. Consider email alerting, for which I will need my smart host details and authentication details, and Slack alerts. Now I've defined the cluster. All that's happened is the cluster's been defined; I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. I select Manager, select the number of machines — three is the minimum — select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go: my three machines are obviously creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting Worker. I'll just add two. Once again, the AMI is extremely important — it will fail if we don't pick the right AMI, for an Ubuntu machine in this case — and the deployment has started. We can go and check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. So in the basic cluster info you'll see "pending" — the cluster is still in the process of being built. If we click on the events, we'll get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary NAT gateways, and we have no warnings at this stage. Yeah, this will then run for a while — we're one minute in. We can click through and check the status of the machine builds individually, so we can check the machine info, the details of the machines that we've assigned, and
see any events pertaining to the machine — areas like this one are normal: the Kubernetes components are waiting for the machines to start. Go back to Clusters. Okay, right, we're moving ahead now. We can see we have it in progress — five minutes in, new NAT gateway at this stage, the machines have been built and assigned, and the IPs are up. Yeah, there we go, the machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit — this whole process, end to end, takes about fifteen minutes. Run the clock forward and you'll notice, as the machines continue to build, they go from In Progress to Ready. As soon as we've got Ready on all three managers and both workers, we can go on, and we can see that now we've reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. So once the cluster is deployed, we can now navigate around our environment. Okay, clicking into Configure Cluster, we can modify the cluster, and we can get the endpoints for Alertmanager. See here, the Grafana, Kibana, and Prometheus endpoints are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it — it's again the three little dots on the right for that particular cluster: hit Download Kubeconfig, give it my password, and I now have the kubeconfig file necessary so that I can access that cluster. All right, now that the build is fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the "Sign in with Keycloak" button to use the SSO, and we give Mary's username and password once again. This is an unlicensed cluster — we could license it at this point, or just skip it. And there we have the UCP dashboard; you can see that it has been up for a little while and we have some data on the dashboard. Going back to the console, we can now go to the Grafana dashboards, which have been automatically preconfigured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster — so, for example, Kubernetes cluster information, the namespaces, deployments, nodes. So if we look at nodes, we can get a view of the resource utilization of this cluster — there's very little running in it — and a general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster and add a node is as simple as the process of adding a node to the cluster in the first place. We go to the cluster, go into the details for the cluster, select Create Machine — once again, we need to ensure that we put the correct AMI in, and any other options we like; you can create different-sized machines, so it could be a larger node, could be bigger disks — and you'll see that a worker has been added, going from the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, we select the node we would like to remove,
and okay, I just hit Delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the Update button will become available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to — in this case the next available release is 5.7.1. Here I'm kicking off the update, and in the background we will cordon and drain each node, slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go — the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt — in fact two in this case, one has completed already — and in a few minutes we'll see that the upgrade has been completed. There we go, upgrade done. Yeah — if your workloads are built using proper cloud-native community standards, there will be no impact. >>Excellent. So at this point we've now got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to that cluster. So, watching that video, the thing that jumped out to me at first was, like, the inputs that go into defining this workload cluster, right? So we have to make sure we're using an appropriate AMI — that kind of defines the substrate that we're gonna be deploying our cluster on top of — but there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is gonna bootstrap all the components that you need. So all we have is kind of a really simple base box that we're deploying these things on top of. So one thing that didn't get dug into too much in the video, but is just sort of implied — Bruce, maybe you can comment on this — is that release that Sean had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch when we wanted to upgrade the cluster. So if you had really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components — Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into, uh, one of these commodity clusters that we deploy. And so, as far as I can tell, in that case that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that, you know, we've tested out and made sure work well in production environments. >>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that there are fixes, that they're documented and upstreamed to the open source community, um, and that, you know, we then test for the scaling ability and the reliability in high-availability configurations for the clusters themselves — the hosts of your containers, right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, "hey, we've got an update for you, and it fixes something that maybe you had asked us to fix." Uh, that all comes to you online as you're managing your clusters, so you don't have to think about it — it just comes as part of the product. >>You just have to click on "yes, please give me that update." And it's not just the individual components, but again,
it's that validated stack, right? Not just, you know, components X, Y, and Z work, but they all work together effectively, scalably, securely, reliably. Cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, it bootstrapped good old Universal Control Plane, Docker Enterprise, on top of that. Sean had the classic comment there, you know — yeah, you'll see little warnings and errors or whatever when you're setting up UCP; don't panic, right, just let it do its job, and it will converge all its components, you know, after just a minute or two. We saw in that video — we sped things up a little bit there, we didn't wait for, you know, the progress spinners to complete — but really, in real life, that whole process isn't that long, so spinning up one of those clusters is quite quick. >>Yeah, and I think the thoroughness with which it goes through its process, and retries and retries, uh, as you know — and it was evident when we went through the initial, ah, video of the bootstrapping as well — the processes themselves are self-healing as they are going through. So they will try and retry and wait for the event to complete properly, and once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down — don't do that. Just let it heal, let it take care of itself. And that's the beauty of these managed solutions: they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools for years and years now, working with Kubernetes. One cool thing there that I really appreciate, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. So Docker Enterprise, I think everyone knows, has had, like, some very high-level statistics baked into its dashboard for years and years now, but, you know, our customers always wanted to double-click on that, right, to be able to go a little bit deeper, and Grafana really addresses that with its built-in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that, which you have access to, so you're allowed to add new alerts that then go out to, say, Slack and say, "hi, you need to watch your disk space on this machine," or those kinds of things. Um, and this is especially helpful for folks who, you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set where they can easily say, "here, can you watch these for us?" — and Mirantis can actually help do that with you. >>Yeah, yeah. I mean, that's just another example of baking in that expert knowledge, right, so you can leverage that without a long runway of learning how to do that sort of thing — you get it out of the box right away. There was the other thing, actually, that you could sleep through really quickly if you weren't paying close attention, but Sean mentioned it in the video, and that was how, when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just, like, tear the worker down and forget about it, right?
It's using good Kubernetes best practices to cordon and drain the node, so you aren't gonna disrupt your workloads — you're not going to just have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; that's baked right into how Docker Enterprise Container Cloud handles cluster scale. >>Right. And the Kubernetes, uh, scaling methodology is adhered to, with all of the proper techniques that ensure that it will tell you: wait, you've got a container that actually needs three, uh, three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two, and we can't do that — we can't allow that. >>Okay, very cool. Further thoughts on this video, or should we go to the questions? >>Let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos we're clicking through this web UI — yes, this is all API-driven. You could do all of this, you know, automate all this away as part of your CI/CD chain. Absolutely. Um, that's kind of the point, right? We want you to be able to spin up — I keep calling them commodity clusters; what I mean by that is clusters that you can create and throw away, you know, easily and automatically. So everything you see in these demos is exposed via API. >>Yeah. In addition, through the standard kubectl, uh, CLI as well. So if you're not a programmer but you still want to do some scripting to, you know, set up things and deploy your applications, you can use the standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments can this kind of support? Our engineers report back here that we've done, in practice, up to as many as, like, two hundred clusters, and we've deployed on this with two hundred fifty nodes in a cluster. So we're, you know — like I said, hundreds of nodes, hundreds of clusters managed by Docker Enterprise Container Cloud, and then those downstream clusters are, of course, subject to the usual constraints for Kubernetes, right — like default constraints of something like one hundred pods per node, or something like that. There are a few different limitations on how many pods you can run on a given cluster that come to us not from Docker Enterprise Container Cloud, but just from the underlying Kubernetes distribution. >>Yeah. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure delivery, uh, service within the Kubernetes framework. So we are, you know — but we are, uh, adhering to the standards that we would want to set, to make sure that we're not overloading a node, or those kinds of things. >>Right. Absolutely cool. All right. So at this point we've got kind of a two-layered architecture, where we have our management cluster that we deployed in the first video, then we used that to deploy one child cluster for workloads. Uh, for more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're gonna add another layer into our architecture: we're gonna add in regional cluster management. So this idea is you're gonna have the single management cluster that we started with in the first video.
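Coming back to the scaling point for a moment: what Container Cloud automates when a worker is removed is the standard Kubernetes cordon-and-drain pattern described above. A minimal sketch of the same steps done by hand with kubectl, just to make "carefully manage the migration of workloads" concrete — the node name here is hypothetical:

```bash
# Mark the node unschedulable so no new pods land on it.
kubectl cordon worker-node-3

# Evict the existing pods; PodDisruptionBudgets are honored, so replicated
# workloads keep their minimum available count while they're rescheduled.
kubectl drain worker-node-3 --ignore-daemonsets --delete-emptydir-data

# Only once the drain has finished is the node actually removed --
# the step Container Cloud performs for you when you delete the machine.
kubectl delete node worker-node-3
```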
On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. 
Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. 
We could get >>it locally, yeah. And to that point of aggregating things under a single pane of glass: that's one thing that again kind of sailed by in the demo really quickly, but you'll notice all your different clusters were on that same pane in your Docker Enterprise Container Cloud management console, right? So both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place there. So it's just one pane of glass to go look at for all of your clusters, >>right? And, uh, this is kind of an important point I was realizing as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster for the original services and the bootstrapped cluster for the regional services. It's the management layer for everything, so you only have managers, you don't have workers; and it's at the child cluster layer, below the regional or the management cluster itself, that you have the worker nodes. And those are the ones that host the application services in that three-tiered architecture that we've now defined, >>and another, you know, detail for those that have sharp eyes: in that video, you'll notice that when deploying a child cluster, there's not only a minimum of three managers for a high-availability management plane; you must also have at least two workers. That's just required for workload failover: if one of those workers goes down or is out of service, the other can potentially step in. So your minimum footprint for one of these child clusters is five nodes, and it's scalable, obviously, from there. >>That's right. >>Let's take a quick peek at the questions here, see if there's anything we want to call out, then we'll move on to our last video, my last video. There's another question here about, like, where these clusters can live. So again, I know these examples are very AWS heavy. Honestly, it's just easy to set up demos on AWS. We can do things on bare metal and with OpenStack deployments on-prem, and all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the child clusters, is the provisioners, right? You establish an AWS provisioner, or you establish a bare metal provisioner, or you establish an OpenStack provisioner, and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking of all those child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things and ensuring we don't need to create the complexity of adding another hypervisor layer in between. So, continuing on the theme, why Kubernetes and bare metal? Again, hypervisor overhead: well, there is no virtualization overhead.
Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. 
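The host registration the demo just walked through (a name, the IPMI username and password, the IPMI address, the boot MAC, and a machine-type label) maps onto the same information a Metal3-style BareMetalHost object carries. The demo only shows the web form, so treat the object below purely as an illustration of the shape of that data, with placeholder addresses and a hypothetical secret name, not as the product's actual API.

    # Placeholder BMC credentials for the host
    apiVersion: v1
    kind: Secret
    metadata:
      name: bm-worker-01-bmc
    type: Opaque
    stringData:
      username: admin
      password: replace-me
    ---
    # Metal3-style host definition: IPMI endpoint, boot MAC, and a role label
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: bm-worker-01
      labels:
        machine-type: worker          # manager / worker / storage, as in the demo
    spec:
      online: true
      bootMACAddress: "52:54:00:ab:cd:01"
      bmc:
        address: ipmi://192.168.100.21
        credentialsName: bm-worker-01-bmc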
So, as you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess maybe the biggest difference, stepwise, is that registration phase first, right? So rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles, right? You've got your manager profile, a worker profile, and a storage profile, and each host gets labeled so it's allocated across the cluster appropriately, >>right? And I think that the, you know, the key differentiator here is that you have more physical control over the attributes (that's a lovely cat, by the way) of a physical server. So you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, that the GPUs are on the worker nodes, and that the management layer is going to have sufficient horsepower to, um, spin up and scale up the environments as required. One of the things I wanted to mention, though, if I can get this out without choking, is that he mentioned the load balancer, and I wanted to make sure, in defining the load balancer and the load balancer ranges, that it's clear that is for the top of the cluster itself. That's the operations of the management layer integrating with your systems internally, to be able to access the kubeconfigs and the IP addresses in a centralized way. It's not the load balancer that's working within the Kubernetes cluster that you are deploying; that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting step, your initial step in building this: we typically use things like MetalLB or NGINX or that kind of thing to establish that before we deploy this bare metal cluster, so that it can ride on top of it for the VIPs and things. >>Very cool. So any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management cluster, to our regional clusters, to our child clusters on AWS and bare metal; and, of course, OpenStack is also available. Closing thoughts before we take just a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you. I'm really looking forward to doing it the second time, because we've got a good rhythm going with this kind of thing, so I'm looking forward to that. But I think the key element of what we're trying to convey to the folks out there in the audience, and what I hope you've gotten out of it, is that this is an easy enough process that if you follow the step-by-step instructions in the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancy use cases, like, uh, you need Hadoop, or, you know, cloud-oriented AI stuff, then providing a bare metal service helps you to get there very fast. So, right, thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out.
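Bruce's note about standing up something like MetalLB before the bare metal cluster deploys, so the management endpoints have virtual IPs to ride on, can be pictured with a minimal Layer 2 address pool. This is only a sketch: the address range is made up, and depending on the MetalLB version in use you may be working with the older ConfigMap format rather than the CRDs shown here.

    # Assumes MetalLB is already installed in the metallb-system namespace.
    # The address range is a placeholder on the management network.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: mgmt-vip-pool
      namespace: metallb-system
    spec:
      addresses:
      - 10.0.10.100-10.0.10.120
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: mgmt-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
      - mgmt-vip-pool
    EOF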
So, like I said, we're going to take a very short, like, three-minute break here. Uh, take the opportunity to let your colleagues know, if they were in another session or they didn't quite make it to the beginning of this session, or if you just want to see these demos again: we're going to kick off this demo series again in just three minutes, at ten twenty-five a.m. Pacific time, where we will see all this great stuff again. Let's take a three-minute break. I'll see you all back here in just a couple of minutes. Okay, folks, that's the end of our extremely short break. We'll give people just maybe one more minute to trickle in, if folks are interested in coming on in and jumping into our demo series again. So, for those of you that are just joining us now, I'm Bill Mills. I head up curriculum development for the training team here at Mirantis, and joining me for this session of demos is Bruce. Why don't you go ahead and introduce yourself, Bruce... who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay? >>Very well. So let's kick off our second session here. I'll just adjust the feed for you and let it run over here. >>Alright. Hi, Bruce Matthews here. I'm the Western Regional Solutions Architect for Mirantis in the US. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a series of demos that's gonna walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis's brand new offering for bootstrapping Kubernetes clusters on AWS, bare metal, and OpenStack, with more providers in the very near future. So we've got, you know, just over an hour left together in this session. If you joined us at the top of the hour, back at nine a.m. Pacific, we went through these demos once already; let's do them again for everyone else that was only able to jump in right now. Let's go to our first video, where we're gonna install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership that's going to spin up all the other Kubernetes clusters, the Docker Enterprise clusters that we're gonna run our workloads on. So I'm gonna do... >>I'm so excited. I can hardly wait. >>Let's do it. All right, time to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The UCP cluster, the child cluster, is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. The third phase: prepare the AWS credentials and set up the AWS environment. The fourth: configuring the deployment, defining things like the machine types. And the fifth phase: run the bootstrap script and wait for the deployment to complete. Okay, so here we're setting up the bootstrap node.
Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. 
So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. 
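For readers following along in the docs linked in the chat, the whole management cluster bootstrap the video narrates condenses to a short shell session on the seed node. Everything below is a hedged sketch: the download URL, the script and subcommand names, and the variable names follow the demo narration and may differ between releases, so use the step-by-step instructions rather than copying this verbatim.

    # 1. Fetch and unpack the bootstrap tooling on the seed node (URL is a placeholder)
    wget https://binary.mirantis.com/releases/get_container_cloud.sh
    chmod +x get_container_cloud.sh && ./get_container_cloud.sh
    cd kaas-bootstrap

    # 2. Drop in the license file downloaded from the Mirantis portal
    cp ~/Downloads/mirantis.lic .

    # 3. Credentials for the AWS bootstrap user created by the CloudFormation step
    export AWS_ACCESS_KEY_ID=AKIA...            # placeholder
    export AWS_SECRET_ACCESS_KEY=...            # placeholder
    export KAAS_AWS_ENABLED=true

    # 4. Optionally adjust the machine defaults (AMI for your region, instance
    #    type, root disk size) in the shipped template before launching
    vi templates/aws/machines.yaml.template

    # 5. One-time IAM policy setup, then the full bootstrap: a local kind cluster
    #    is built, it provisions the three managers in EC2, then pivots itself
    #    into the new cluster and the local copy is torn down
    ./bootstrap.sh aws_policy      # subcommand name as narrated; verify in the docs
    ./bootstrap.sh all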
>>One thing that people often ask about is the cluster footprint. And so, in that example you saw, it was spinning up a three-manager management cluster, and that's mandatory, right? There's no single-manager setup at all; we want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage that we're at right now: that's the very first thing you're going to do to set up Docker Enterprise Container Cloud, and you're going to do it, hopefully, exactly once. Right now you've got your management cluster running, and you're gonna use that to spin up all your other workload clusters day to day, as needed. How about we just have a quick look at the questions, and then let's take a look at spinning up some of those child clusters. >>Okay, I think they've actually all been answered. >>Yeah, for the most part. One thing I'll point out that came up again, and that Dale helpfully pointed out earlier and has pointed out again, is that if you want to try any of this stuff yourself, it's all in the docs. So have a look at the chat; there are links to instructions, step-by-step instructions, to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalize and communicate these ideas. So after Launchpad today, please give this stuff a try on your own machines. Okay, so at this point, like I said, we've got our management cluster. We're not gonna run workloads there; we're going to use it to start creating child clusters, and that's where all of our workloads are gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Sean's voice. >>Couldn't you listen to it all day? >>Yeah, I'd watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create a first child workload cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects; Mary only has access to Development. You can get a list of the available projects that you have access to, see what clusters have been deployed at the moment (there are none), the SSH keys associated with Mary and her team, the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: we add an SSH key, give it a name, and copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our machine. A very simple process. So, to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Yeah, again, very simply: we go to the Clusters tab, we hit the Create Cluster button, give the cluster a name, and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key.
We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. 
It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. 
Just leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches. They resolve themselves and leave you with a functioning workload cluster within minutes. >>And now, if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development team A over here? Um, it literally would take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud, but you could do exactly the same thing with resources on-prem, with physical resources, and we will be going through that later in the process. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. So when Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. And the video didn't really explain: what does that mean? Well, in Docker Enterprise Container Cloud, we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes-adopting enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale, test it for CVEs, test it for all of the various things that would, you know, result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments and things like that, if you're the one doing it yourself, it can get rather messy. So this makes it easy. >>Bruce, you were saying a second ago, yeah, it'll take you at least fifteen minutes to install your release cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decision about which components work well together and are best tested to be successful working together as a stack. Absolutely. With this release mechanism in Docker Enterprise Container Cloud, we just kind of package up that expert knowledge and make it available in a really straightforward fashion: these pre-configured release numbers. And, Bruce, as you were pointing out earlier, it also delivers us updates in a kind of transparent way. When Sean wanted to update that cluster, a little Update Cluster button appeared when an update was available. All you've gotta do is click it. It tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a little header at the top of the screen that says you've got an update available, do you want me to apply it? So... >>Absolutely. Another couple of cool things that I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. So we've had Prometheus metrics in Docker Enterprise for years and years now, but they're very high level, at least in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think, is a great value there.
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us. Yeah. >>That was really, uh, you know, the joining of the Mirantis and Docker teams together actually enabled us to take the best of what Mirantis had in the OpenStack environment for monitoring and logging and alerting, and to do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >>Mmm. One other thing I want to point out about that demo, because I think there were some questions about it our last go-around, is that that demo was all about creating a managed workload cluster. So Docker Enterprise Container Cloud was using those AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, install Docker Enterprise, all of that stuff, on top of those fresh new VMs created and managed by Docker Enterprise Container Cloud. Nothing unique about AWS there; it'll do that on OpenStack and on bare metal as well. Um, there's another flavor here, though, and a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years and years. If you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use Docker Enterprise Container Cloud to manage those pre-existing workload clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud; plugging in external clusters is fine. >>Yep, the kubeconfig elements of the UCP environment, the bundling capability, actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. So it makes it very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. More thoughts on this, or should we jump on to the next video? >>I think we should press on; >>time marches on here. So let's carry on. So just to recap where we are right now: in the first video, we created a management cluster. That's what we're gonna use to create all our downstream workload clusters, which is what we did in this video. This is maybe the simplest of the architectures, because it's doing everything in one region on AWS. A pretty common use case, though, is that we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and workload cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, a regional management cluster that exists per region, and those regional managers will then be the ones responsible for spinning up child clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of an additional regional management cluster. We will include a brief architectural overview, how to set up the management environment, prepare for the deployment, a deployment overview, and then, just to prove it, deploy a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory, and release version
ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. 
Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. 
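Before the bare metal video, here is a quick recap sketch of the regional bootstrap just narrated. It reuses the same kaas-bootstrap directory; the main differences are that it points at the management cluster's kubeconfig and names the new region and cluster. The variable and subcommand names below are inferred from the narration rather than copied from the product docs, so treat them as assumptions.

    cd kaas-bootstrap
    export KUBECONFIG=~/kaas-bootstrap/kubeconfig       # the management cluster created earlier
    export AWS_ACCESS_KEY_ID=AKIA...                    # placeholder bootstrap credentials
    export AWS_SECRET_ACCESS_KEY=...
    export KAAS_AWS_ENABLED=true
    export AWS_DEFAULT_REGION=eu-central-1              # Frankfurt, as in the demo
    export REGIONAL_CLUSTER_NAME=kaas-region-frankfurt  # something descriptive, easy to identify
    # Check that the AMI in templates/aws/machines.yaml.template matches the new
    # region, then run the bootstrap; the regional pass is quicker than the
    # initial one because fewer components are deployed.
    ./bootstrap.sh deploy_regional                      # subcommand name is an assumption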
>>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? 
Yeah, then we add workers the same way. It's the same process, just making sure that the worker label and host profile are right, and then we wait for the machines to deploy. We go through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about the specifics of things like storage, and of course the details of the cluster, etcetera. We now watch the machines go through the various stages, from prepared, to deploy, to the cluster build, and that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>Here we have it: a child cluster on bare metal, for folks that wanted to play with this stuff on-prem. >>It's been an interesting journey, taken from the mothership: we started out building a management cluster, then populating it with a child cluster, then creating a regional cluster to spread the management of our clusters geographically, and finally providing a platform for supporting, you know, AI needs and big data needs. Uh, you know, thank goodness we're now able to put things like Hadoop on bare metal, in containers. It's pretty exciting. >>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes Kubernetes clusters, Docker Enterprise clusters that can be spun up and used quickly, taking provisioning times, you know, from however many months it used to take to get new clusters spun up for your teams down to minutes. We saw those clusters get spun up in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis's products. If you'd like to learn more, or if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we offer workshops, in a number of different formats, on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of Launchpad. >>Thank you all. Enjoy.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mary | PERSON | 0.99+ |
Sean | PERSON | 0.99+ |
Sean O'Mara | PERSON | 0.99+ |
Bruce | PERSON | 0.99+ |
Frankfurt | LOCATION | 0.99+ |
three machines | QUANTITY | 0.99+ |
Bill Milks | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
first video | QUANTITY | 0.99+ |
second phase | QUANTITY | 0.99+ |
Shawn | PERSON | 0.99+ |
first phase | QUANTITY | 0.99+ |
Three | QUANTITY | 0.99+ |
Two minutes | QUANTITY | 0.99+ |
three managers | QUANTITY | 0.99+ |
fifth phase | QUANTITY | 0.99+ |
Clark | PERSON | 0.99+ |
Bill Mills | PERSON | 0.99+ |
Dale | PERSON | 0.99+ |
Five minutes | QUANTITY | 0.99+ |
Nan | PERSON | 0.99+ |
second session | QUANTITY | 0.99+ |
Third phase | QUANTITY | 0.99+ |
Seymour | PERSON | 0.99+ |
Bruce Basil Matthews | PERSON | 0.99+ |
Moran Tous | PERSON | 0.99+ |
five minutes | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Nelson Hsu, Dell EMC | CUBEConversation, November 2019
>> Narrator: From the SiliconANGLE Media office in Boston, Massachusetts, it's the Cube. Now here's your host, Stu Miniman.
>> Hi, and welcome to a special Cube Conversation here in our Boston area studio. I am Stu Miniman, and we're digging in with Dell EMC on data protection in the multi-cloud era. Happy to welcome to the program first-time guest Nelson Hsu, who is the director of solutions marketing with Dell EMC. Nelson, great to see you.
>> Great to be here. Thank you, Stu.
>> All right, so you and I were both at KubeCon + CloudNativeCon with about 12,000 of our friends in the open-source community down in San Diego, California. When we bring this up, it's probably not the first place that people think of when they think of Dell EMC, so explain a little bit what the team was doing, the announcements there, and what you're seeing at the show.
>> Sure, no, I appreciate that. It was a first time for Dell Technologies; it was kind of our coming-out party, if you will, into the cloud native realm. We've got a tremendous amount of momentum, especially around Kubernetes, between what we've done in the data protection space with our PowerProtect software for Kubernetes, what we've done in our storage realm, and the work that we've done around container storage interfaces. So a lot of that was coming out and being introduced to the KubeCon and CloudNativeCon attendees. I think it was really good timing.
>> Yeah, Nelson, we've been watching the role of the developers; the discussion of DevOps of course is central to what's happening not only at KubeCon but at many of the cloud shows. I know at VMworld you see what's happening with the VMware Code team, so explain how the Dell Technologies Cloud partnership with VMware and all of that pulls together for the activities that your organization is doing within DevOps.
>> Well, you know, you're right. It's all about DevOps; it's about the developers; it's about the new world of bringing cloud native applications and driving them into the production environment. I think we heard that at VMworld with Pat Gelsinger, where his pillars of build, run, protect, connect are key aspects. So if you look at that, the protect component falls right into that area, because with the growth of data as we're seeing it today, the need to manage that in the cloud native realm becomes even more prevalent and important. You know, we've seen DevOps mature over the last couple of years: we had 8,000 people in Seattle, and now we had 12,500 of our best friends in San Diego, right? I'm sure you saw that.
>> Yeah, absolutely, huge growth there. And I'm glad you brought up the protect piece, because when I think about developers, we want to reduce the friction for developers to be able to build their apps. You think about DevOps as keeping agility going, but where is the data, and how do I make sure that, when we go to a cloud world, we still think about security, we still think about data management and data protection there? So explain for the audience how that protect piece fits into the DevOps world.
>> Well, first we should clarify a little bit, because over the last two years everything's been about security within containers, right? And that's great, because you're protecting the applications and people are worried about penetration there, and it's been fantastic. And I think that today, specifically around the aspect of securing the application, now securing the infrastructure is key. Storage has become a very, very relevant topic, with things like persistent volumes taking center stage when it comes to cloud native apps moving into production, because it's about protecting those mission-critical workloads. And as you just stated, you have your applications, but at the end of the day your data really is the capital, and that's what you really need to focus on. It becomes of greater and greater importance when you have that holistic discussion about DevOps. And so now we have the aspect of the Kubernetes administrator meets the IT administrator, and having to protect through this application transformation that's being driven by cloud native complexity, which traditionally was disaggregated from the infrastructure. But now, as you mature and you look at those production and mission-critical environments, you really have to pay attention to how am I going to protect my data, from edge to core to cloud, in that cloud native world.
>> Yeah, definitely one of those areas we found at the conference: for many it's a steep learning curve to try to understand Kubernetes and all these cloud native architectures if you come in with a traditional infrastructure role. It was actually something we discussed more a couple of years ago, some of the basic blocking and tackling of networking and storage inside of a container environment. But now a lot of the discussion is around that application development, and therefore we need to make sure that not only the app dev team but the infrastructure team all understand how everything goes together, and protection, of course, is a critical piece there.
>> Oh, absolutely. And if we look at all the different projects that are underway under CNCF, I mean, it's fantastic; there's so much momentum. Everyone's now also looking at that infrastructure. Last year was all about the service mesh, so I think that we're at that inflection point, and now it's going to be a lot about the storage and protecting that storage. If you look at Project Velero, Velero started as an open source project under CNCF, driven by the work that was done by the team at Heptio, by Joe Beda and Craig McLuckie, and the work that they had done on what started out as Ark. Well, now Dell EMC, and specifically the data protection team, is working and contributing hand in hand with the VMware team on Velero, and I think you'll see that resonate through the future of Tanzu and Project Pacific as we go forward.
>> Great. Let's connect the dots now between what we were doing at the CNCF KubeCon show and AWS re:Invent coming up. So Amazon might not let us use the word multi-cloud in that context, but absolutely that was the conversation at many of the other shows this year: hybrid cloud, multi-cloud, how customers get their arms around all these environments. So help us understand how this story that we were just talking about for cloud native environments fits into the broader public cloud discussion.
>> Oh, absolutely. So I think one of the key aspects to that is around consistency: being able, from a data protection perspective, to protect all that valuable data that you have, whether it's on premises, whether it's in the cloud, whether it's multi-cloud or hybrid. And you want to be able to protect that holistically, using the same capability you have, from your premises base into, or out of, or within the cloud. So I want to be able, within AWS, to protect my data from region to region. We've got a great offering for VMware Cloud on AWS; it allows you to protect into and within the cloud itself, so you can protect in and extend out to the cloud.
>> Yeah, definitely probably one of the most interesting partnerships the industry's been watching the last two years: VMware and AWS, the dominant virtualization player in your data center environment and the leader in public cloud. So, looking forward to hearing some proof points at the conference; can you give us a little bit of a hint as to what we'll be seeing and hearing about at the show?
>> Well, I think you'll hear a lot about that consistency with regards to observability, orchestration, and automation. Automation becomes so key, so that you take your workflows for data protection from premises to the cloud and have that consistency. I think you'll also see some pretty significant numbers coming forth with regards to how much data is being protected in AWS.
>> Okay, definitely looking forward to that; always love hearing from the customers. All right, Nelson, I want to give you the last word: what else should we be looking for from your team toward the end of 2019 and going into 2020?
>> Well, I think it all starts with cloud and multi-cloud; that's our core focus, that's what we're driven to. I think you'll see innovation especially in the cloud native space, and I think you will see further innovation in the cybersecurity and cyber recovery space around data protection. So I think those are really the key elements that you'll see more of from us.
>> Yeah, absolutely, super important discussions around data, around security, and everything there. Nelson, thank you so much for joining us here in the Cube.
>> Thank you, Stu.
>> All right, be sure to check out SiliconANGLE for exclusive content leading up to and after AWS re:Invent, of course, and check out theCUBE.net if you're not able to attend. If you are at the show, come to the center of the show floor at the Venetian, inside the Sands Convention Center; you can find myself, Dave Vellante, John Furrier, and our whole team there for three days of wall-to-wall coverage of our last big show of the year. I'm Stu Miniman; thank you for watching the Cube.
ENTITIES
Entity | Category | Confidence |
---|---|---|
12,500 | QUANTITY | 0.99+ |
Greg Milwaukee | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
November 2019 | DATE | 0.99+ |
2020 | DATE | 0.99+ |
Joe Bereta | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
8,000 people | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
San Diego California | LOCATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Sands Convention Center | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
DevOps | TITLE | 0.99+ |
Stu minimun | PERSON | 0.99+ |
last year | DATE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
Nelson | PERSON | 0.98+ |
Boston Massachusetts | LOCATION | 0.98+ |
Stu minimun | PERSON | 0.98+ |
first time | QUANTITY | 0.97+ |
Venetian | LOCATION | 0.97+ |
end of 2019 | DATE | 0.97+ |
WMC | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.95+ |
both | QUANTITY | 0.95+ |
this year | DATE | 0.95+ |
Elson | PERSON | 0.95+ |
vm | ORGANIZATION | 0.95+ |
John Ferrier | PERSON | 0.94+ |
first time | QUANTITY | 0.94+ |
VMworld | ORGANIZATION | 0.94+ |
one | QUANTITY | 0.92+ |
a couple of years ago | DATE | 0.91+ |
CMC | ORGANIZATION | 0.9+ |
Nelson Nelson Hsu | PERSON | 0.9+ |
Valero | ORGANIZATION | 0.9+ |
last couple years | DATE | 0.89+ |
vmware | ORGANIZATION | 0.89+ |
velaro | ORGANIZATION | 0.83+ |
vApps | TITLE | 0.83+ |
Delhi | LOCATION | 0.82+ |
pack L | PERSON | 0.81+ |
about 12,000 of our friends | QUANTITY | 0.78+ |
last two years | DATE | 0.76+ |
C F | TITLE | 0.76+ |
Q con | EVENT | 0.74+ |
first place | QUANTITY | 0.74+ |
C | TITLE | 0.69+ |
VMware cloud | TITLE | 0.62+ |
Keuka | ORGANIZATION | 0.51+ |
Valero | TITLE | 0.49+ |
MC | ORGANIZATION | 0.37+ |
Adam Bergh & Mark Carlton | NetApp Insight 2017
>> Narrator: Live from Las Vegas, it's the Cube. Covering NetApp Insight 2017. Brought to you by NetApp. >> Hello everyone, welcome back. We're live in here Las Vegas with NetApp Insight 2017. This is the Cube's exclusive coverage. I'm John Furrier, the host of Cube. Also co-founder of SiliconANGLE Media. My co-host Keith Townsend, CTO advisor, talking about the channels, talking about services, talking about data fabric. Our next two guests is Mark Carlton, it's the group technical director of Concorde Technology group, and Adam Bergh who's the data center practice director of Presidio. Guys you're on the front lines. Got the A-Team shirts on. Guys you're on the A-Team, which is a very high bar at NetApp, so congratulations. I've had a few on today already. What's exciting is that this whole digital transformation kind of cliche, it's kind of legit. It's happening. No brainer on that. But it's not a buzzword anymore, it's actually happening. Here's from the front lines. Share your perspective on what this means because most of the folks that are adopting data realize that it's not an after thought. It's fundamental, foundational thinking. But they're busy. They got a lot on their plate. They got dev option, the cloud, and on-premise transformation. They got data governance architecture. They got security practices that are being unbundled from IT. Internet of things over the top. All this stuff's happening. It's crazy. >> Yeah I mean you're absolutely right. So this concept of data transforming and data transformational services was sort of a buzz word three years ago, even when NetApp rolled out this concept of the data fabric right? It really was just a buzz word. It was an idea of freely moving your data in and out of multiple clouds. Not having siloed data. Being able to move your data where you need it when you need it. I mean we're really finally at this point in time, this inflection point where this is a reality for our customers. And I actually want to kind of bring up what NetApp announced here today at insight with ONTAP 9.3. So a little history lesson, NetApp has been promising this data fabric where they're able to freely move data in and out of their different portfolio products. And one of that vision was to move data between their SolidFire platform and their ONTAP platform. So there's two major platforms that they have in the all flash world. So with 9.3 and element 10, which was also announced simultaneously, we actually have the ability now to move data between these two platforms to really start to envision this data fabric world. So I'm really excited that we're actually seeing this vision that was kind of laid out by NetApp three and four years ago. >> That's super hard too by the way. It's not easy, but I got to ask you because, again, in the cloud world you see things like kubernetes, certainly containers has been the rage. But the orchestration aspect of cloud native services in apps is key. You're bringing up an issue around the data. Orchestration of data isn't easy. How do you do it? Okay you can, I get the announcement. SolidFire and ONTAP working well together in 9.3. Is it easy? >> Yep. >> Can you share your thoughts on how easy it is or what needs to be done to set up for that (mumbles)? >> We don't really talk about this, but I'm going to because we saw it today. Cloud orchestrator. >> Yep. 
>> So this is a gorgeous new interface that NetApp's putting out there to bring that reality of in going to click a button and I'm going to deploy a kubernetes workload. I'm going to deploy doc or I'm going to deploy workloads in Azure. I'm going to deploy a workload in ONTAP on-premises. I'm going to deploy a workload in AWS. And I'm going to be able to freely move that data. I've got a button that's going to make this, the data orchestration happen. It's really fundamentally changing something that's very complex into something that's very easy and accessible to most customers. >> And that's, by the way, the premise of multi-cloud too by the way. So you're saying that they're going to be able to orchestrate and move data across clouds? >> Yes. >> Seamlessly? >> Yeah, across clouds. >> That's hard to do. Mark you have a comment on that? >> Yeah and I think that's really given us the flexibility-- >> John: By the way, not a lot of companies do this probably? >> No, no. And that's why NetApp stands out. And this it makes the conversation with customers really easy now today when we're talking to customers. We're not talking about the technology all the time, we're talking about what you want to do. What do you want to do for your business? How do you want to use your data? How do you want to access your data? And the tools that NetApp are starting to bring out around this, and giving us the capability and flexibility to give control back to the customer. To do what they want to do at that time. They don't have to make them decisions now. So and having that so it's orchestrated across the multiple cloud platforms, and be able to move that data to where the data's best placed for what that business needs is a great conversation to have. We couldn't have that a few years ago. We weren't able to, you were talking about this with data. And now when I talk to customers, I talk about the data fabric, but I don't actually mention it. It's just a strategy in my head. So as I'm going through a conversations, I'm starting to under right what are you wanting to do and how you want me to point it out? >> John: It went from pipe dream to reality basically? >> Yeah. >> Alright so let me just get this so I get right 'cause this again, and we've been looking at this. Not a lot of people do it so we're tracking it. Multi-cloud certainly is what customers want. It's hard to get there. So the question is, every cloud's got a different architecture. S3 and Amazon then how you move and stack it from there is different. It's also different on-prem. So you go back and look at like I got Spark on this, Dupe on this, and I'm pipe lining data here. But then they pipeline it differently (mumbles). So you have different clouds, but then on-prem might be different. How does a, if a customer says okay bottom line me. On-prem, I can move data from on prem to the cloud or is it only across clouds? Or both? >> So we can move data freely, anywhere we want it today. >> Including on premise? >> Today. >> Okay. So let me paint you a picture. Traditional architectures, I'm going to talk about something like a flex pod architecture from NetApp in Sisco. That's your traditional, I'm running traditional workloads on premises. I need some of that data now to flow up into AWS. I spin up instantaneously a cloud ONTAP workload. I click a mouse button, I have a snap mirror to Amazon AWS. Wait a minute. I wanted that data over in Azure. 
I click a mouse button, I've spun up a Cloud ONTAP instance over in Azure, and I've SnapMirrored my data over there freely. I want that data back into an S3 type bucket down on-premises, I'm going to set up a StorageGRID Webscale workload. I can bring that data into an object S3 type data workload instantaneously. I have that data--
>> So you're abstracting away the complexity of the cloud so I don't have to rewrite code?
>> Adam: Absolutely.
>> Does it for you? Alright I'm going to throw-- you guys are good. Cracking the host here. You guys are killing me here. Good, you're good. Alright here's a tough one. Okay I got a policy question. I got a region in Germany. My data's in Germany, but I replicated it in the U.S., and I don't know what's going on over there. How does a customer deal with that, because now in cloud you got regional issues. You got GDPR now going on. So you're in the UK, you know what I'm talking about. So I check the box on the policy. I'm okay in Germany, but my data center in Ireland has replicated data.
>> Yeah.
>> So this is a real conflict in the privacy. How do you manage that? Is that managed? (speakers talk over each other)
>> It genuinely goes down to what sort of data, and what are they doing at the time, or what type of data you're collecting. The conversations I'm having with customers around GDPR as such, because in the UK we're talking about it all the time. Every customer is wanting to talk about how far down the road are they? Where are they? Try to build that foundation and understanding of--
>> Is that the number one thing you're talking about to customers, is GDPR right now?
>> GDPR comes up, you see, I wouldn't say it comes up in every conversation. I mean it has to. The main reason it has to is because now we've got that privacy by design, so you've got to start to understand as you're designing these solutions and you're designing where this data's going to sit--
>> And the deadline is looming right? I mean I don't know the exact date but--
>> May the 25th, 2018, and it's creeping up. Customers are still sat trying to think about GDPR. They--
>> They're procrastinating till, right.
>> Yeah. And I'll still walk into meetings and mention GDPR, and people will look at me and go, "Well what's that."
>> You're going, "You're screwed."
>> Yeah and we're just getting (mumbles).
>> Could be an interesting conversation.
>> Y2K all over again.
>> It is, and as soon as you start getting some (mumbles) conversations. But if you look at what Azure's doing around that, and AWS, and how they're strengthening that message. Some people are moving to something like an Azure cloud platform because of the GDPR capabilities and the security capabilities that it has, and how that-- And that goes for things like the Office 365 suites and those sorts of areas. Because you're able to start moving your data and freely have that movement, and then we go into things like Cloud Control and how you can back that up and how we can move the data again from NetApp. It's a software element that gives you the capability to back up Office 365 suites from one cloud to another cloud.
>> So GDPR, you see, as a big opportunity for cloud providers like Azure.
>> So long as it's--
>> They bring something to the table right?
>> Yeah they bring different things to the table. They bring, you have elements of data where you need that on-premise solution. You need to have control, and you need to have that restriction about where that data sits.
And some of the talks here that are going on at the moment is understanding, again, how critical and how risky is that data? What is it you're keeping, and what is-- How high does that come up in our business value it is? So if that's going to be your on-premise solution, then maybe other data that can go push out into the cloud. But I would say Azure, the AWS suites, and Google they are really pushing down that security. What you can do, how you can protect it, how you can protect that data, and you've got the capabilities of things like LSR or GSR on having that global reach or that local repositories for the object storage. So you can to control by policies, you can write into this country, but you are not allowed to go to this country and you're not allowed to go to that one. And cloud does give you that to a certain element, but also then you have to step back into maybe search the thing that-- >> So does that make cloud orchestrated more valuable or does it still got more work to do because under what Adam was saying is that the point and click is a great way to provision. >> Man: Mhm. >> Right? You can move onto other things pretty quickly. So in your scenario about the country nuances, does cloud orchestrator handle that too or? >> So the cloud orchestrator will, I mean the promise is that you will be able to pick and choose where you want your data to live. When you want it to move it tomorrow, you know you pick the data center, you pick the geo, you pick your AWS availability zone, and that's where you move your data. You'll have a drop down box that will show you a list of AWS availability zones where your data will live. So if you have specific requirements, specific compliancies that you need to abide by, that will be baked into the application. And if specific requirements change, you can change with it very, very easily. >> John: You can manage a policy to an interface. >> Managing the policy's very easily. And the point being is that we can no longer build silos where your data is stuck in the space that it is. Because of some things like GDPR in Europe or other regulations, you need to have the ability to move that data when you need to. Maybe even at a moment's notice. >> So I got to ask. This is obviously a pressing time in our country, obviously the attacks happened in Vegas. So a lot of people aren't going to make the trip here, have not made the trip, some people stayed at home. So I'd love to ask you guys if you can just take a minute for each of you to share what's exciting that's happening here. Because you know this is a cool announcement. Cloud orchestrator is getting a lot of good buzz. I've been watching the feedback on Twitter from some of the influencers and some of the practitioners. We had a previous guest mention it. What's ah-ha moment here for folks that should know about what's happening that might have missed it because they couldn't make it? >> So I don't know. For me the ah-ha moment was when they said NetApp was finally delivering that the real vision of NVME over fabrics. So we've had a lot of, there's a lot of other storage partners out there that have been talking about NVME as this game changing platform, but really what they're doing's NVME on the backend. Really the promise of NVME is the over the fabric portion of it. NetApp is building into their flagship ONTAP platform a checkbox that says, "I'm going to make this NVME over fabric. "I'm going to make this "storage class memory as a check box." >> John: What's the impact of customers? 
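The policy-driven placement described here — data may be written to some countries or regions but not others, enforced before any move happens — can be illustrated with a small residency check. This is a generic sketch of the idea only, not NetApp's or the Cloud Orchestrator's actual implementation; the region names, policy shape, and function names are all invented for illustration.

```python
# Illustrative only: a toy data-residency policy check, not a real NetApp API.
ALLOWED_TARGETS = {
    # dataset classification -> regions where replicas are permitted
    "gdpr-eu-personal": {"aws:eu-central-1", "azure:germanywestcentral", "onprem:frankfurt"},
    "public-marketing": {"aws:us-east-1", "aws:eu-central-1", "azure:westus2"},
}

def can_replicate(dataset_class: str, target_region: str) -> bool:
    """Return True only if policy allows a replica of this dataset in the target region."""
    return target_region in ALLOWED_TARGETS.get(dataset_class, set())

def plan_replication(dataset_class: str, targets: list) -> list:
    """Keep only the targets the policy allows; refuse the rest up front."""
    allowed = [t for t in targets if can_replicate(dataset_class, t)]
    refused = [t for t in targets if t not in allowed]
    if refused:
        print(f"Refusing {dataset_class} replicas in: {', '.join(refused)}")
    return allowed

if __name__ == "__main__":
    # A German dataset replicated to Ireland would be caught before any data moves.
    print(plan_replication("gdpr-eu-personal",
                           ["aws:eu-central-1", "aws:eu-west-1"]))
```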
>> Impact is ultra low latency. Latencies that you can't even achieve with SSDs today. Even with SSDs, NVME on the backend of your controllers. It really is going to enable the high quality analytics. The data services that we just couldn't even achieve at one millisecond latencies, we're down into sub millisecond. .1 millisecond latencies. >> John: So huge performance gains? >> Huge performance gains. It's really going to enable a whole new suite of ideas that we can't even think about. >> And developers will win on this too. It makes data more valuable (mumbles). Mark thoughts on what's exciting here for the folks that couldn't make it? >> I think from my point of view it is that going into orchestration and management point. So leading on from really what Adam was saying then, you were going into developers and how they're going to get the benefit of working with the more performing kit, easier to manage, so they can start to develop that. The orchestration and management and the provisioning and being able to roll out these environments. There's the plugins to some of the areas that we talked about today, and the expansion of that management suite and the ease of that management suit for multiple different users to be able to benefit from it. I want to say from a development and a, or a customers side: the easier we make it to manage, the infrastructure you kind of forget about. Which means you can start to concentrate on the application, how you deliver, what you deliver. And that's really where I see NetApp moving too. It's taken it away from this is the infrastructure and you've got a flexpod, taking it to the next level and going, "Right okay. "Now let's show you what we can do "and how you can use this infrastructure "to be able to benefit your business." And that's one of the big things that I am starting to see. >> The thing I am excited about is the pub initiative. The NetApp.io is the URL. ONTAP, pun intended, you know beer. The developer dev-op story is coming together. I think when you combine some of the Invenio fabric issues is look at the developer pressures to make the infrastructure programmable. That's a huge challenge, and automation's got to be enabled. So I'd love to get your thoughts on how NetApp is positioned visa via what customers want to get to which is, I call self driving infrastructure. Larry Elson calls it self driving databases. But that's pretty much what we want. You want to have under the hood stuff work. But it's the developers and it's using the data in a programmatic way to do automation, hit that machine learning, some of that bounded activity's going to be automated, but then the unbounded data analytics starts to kick in really nicely. >> So element OS is really one of NetApp strategies of what they're calling the next generation data center. And I kind of talk about it with customers as we call it transparent infrastructure to your developers and dev-ops teams. Infrastructure that they don't even have to carry about, care about. That it's highly scalable, highly performant, API driven, cloud like architectures, but on-premise, on-premises so you don't have to worry about cloud sort of data security issues, encryption issues up in the cloud. So you have that cloud like transparent architecture. I mean who knows what hardware runs in the cloud. Do you know what hardware runs in AWS Azure? We don't really care right? >> John: They make their own. >> Yeah we don't care. It works right? 
It's transparent to the end user, and that's what NetApp is promising really. >> John: Well server-less looks good too right? >> Yeah absolutely. >> Interesting. >> That's really what we're talking about, and that's element OS from NetApp is really the heart of that sort of story. >> Alright so take a step back. You guys are very successful, super smart. Thanks for sharing. It's great conversation, wish we had more time. But the role of the channel is changing. It used to be move boxes through the channel back in the day. That's no longer a storage company. They're a data company, I get that. High level message. I get the positioning. But the reality is you still need to gear to store the stuff on. So still some business there, but the role of the channel and the providers, whether you call em VARS or global (mumbles). You guys in particular have a lot of expertise. The cloud guys are very narrow. They get all the large scale business. But as these solutions start to become vertical, you need data that's specialized to the app, but you want the horizontally scalable benefits of the infrastructure. So you got to balance specialism, which is domain expertise, in a vertical and general, scalable cloud. So that means it's an opportunity for the channel to be basically cloud providers. So the question is, is that happening in your mind? Do you see that playing out because that means bringing technology to the table and using native clouds, not cloud natives, like the native infrastructures of service. 'Cause the action SaaS. Everyone's going to be a SaaS company. >> I mean we're fundamentally turning Presidio in from that traditional, "Hey we're slinging hardware" to a data service is a data management and cloud consulting model where we're even developing our own cloud based tools. Our own cloud based orchestration tools. So we're developing a tool called cloud concierge. So cloud concierge is something that we're not even going to charge for, but what it does it multi-cloud management on-premises, point and click deployment models. Single point of billing infrastructure for multi-cloud charge back and other features like that. So that's where we really see the future of a company like Presidio is something like cloud concierge. >> 'Cause you could bring a lot to the table, so why not build your own tech on top of clouds. >> So we're really becoming a tool company where we're developing our own intellectual property-- >> It's kind of a loaded question, but you guys are on the front lines. It's really kind of, it's more of a directional thing. Mark do you see the same thing in the UK? >> Yeah I was going to say from my point of view we, in our company we deliver infrastructure as a service, platform as a service, backup as a service. So there's lots of different cloud elements that we build within the company. Really that's driven through the conversations, again, we're having with customers. And customers don't, the customers we're talking to and the customers in UK, a lot of them don't jump straight into a cloud opportunity. It's either, like a little bit of data, see what it does, make sure it's the right application. But the, again, that conversation. Because it's changing, our business is having to change. >> Well the purpose of sales channels is to have indirect sales. And companies can't hire people fast enough that actually know the domain specific things. So I see the trend really moving fast along the lines of the specialty channel partners now turning into actual technology partners. 
>> Yes-- >> So that's going to be a threat to (mumbles) of the world. >> And that's the thing. That's one of the key things. Customers when I talk to them, they're not looking for a partner to sell them something. They're looking for a partner to help them strengthen their IT solutions. >> John: And cross the bridge to the future. >> Yeah. And that's it. And they want a partner they can grow with and keep moving with-- >> Keith you want to get a question in edgewise here? I mean come on buddy. (laughs) >> It was pretty tough. Actually I would like to bring it back to the technology. I'm a technologist at heart. And while this sounds great and magical, one of the practical problems we run into in this type of data mobility is cost and just size of data. So... Let's operationalize this. Bring this down to the ops guy. When, at the end of the month, am I going to see a large egress bill from AWS, Azure. At the end of the month am I going to have the equivalent of bad MPV scores from my internal developers just saying, "Yeah I asked for the data to be moved "from AWS to Azure, "but it was several terabytes and it took several days." So operationalize this for me. Bring it down to the ops perspective. Where is the op cost in this solution. >> NetApp has some really cool technologies around this. I want to talk about one or two real quick. NetApp private storage. This is your own hardware connected to multiple clouds. You want to take that cloud from IBM SoftLayer to Azure to AWS, the data doesn't even have to move. You're basically making a cloud connect through an Equinex data center into multiple clouds. You have the ability to have zero egress charges and multi-cloud hyper scaler access for that for those analytical services. That's one solution. Another one is what's rolling out in the new storage grid web scale 11.0 that NetApp just announced today. It's complete hooks into AWS for all their analytical tools that are prebuilt in AWS. So your data can live on-premises in your own S3 buckets, but you can make API calls into AWS when certain data changes. Where you have the analysis happening in the cloud on your data, but your data never leaves your own physical hardware where you control the data governance of that data. So there are solutions out there that NetApp is really on the forefront of solving these solutions where-- I want my data on-premise. I don't want to pay egress charges, but I still want to take advantage of these amazing services that AWS and Azure are putting together. >> So speedlight. I think we still need to answer that speedlight problem. You know I have, let's say that I go with a CNF like Equinix, and Equinix has data centers across the U.S. and the world practically. But data still has gravity. I can't magically move terabytes of data from one facility, CNF, to another one. What are the limits of the technologies? Where can we go? What are other solutions we need to probably take a look at when it comes to sharing data across geographic regions? >> Yeah so I would say from my point of view, this is when things come into such as our (mumbles) region. And you look at what we're doing with the SJ platforms and how they spread those out because their repositories are moving that data about. And how you can drive that policy driven, you're writing into one place in the background. Then the data is seamlessly moving between different areas. If it's something like a migration where you're actually moving data from one platform to another, there's tools. 
If you think of things within the MPS solution, which Adam talked about earlier, if it was set within a Equinix building, and you had your express routes and you had your direct connects into the cloud providers that are there, you can use tools that are built into NetApp to actually be able to move that data between those cloud providers or change the VMs and such. It's the virtual machines from a VM platform or hyper V platform, or whichever it'd be to be able to move that using an on command shift tool. So no data is having to move. You're not having to, you've got none of those costs. I think from a management, because of how easy it is to move the data or of the control we have over data now. Using things like OCI and those tools to be able to manage and understand what your costs are, what the drawbacks are, understand where you've got VMs. Do you use that data? A lot of customers don't have that insight. They will go, "I need to move 10 terabytes." Because they think that's what they have. Realistically, 8 terabytes of that data has been sat there, not touched for the last 10 years. And if you move all that 8 terabytes, it's going to cost you money because it's just going to be sat there. You need to move the data that you need to work with. And that's one of the conversations that I have with customers today. It's not about just throwing everything up into the cloud 'cause that's not always the cost effective solution. It's about putting the right data into the right place and the right file solution. So it might be one terabyte needs to go there, but it's what you're going to do with it. Are you going to use it primarily to run analytics again to start to use it to drive the business forward, or is it a terabyte that you're going to sit there and archive. >> Yeah the cheapest data, the cheapest faster data transfer is that transfer you never have to make. So if you don't have to make the data transfer, you'll save money in both time and cost for moving that data. I really appreciate that feedback. >> Guys thanks for coming on the Cube. The A-Team, love when it comes all together. Love the riff on the A-Team. But the bar is high. You guys are really smart. Love the conversation goin back and forth. You guys are answering all the tough questions. Final question for you is, you're on the front lines. The world's changing. What's the advice to your peers out there that are watching? How to attack this environment because how do you win under this pressure? It's a hard game right now, a lot of hard stuff's being done. Whether that's cloud architecting, that's on-prem private cloud, or moving to the cloud. A lot of heavy lifting's going on. It looks easy. I want the magic. I want push button cloud orchestration to consumer apps. Your advice. >> Find a strong partner. So I mean if you're going out there, you're not going to be able to learn everything yourself. You want to have a strong partner that's got a big team. A team that has the breath and scope to deal with some of the big challenges out there that can put together best of breed solutions from multiple vendors. So not just NetApp, not just our cloud partners, but someone who has the breath and depth and scope. Find that right partner that's good for you and your organization. >> John: Mark? >> And I agree in the way of the partnership side of things. That's really what's going to drive customers. In making sure that you've got a partner that you can rely on to be able to move forward. 
Make sure they can help you understand your business, but you clearly understand what your business is trying to achieve. So it's, I ask people today what's your business? Do you understand your business? Do you understand your customers? And a lot of the time it's yeah. We understand what they do. But they don't understand the business. And it's key to understanding what you need to do, how you need to achieve it, and having a partner that can support you through that phase. >> Awesome, great. Thanks for coming on. I really appreciate it. I would add community as the open source continues to grow, big part of it. Being part of the community, being great partnerships, being transparent. It's the Cube bringing all the data to you here live in Las Vegas for NetApp Insight 2017. I'm John Furrier with Keith Townsend. More live coverage after this short break. >> Woman: Calling all barrier breakers, status quo smashers, world changers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mark Carlton | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Adam Bergh | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
Ireland | LOCATION | 0.99+ |
Larry Elson | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
10 terabytes | QUANTITY | 0.99+ |
8 terabytes | QUANTITY | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
Vegas | LOCATION | 0.99+ |
NWS | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
U.S. | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
Office 365 | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
IBM | ORGANIZATION | 0.99+ |
GDPR | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
Mark | PERSON | 0.99+ |
ONTAP | TITLE | 0.99+ |
Presidio | ORGANIZATION | 0.99+ |
NetApp | TITLE | 0.99+ |
two platforms | QUANTITY | 0.99+ |
one millisecond | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
two major platforms | QUANTITY | 0.98+ |
three years ago | DATE | 0.98+ |
one terabyte | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
two guests | QUANTITY | 0.97+ |
ONTAP 9.3 | TITLE | 0.97+ |
each | QUANTITY | 0.97+ |
one solution | QUANTITY | 0.97+ |