Richard Hartmann, Grafana Labs | KubeCon + CloudNativeCon NA 2022
>>Good afternoon everyone, and welcome back to theCUBE. I am Savannah Peterson here, coming to you from Detroit, Michigan. We're at KubeCon Day three. Such a series of exciting interviews. We've done over 30, but this conversation is gonna be extra special, don't you think, John? >>Yeah, this is gonna be a good one. Grafana Labs is here with us. We're getting into the conversation of what's going on in the industry: management, watching the Kubernetes clusters. These have been large scale conversations this week. It's gonna be a good one. >>Yeah, yeah, I'm very excited. He's also got a fantastic Twitter handle, @TwichiH. Please welcome Richie Hartmann, who is the Director of Community at Grafana. Richie, thank you so much for joining us. >>Thanks for having me. >>How's the show been for you? >>Busy. I mean, in a word: busy. I have a ton of talks, like the maintainer track stuff, and the governing board sessions, and the TOC panel. I run Prometheus Day. So it's been busy. Monday I didn't have to run anything — that was quite nice. >>But you have your hands in a lot. I'm not even gonna cover it. Looking at your bio, there are so many different things that you're working on. I know that Grafana specifically had some announcements this week. >>Yeah, yeah. We had quite a few. The two largest ones: one, we now have a full Kubernetes integration on Grafana Cloud. Our approach is generally extremely open source first, so we try to push stuff into the exporters — like into the open source exporters, into mixins, into things which are out there as open source for anyone to use. But that's a little bit like a toolset, not a ready-made solution. So when we talk integrations, we actually talk about things where you get this one-click experience: you log into your Grafana Cloud, you click "I have a Kubernetes" — which probably most of us have — and things just work. You just ingest the data. You don't have to write dashboards, you don't have to write alerts, you don't have to write everything just to get started. You get extremely opinionated dashboards, SLOs, alerts — again, all those things made by experts, so anyone can use them and you don't have to reinvent the wheel for every single user. So that's the one. The other is— >>It's a big deal. >>Oh yeah, it is. Yeah, it is. We're investing heavily in integrations because — I mean, I don't have to convince anyone that Prometheus is the de facto standard in everything cloud native. But again, it's sometimes a little bit hard to handle, or a little bit not easy to get into. So smoothing this path of onboarding yourself onto this stack and onto those types of solutions — yes, that is what a lot of people need. Because if you look at the statistics from KubeCon — and we just heard this in the governing board session yesterday — like 60% of the people here are first-time attendees. So there are a lot of people who just come into this thing and who need, like: this is your path, this is where you should be going. Or at least, if you want to go there, this is how to get there. >>Here's your runway for takeoff. >>Yes. Yeah. >>I think that's a really good point. And I love that you had those numbers. I was curious — I had seen on Twitter, speaking of Twitter, that there were a lot of people here coming for the first time. You're a community guy. Are we at an inflection point where this community is about to continue to scale? >>That's a very good question —
which I can't really answer. So, I mean— >>Obviously, but I bet you're gonna try. >>COVID changed a few things. Yeah. Probably, most people— >>"A couple things." I mean, you know, casually — that's like such a gentle way of putting it. That was beautiful. I'm gonna say yes — we're just about to explode. All these newcomers are gonna learn Prometheus. They're gonna roll in with OpenMetrics, OpenTelemetry. I love it. >>You know, but at the same time, KubeCon is ramping back up. But if you look at the registration numbers between Valencia and Detroit, it was more or less the same. >>Interesting. >>So it didn't go back onto this steep trajectory which it was on up to 2019. I expect this to pick up again, but also with the economic situation and everything, I don't think— >>I think the jury's still out on hybrid. I think there's a lot more hybrid. Let's see how the projects are gonna go — that's what I think is gonna be the telltale sign. How many people are participating? How are the projects advancing? Some of the momentum— >>I mean, from the project level, most of this is online anyway. Of course — that's how open source works, right? It's been working for— >>Ages. >>That's because you don't have any travel budget, or any office, or— >>It's always been that way. >>Yeah, precisely. So the projects are arguably spearheading this development, and the online numbers — I have some numbers in my head, but I'm not a hundred percent certain — but they're higher for this time in Detroit than in Valencia, as far as I'm aware. >>Cool. >>So that is growing, and it's grown in parallel, which also is great, because it's much more accessible, much more inclusive. You don't have to have a budget of at least, let's say, I don't know, two to five K to fly over the pond and attend this thing. You can just do it from your home. So that is a lot more inclusive. And I expect this to basically be a second, more or less orthogonal, growth path. But the best thing about KubeCon is the hallway track — just meeting people, talking to people — and that kind of thing is not really possible online. >>It's great to see people in person. >>No, and it makes such a difference. I mean, yeah — even interviewing people in person too. And this whole — I mean, CNCF, this whole community — every company here is community first. It's how these projects come to be. I think it's awesome. I feel like you've got something you're dying to say, Johnny. >>Yeah, and I love some of the advancements. Richie, we talked last time about, you know, OpenTelemetry, OpenMetrics — you're involved in dashboards. One of the themes here is ease of use, simplicity, developer productivity. Where do you see ease of use going from a project standpoint? Prometheus, as you mentioned, is everywhere — it's in pretty much all corners of the world — and new people are coming in. How are you making it easier? What's going on? Give us the update on that. >>So, funnily enough, we had precisely this topic in the TOC panel just a few hours ago — about ease of use and about how to make things easier to handle. Like, if developers just want to get into the cloud native scene, they have — we did some napkin math — maybe 10 tools at least which you have to be somewhat proficient in to just get started, which is honestly horrendous. Because—
Like, with a server, I just had my server, installed my thing, and it ran. Maybe I need a database, but that's roughly it. And this needs to change again. Like, it's nice that everything is unbundled and you don't have those service boundaries which you had before. You can do all the horizontal scaling, you can do all the automatic scaling — all those things, they're super nice. But at the same time, this complexity, which used to be nicely compartmentalized, was deliberately broken up. And so it's becoming a lot harder to — like, we need to find new ways to compartmentalize this complexity back to human-understandable levels again, in particular as we keep onboarding new and new and new people. Of course it's just not a good use of anyone's time to learn the basics again and again and again. This is something which should be just compartmentalized and automated away. >>The three of us were talking to Matt Klein earlier, and he was talking about how, as projects become mature and all over the place and have reach and usage, you gotta work on the boring stuff. Yes — and when it's boring, that means you have success. But then you gotta work on the plumbing. What are some of the things that you guys are working on? Because people are relying on the product. >>Oh yeah. So, with my Prometheus head on, the highlight feature is exponential, or native, or sparse histograms — there are like three different names for one single concept. If you know Prometheus, you currently have hard bucket boundaries, where I say my latency is lower or equal to two seconds, one second, a hundred milliseconds, what have you, and I can put stuff into those histogram buckets according to those predefined levels. Which is extremely efficient on the code level, but it's not very nice for the humans, because you need to understand your system before you're able to choose good cutoff points. And if you add new ones, that's completely fine. But if you want to actually change them — because you figured out that you made a fundamental mistake — you're going to have a break in the continuity of your observability data, and you cannot undo this into the past. So that's just gone. Native histograms, on the other hand, allow me to — okay, I'm not going to get into the math, but basically you define a single formula, and there comes a good default. If you have good reasons, then you can change it. But if you don't, just don't touch it. >>For the people who are into the math: hit him up on Twitter — @TwichiH — he'll get you that math. >>So the— >>The thing is, people want the math, believe me. >>Oh yeah. I mean, we don't have time, but hit him up. >>There's PromCon in two weeks in Munich, and there will be a whole talk about the dirty details of all of this stuff. But the high-level answer is: it just does what people would expect it to do, and with very little overhead you get very high resolution histograms, which is really important for a lot of use cases. But this is not just Prometheus. With my OpenMetrics head on: the 2.0 feature — like the breaking highlight feature of OpenMetrics 2.0 — will be, you guessed it, precisely the same. And with my OpenTelemetry head on: lo and behold, the same underlying technology is being put, or has been put, into OpenTelemetry.
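For readers who want to see the difference Hartmann is describing, here is a minimal sketch using the Prometheus Go client (client_golang). The native-histogram option names reflect recent releases of that library, and the metric names and numbers are purely illustrative assumptions, not anything taken from the interview.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Classic histogram: hard, hand-picked bucket boundaries (100ms, 1s, 2s).
// Changing these later breaks the continuity of the recorded data.
var classicLatency = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "demo_request_duration_classic_seconds",
	Help:    "Request latency with fixed bucket boundaries.",
	Buckets: []float64{0.1, 1, 2},
})

// Native ("sparse"/"exponential") histogram: no hand-picked buckets.
// A single growth factor defines the bucket schema, and resolution
// adapts to wherever the observations actually land.
var nativeLatency = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:                           "demo_request_duration_native_seconds",
	Help:                           "Request latency as a native histogram.",
	NativeHistogramBucketFactor:    1.1, // ~10% relative bucket width
	NativeHistogramMaxBucketNumber: 100, // cap memory per series
})

func main() {
	go func() {
		for {
			d := rand.ExpFloat64() / 2 // fake request latency in seconds
			classicLatency.Observe(d)
			nativeLatency.Observe(d)
			time.Sleep(100 * time.Millisecond)
		}
	}()
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

On the server side, Prometheus of that era also needed the `--enable-feature=native-histograms` flag and a protobuf-capable scrape before it would ingest the native representation — treat that as an assumption to check against current documentation.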
And we've worked for months and months and months — and even longer — between all the different projects to assert that we have one single standard which is actually compatible across all of them. Because one of the worst things you can have in the cloud ecosystem is if you have subtly different things and they break in subtly wrong ways. It's much better to just not work than to break in a way which is just a little bit wrong, because you won't figure this out until it's too late. So we spent — like, with all three hats on — we spent insane amounts of time on making this happen and making this nice. >>Savannah, this is one of those things — we have so much going on at KubeCon. I mean, you're unpacking probably another day of Cube. We can't go four days. But OpenTelemetry— >>I know, I know. I'm the same. >>OpenTelemetry— >>Challenge accepted. >>Sorry, we're gonna stay here. >>They shut the lights off on us last night. >>They were literally gonna pull the plug on us. Yeah, yeah. They've done that before. It's not the first time — we go until they kick us out. We love doing this. But OpenTelemetry's got a lot of news too, and we haven't really talked much about that. >>We haven't at all. >>So there's a lot of stuff going on that — I won't call it boring; that's like a code word, that's Cube talk for "it's working." So it's not bad, but there's a lot of stuff going on: OpenTelemetry, OpenMetrics. This is the stuff that matters, because when you go large scale, that's key. So what are we missing? What are people missing? What's going on at the show that you think is not actually being reported on? I mean, there's a lot of hype — WebAssembly, for instance, got a lot of hype. >>Oh yeah, I was gonna say — I'm glad you're asking this, because you've already mentioned about seven different hats that you wear, and I can only imagine how many hats are actually in your hat cabinet. But you are someone with your fingers in a lot of different things, so you can kind of give us a state of the union. So go ahead, let's talk about it. >>So I think you already hit a few good points. Ease of use is definitely one of them — improving the developer experience and not having this valley of pain. That is one of the really big ones. It's going to be interesting, because it is boring. It is janitorial, and it needs a different type of persona. A lot of — or maybe not most, but a large fraction of — developers like the shiny stuff. And we could see this in Prometheus, where initially the people who contributed the most were those restless people who need to fix that one thing: "this is impossible? I'm going to do it." That changed over the years, and the people who now contribute the most are of the janitorial type: keep things boring, keep things running — still substantial changes, but more on the maintenance level. >>Yeah, the maintainers. I was just gonna bring that up. >>Yeah — on the "keep things boring while still pushing them forward." And the thing about ease of use is: a lot of this is boring, a lot of this is strategy, a lot of this is toil. A lot of this takes lots of research, also in areas where developers are not really good, like UX for example, and UI — most software developers are really bad at those, because they just think differently from normal humans, I guess. >>So that's an interesting observation that you just made —
we could unpack that on a whole 'nother show as well. >>So the thing is, this is going to be interesting for the open source scene, because it needs deliberate investment by companies who assign people to those projects and say, okay, fix that one thing, or make it easier to use, what have you. That is a lot easier with first-party products and projects from companies, because they can invest directly into the thing and they see much more of a value prop. It's kind of normal by now to allow developers, or even assign developers, onto open source projects. That's not so much the case for the TPMs, for the architects, for the UX and UI people, for the documentation people — there's not as much awareness that this is also driving value for everyone, and so there's just not as much of it. >>Yeah, that's a great point. This whole workflow, this production system of open source, which has grown and keeps growing and will keep growing — these need to be funded. And one of the things we were talking about earlier in another session is the recession we're potentially hitting, and the global issues, the macroeconomics, that might force some of these projects or companies not to get VC funding. >>It's such a theme at the show. >>So to me, it's just not about VC funding. There are other funding mechanisms that are community oriented; there are companies participating; there are other mechanisms. Richie, if you could have your wishlist of how things could progress in open source, what would you want to see happen in terms of how things are funded, how things are executed? Because developers are going to run businesses. Because ultimately, if you follow digital transformation to completion, IT and developers aren't a department serving the business — they are the business. And that's coming fast. You know, what has to happen, in your opinion? If you had the magic wand, what would you snap your fingers to make happen? >>If I had a magic wand — that's very different from what is achievable. But let's— >>Okay, go with the magic wand first, because we'll riff on that. >>I'm here for dreams. >>Yeah. I mean, I've been in open source for more than two decades now, and most of open source is being driven forward by people who are not being paid for it. For example, Grafana is the first time I'm actually paid by a company to do my community work. It's always been on the side — of course I believe in it and I like doing it, I'm also not bad at it, and so I just kept doing it. But it was at night, on the weekends, and everything. And to be honest, it's still at night and on the weekends, but the majority of it is during paid company time, which is awesome. Most of the people who have driven this space forward are not in this position. They're doing it at night, they're doing it on the weekends, they're doing it out of dedication to a cause. >>The commitment is insane. >>Yeah. At the same time, you have companies — mostly hyperscalers — and either they have really big cloud offerings or they have really big advertisement businesses, or both. And they're extracting a huge amount of value which has been created in large part elsewhere. Like, yes, they employ a ton of developers, but a lot of the technologies they build on — the shoulders of the giants they stand upon — those people are really poorly paid.
And there are some efforts to, like — I think there are foundations which redistribute a little bit of money and such. But if I had my magic wand, everyone who is in open source and actually drives things forward gets, I don't know, 20% of the value which they create, just magically, somehow. Or other companies don't extract as much value and redistribute more — like, put more full-time engineers onto projects, or whichever. That would be the ideal state, where the people who actually make the thing out of dedication are not more or less left on the sideline. Of course, they're too dedicated to just say, "Okay, I'm not doing this anymore, you figure this stuff out," and let things crumble and falter. So, I mean, it's like with nurses and such, who know they have something which is important and they keep doing it because they believe in it. >>I think this is an opportunity to start messaging this narrative, because — yeah, absolutely — now we're at an inflection point where there's a big community, and there is a shared responsibility, in my opinion, to not just spread the wealth but make sure that it's equitably balanced. And I think there's a way to do that. I don't know how yet, but I see, more than ever, that it's not just: come in, raid the kingdom, steal all the jewels, monetize it, and throw some token money around. >>Well, and the burnout. Yeah. I mean, the other thing that I'm thinking about too is — it's the financial aspect of this, it's the cognitive load. And I'm curious, actually, when I ask you this question: how do you avoid burnout? You do a million different things, and, you know, I'm sure the open source community shares that passion. >>Yeah. >>So do you just write code? >>Oh, my software engineering days are firmly over. I'm the cat herder and the janitor and that type of thing — I don't really write code anymore. >>So how do you avoid burnout? >>So, I did crash into burnout a few years ago. It was not nice, but that was still when I had a full day job, and that day job was super intense, and on top I did all the things. To be honest, a lot of the people who do this are really dedicated and are really bad at setting boundaries between work— >>And personal life. That's why I bring it up. Literally why I bring it up. >>I'm firmly in that area, and I don't claim I have this fully figured out yet. It's also, to some extent, even more risky, because it's good if you're paid for this and you can do it during your work time. But on the other hand, if it's so nice — like, if your hobby and your job are almost completely intersectional— >>The lines get really blurry. >>Yeah. And then, with work from home, you don't even commute anywhere anymore. You just sit down at your computer, and you just have fun doing your stuff, and all of a sudden it's deep at night and you're still like, I want to keep going. >>Sounds like... God, something cute. >>I know. I was gonna say, passion is something we all have in common here on this set. >>That's the key. That is the key point there: the passion project becomes the job. But now the contribution piece is interesting, because this ecosystem has a commercial aspect. Again, this is the balance between commercialization and keeping that organic production system that's called open source.
I mean, it's so fascinating, and this is amazing. I want to continue that conversation. >>It's awesome. Yeah. >>This is great. Richard, this entire conversation has been excellent. Thank you so much for joining us. How can people find you? I mean, I'll give them your Twitter handle, but if they wanna find out more about Grafana, Prometheus, and the 1,700 things you do— >>For Grafana, grafana.com; for Prometheus, prometheus.io; for my own stuff, github.com/RichiH/talks. I track all my talks in there. I currently don't have a personal website because I stopped bothering, but that repository is very much where you find what I do — for example, the recording link for this will be uploaded to that GitHub. >>Yeah, great follow. You also run a lot of events and a lot of community activity — congratulations to you. Also, I talked about this last time: the largest IRC network on Earth, you ran that. Built a data center from scratch. What happened? You done that? >>Haven't done a— >>He hasn't yet built a cloud hyperscaler to compete with Amazon. That's the next one. Why don't you put that on the plate? >>We'll be sure to feature whatever Richie does next year on theCUBE. >>I'm game. Yeah. >>Fantastic. On that note — Richie, again, thank you so much for being here. John, always a pleasure. Thank you. And thank you for tuning in to us here, live from Detroit, Michigan, on theCUBE. My name is Savannah Peterson, and here's to hoping that you find balance in your life this weekend.
Richard Hartmann, Grafana Labs | KubeCon + CloudNativeCon Europe 2021 - Virtual
>>from around the >>globe. It's the >>cube with coverage of Kublai >>Khan and Cloud Native Con Europe 2021 >>virtual brought to >>you by red hat, the cloud native computing foundation and ecosystem partners. Hello, welcome back to the cubes coverage of coupon 21 Cloud Native Con 21 Virtual, I'm John Ferrier Host of the Cube. We're here with a great gas to break down one of the hottest trends going on in the industry and certainly around cloud native as this new modern architecture is evolving so fast. Richard Hartman, director of community at Griffon, a lab's involved with Prometheus as well um, expert and fun to have on and also is going to share a lot here. Richard, thanks for coming. I appreciate it. >>Thank you >>know, we were chatting before we came on camera about the human's ability to to handle all this new shift uh and the and the future of observe ability is what everyone has been talking about. But you know, some say the reserve abilities, just network management was just different, you know, scale Okay, I can buy that, but it's got a lot more than that. It involves data involves a new architecture, new levels of scale that cloud native has brought to the table that everyone is agreeing on. It scales their new capabilities, thus setting up new architectures, new expectations and new experiences are all happening. Take us through the future of observe ability. >>Mhm. Yes, so um 11 of the things which many people find when they onboard themselves onto the cloud native space is um you can scale along different and new axis, which you couldn't scale along before, uh which is great. Of course, it enables growth, it enables different operating models, it enables you to choose different or more modern engineering trade offs, like the underlying problems are still the same, but you just slice and dice your problems and compartmentalize your services differently. But the problem is um it becomes more spread out and the more classic tooling tends to be built for those more classic um setups and architectures as your architecture becomes more malleable and as you can can choose and pick how to grow it along with which access a lot more directly and you have to um that limits the ability of the humans actually operating that system to understand what is truly going on. Um Obviously everyone is is fully fully all in on A. I. M. L. And all those things. But one of the dirty secrets is you will keep needing domain specific experts who know what they're doing and what that thing should look like, what should be working hard to be working. But enable those people to actually to actually understand the current state of the system and compare this to the desired state of the system. Is highly nontrivial in particular, once you have not machine lifetimes of month or years which he had before, which came down to two sometimes hours and when you go to Microsoft to surveillance and such sometimes even into sub seconds. So a lot of this is about enabling this, this this higher volume of data, this higher scale of data, this higher cardinality of what what you actually attach as metadata on your data and then still be able to carry all this and makes sense of it at scale and at speed because if you just toss it into a data lake and do better analysis like half a day later no one cares about it anymore. It needs to be life it needs or at least the largest part of it needs to be life. You need to be able to alert right now if something is imminently customer facing. >>Well, that's awesome. 
I love totally agree this new observe ability horizontally scalable, more surface area, more axes, as you point out, changes the data equation on the automation plays a big role in mention machine learning and ai great, great grounds for that. I gotta ask you just well before we move on to the next topic around this is that the most people that come from the old world with the tooling and come from that old school vendor mentality or old soup architecture, old school architecture tend to kind of throw stones at the future and say, well the economics are all wrong and the performance metrics. So I want to ask you so I assume that we believe we do believe because assume that's going to happen. What is the economic picture? What's the impact that people are missing? When you look at the benefits of what this system is going to enable the impact? Specifically whether it's economics, productivity, efficient code, what are some of the things that maybe the VCS or other people in the naysayers side? Old school will, will throw stones at what's the, what's the big upside here? >>Mhm. So this will not be true for everyone and there will still be certain situations where it makes sense to choose different sets of of trade offs, but most everyone will be moving into the cloud for for convenience and speed reasons. And I'm deliberately not saying cost reasons. Um the reason being um usually or in the past you had simply different standard service delineations and all of the proserve, the consulting your hiring pool was all aligned with this old type of service delineation, which used to be a physical machine or a service or maybe even a service and you had a hot standby or something. If we, if we got like really a hugely respect from the same things still need to operate under laying what you do. But as we grow as an industry, more of more of this is commoditized and same as we commoditize service and storage network. We commoditized actually running off that machine and with service and such go even further. Um so it's not so much about about this fundamentally changing how it's built. It's just that a larger or a previously thing which was part of your value at and of what you did in your core is now just off the shelf infrastructure which you just by as much as you need again at certain scales and for certain specific use cases, this will not be true for the foreseeable future, but most everyone um will be moving there simply because where they actually add value and the people they can hire for and who are interested in that type of problem. I just mean that it's a lot more more sensical to to choose this different delineation but it's not cheaper >>and the commoditization and disintermediation is definitely happening, totally agree. And the complexity that's gonna be abstracted away with software is novell and it's also systematic. There's just it's new and there's some systems involved, so great insight there. I totally agree with you. The disruption is happening majority of almost all areas, so in all verticals and all industries, so so great point. I think this is where I think everyone's so excited and some people are paranoid actually frankly, but we cover that in depth on the Cuban other segments. But great point. We'll get back to what you're where you're spending your time right now. Um You're spending a lot of time on open metrics. What is that enabling take us through that? >>So um the super quick history of Prometheus, of course, we need that for open metrics. 
Promises was actually created in 2012. Um and the wire format which he used to in the exposition format, which he used to transport metrics into Prometheus is stable since 2014. Um But there is a large problem here. Um It carries the promise his name and a lot of competing projects and a lot of competing vendors of course there are vendors which compete with just the project. Um It's simply refused to to to take anything in which carried the promise his name. Of course, this doesn't align with their food um strategy, which they ran back then. So um together with scenes, the f we decided to just have a new different name for just that wire format for the underlying data model for everything which you need to make one complete exposition or a bunch of expositions towards towards permissions. So that's it at the corn, that's been ongoing since 2000 and 15 16 something. Um But there's also changes on the one hand, there is a super careful, a super super careful um Clean up and backwards compatible cleanup of a few things which the permit this exposition former serious here for didn't get right. But also we enable two features within this and as permitted chose open metrics as its official format. We also uplift committees and varying both heads. Obviously it's easier to get the synchronization. Um Ex employers stand out which is a completely new, at least outside of certain large search companies google. Um Who who used who use ex employers to do something different with with their traces. Um it was in 2017 when they told me that for them searching for traces didn't scale by labels. Uh and at that point I wanted to have both. I wanted to have traces and logs also with the same label set as permitting system. But when they tell you searching doesn't scale like they tell you you better listen. So uh the thing is this you have your index where you store all your data or your where you have the reference to enter your database and you have these label sets and they are super efficient and and quite powerful when compared to more traditional systems but they still carry a cost and that cost becomes non trivial at scale. So instead of storing the same labels for your metrics and your logs and your traces, the idea is to just store an I. D. For your trace which is super lightweight and it's literally just one idea. So your index is super tiny. Um And then you touch this information to your logs to your metrics and in the meantime also two year to year logs. Um So you know already that trace has certain properties because historically you have this needle estate problem. You have endless amounts of traces and you need to figure out what are the useful are they are the judicial and interesting aero state highlight and see some error occurring whatever if that information is already attached to your other signals. That's a lot easier. Of course. You see you're highlighting see bucket and you see a trace ID which is for that high latency bucket. So going into that trace, I already know it is a highlight and see trace for for a service which has a high latency, it has visited that labor. It was running this in that context, blah blah blah blah blah. Same for logs. There is an error. There is an exception, maybe a security breach, what have you and I can jump directly into a trace and I have all this mental context and the most expensive part is the humans. 
So enabling that human to not need to break mental uh train of thought to just jump directly from all the established state which they already have here in debugging just right into the trace, went back and just see why that thing behave that way. It's super powerful and it's also a lot cheaper to store this on the back and a four year traces which in our case internally we just run at 100% something. We do not throw data way, which means you don't have the super interesting thing. And by the way the trace just doesn't exist for us a good job. And that's the one thing to to from day one this intent to to marry those three pillars more closely. The other thing is by having a true lingua franca. It gave that concept of of of promises compatibility on the wire, its own name and it's its own distinct concept. And that is something which a lot of people simply attached to. So just by having that name, allow the completely different conversation over the last half decade or so and to close >>them close it >>up and to close that point because I come from the network, from the networking space and, and basically I T f r f C s are the currency within the networking space and how you force your vendors to support something, which is why I brought open metrics into the I. D. F. To to give it an official stamp of approval in Rfc number which is currently hopefully successful. Um So all of a sudden you can slip this into your tender and just tell your vendor, ex wife said okay, you need to support this. But I've seen all of a sudden by contract they're bound to to support communities native. So >>I support that Rfc yet or no, is that still coming? >>I, so at the last uh TF meeting, which was virtual, obviously I presented everything to the L. A W G. Um there was very good feedback. Um they want to adopt it as an informational uh I. D. Reason being it is most or it is a documentation of an already widely existed standard. So it gets different bits and pieces in the heather. Um Currently I'm waiting for a few rounds of feedback on specific wording how to make it more clear and such. Um looking >>good. It's looking good. >>Oh yes while presenting it. They actually told me that I have a conference with promises and performance. Well >>that's how you get things done in the old school internet. That's the way it was talking to Vince serving all of my friends and that generation we grew up, I mean I was telling a story on the clubhouse, just random that I grew up in the era. We used to pirate software used to deal software back in the old days. Pre open source. This is how things get done. So I gotta ask you the impact question. The, the deal with open metrics potentially could disrupt all those startups. So what, how does this impact all these stars because everyone is jockeying for land grabbing the observe ability space? Is that just because it's just too many people competing for one spot or do they all have differentiation? What happens to all those observe ability startups that got minted and funded? >>So I have, I think we have to split this into two answers, the first one open metrics and also Prometheus we're trying really hard to standardize what we're doing and to make this reusable as much as we possibly can um simply because premises itself does not have any any profit motivation or anything, it is just a project run by people. Um so we gain by, by users using our stuff and working in the way, which we think is a good way to operate. 
So anyone who just supports all those open standards, just on boards themselves onto a huge ecosystem of already installed base. And we're talking millions and millions and millions of installations, we don't have hard numbers, but the millions and millions I am certain of and thats installations, not users, so that's several orders of magnitude more. Um, so that that actually enables an ecosystem within which to move as to the second question. It is a super hot topic. So obviously that we see money starts coming in from all right. Um, I don't think that everyone will survive, but that is just how it usually is. There is a lot of of not very differentiated offerings, be the software, be they as a service, be their distributions? Well, you don't really see much much value and not not a lot of, not a lot of much anything in ways of innovation. So this is more about about making it easier to run or or taking that pain away, which obviously makes you open to attack by by all the hyper scale. Of course, they can just do this at a higher scale than you. Um, so unless you actually really in a way in that space and actually shape and lead in that space, at least to some extent, it will probably be relatively hard. That being said. >>Yeah, when you ride, when you ride the big waves like this, I mean, you you got to be on the right side of this. Uh, Pat Gelsinger's when he was that VM Where now is that intel told me on the cube one time. If you're not, you don't get it right on these waves, your driftwood, Right? So, so, you know, and we've seen this movie before, when you start to see the standards bodies like the I E T. F. Start to look at standards. You start to think there's a broader market opportunities, a need for some standards, which is good. It enables more value, right value creation, whether it's out in the open or if it's innovative from a commercialization standpoint, you know, these are good things and then you have everyone who's jockeying around from the land grab incomes, a standard momentum, you gotta be on the right side of these things. We know what we know it's gonna look like. If you're not on the right side of the standard, then your proprietary, >>precisely. >>And so that's the endgame. Okay, well, I really appreciate the impact. Final question. Um, as the world evolved post Covid as cloud Native goes mainstream, the enterprises in the cloud scale are demanding more things. Enterprises are are, you know, they want more stuff than just straight up in the cloud startups, for instance. So you start to see, you know, faster, more agility obviously, uh, with deploying modern apps, when you start getting into enterprise grade scale, you gotta start thinking, you know, this is an engineering and computer science discipline. Coming together, you've got to look at the architecture. What's your future vision of how the next gen programmable infrastructure looks like? >>You mean, as in actually manage those services or limited to observe ability to >>observe ability, role, observe ability. Just you're in the urine. The survivability speaks to the operating system of what's going on, distributed computing you're looking at, you gotta have a good observe ability if you want to deploy services. So, you know, as it evolves and this is not a fringe thing anymore. This is real deal. This observe abilities a key linchpin in the architecture. >>So, um, maybe to approach us from two sides. One of the things which, which, I mean I come from very much non cloud native background. 
One of the things which tends to be overlooked in cloud native is that not everything is green field. Matter of fact, legacy is the code word for makes actual money. Um, so a lot of brownfield installations, which still make money, which we keep making money and all of those existence, they will not go away anytime soon. And as soon as you go to actually industry trying to uplift themselves to industry that foreign, all those passwords you get a lot more complexity in, in just the availability of systems than just the cloud native scheme. So being able to to actually put all of those data types together and not just have you. Okay, nice. I have my micro service events fully instrumented and if anything happens on the layer below, I'm simply unable to make any any effort on debugging um things like for example, Prometheus course they are so widely adopted enable you to literally, and I did this myself um from the Diesel Genset of your data center over the network down to down to the office. If if someone is in there, if if if your station and your pager is is uh stepped in such to the database to the extra service which is facing your end customers, all of those use the same labels that use the same metadata to actually talk about this. So all of a sudden I can really drill down into my data, not only from you. Okay. I have my microservices, my database. Big deal. No, I can actually go down as deep in my infrastructure as my infrastructure is. And this is especially important for anyone who's from the more traditional enterprise because most of them will for the foreseeable future have tons and tons and tons of those installations and the ability to just marry all this data together no matter where it's coming from. Of course you have this lingual franklin, you have these widely adopted open standards. I think that is one of the main drivers in >>jail. I think you just nailed the hybrid and surprised use case, you know, operation at scale and integrating the systems. So great job Richard, thank you so much for coming on. Richard Hartman, Director of community Griffon A labs. I'm talking, observe ability here on the cube. I'm john for your host covering cube con 21 cognitive content. One virtual. Thanks for watching. Mhm Yeah. Mhm.
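To ground the exemplar idea Hartmann describes above — storing just a trace ID alongside a metric sample so a human can jump from a high-latency bucket straight to the trace that caused it — here is a rough sketch with the Prometheus Go client. The header name, metric name, and handler wiring are illustrative assumptions; only the general pattern of `ObserveWithExemplar` plus OpenMetrics exposition is the point.

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestLatency = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "demo_http_request_duration_seconds",
	Help:    "Request latency, with trace IDs attached as exemplars.",
	Buckets: prometheus.DefBuckets,
})

func handle(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	w.Write([]byte("ok"))
	elapsed := time.Since(start).Seconds()

	// Assume some tracing middleware put the current trace ID on the request.
	if traceID := r.Header.Get("X-Trace-Id"); traceID != "" {
		// Attach the trace ID as an exemplar, so a dashboard can jump from a
		// high-latency bucket straight to the trace that landed in it.
		requestLatency.(prometheus.ExemplarObserver).ObserveWithExemplar(
			elapsed, prometheus.Labels{"trace_id": traceID},
		)
	} else {
		requestLatency.Observe(elapsed)
	}
}

func main() {
	http.HandleFunc("/", handle)
	// Exemplars are only exposed via the OpenMetrics text format, so opt in.
	http.Handle("/metrics", promhttp.HandlerFor(
		prometheus.DefaultGatherer,
		promhttp.HandlerOpts{EnableOpenMetrics: true},
	))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Scraped over the OpenMetrics format, each bucket sample can then carry a `# {trace_id="..."}` exemplar that a dashboard such as Grafana can turn into a link to the corresponding trace.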
Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud
>>all right, We're five minutes after the hour. That's all aboard. Who's coming aboard? Welcome everyone to the tutorial track for our launchpad of them. So for the next couple of hours, we've got a SYRIZA videos and experts on hand to answer questions about our new product, Doctor Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirant us on. And >>I'm Bruce Basil Matthews. I'm the Western regional Solutions architect for Moran Tissue esa and welcome to everyone to this lovely launchpad oven event. >>We're lucky to have you with us proof. At least somebody on the call knows something about your enterprise Computer club. Um, speaking of people that know about Dr Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and disgusting problem. So that's us, I guess, for Dr Enterprise Container Cloud, this is Mirant asses brand new product for bootstrapping Doctor Enterprise Kubernetes clusters at scale Anything. The airport Abu's? >>No, just that I think that we're trying Thio. Uh, let's see. Hold on. I think that we're trying Teoh give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing is to provide some, you know, many training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see. The SYRIZA videos we have today. We're going to focus on your first steps with Dr Enterprise Container Cloud from installing it to bootstrapping your regional child clusters so that by the end of the tutorial content today, you're gonna be prepared to spin up your first documentary prize clusters using documented prize container class. So just a little bit of logistics for the session. We're going to run through these tutorials twice. We're gonna do one run through starting seven minutes ago up until I guess it will be ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, ten. Fifteen Pacific time. We're gonna do the whole thing over again. So if you want to see the videos twice, you got public friends and colleagues that, you know you wanna pull in for a second chance to see this stuff, we're gonna do it all. All twice. Yeah, this session. Any any logistics I should add, Bruce that No, >>I think that's that's pretty much what we had to nail down here. But let's zoom dash into those, uh, feature films. >>Let's do Edmonds. And like I said, don't be shy. Feel free to ask questions in the chat or engineers and boosting myself are standing by to answer your questions. So let me just tee up the first video here and walk their cost. Yeah. Mhm. Yes. Sorry. And here we go. So our first video here is gonna be about installing the Doctor Enterprise Container Club Management cluster. So I like to think of the management cluster as like your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use is like, Come on it as clusters downstream. So the management costs was always our first step. Let's jump in there >>now. We have to give this brief little pause >>with no good day video. 
Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. 
Ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region — we'll show that together in a second. OK: access key, secret access key, and off we go. This process takes between thirty and forty-five minutes and handles all the AWS dependencies for you; as we go through, we'll show you how you can track it, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and the local cluster is shut down — essentially moving itself over. The local cluster is built; we're just waiting for the various objects to get ready — standard Kubernetes objects here. We've sped this process up a little bit for demonstration purposes. There we go: the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances in the AWS console on the right. The failures you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. And there we go: the bastion host has been built, and three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. You can see the scaling up of controllers in the bootstrap cluster — that indicates we're starting all of the controllers in the new cluster. Almost there; just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight, the monitoring tool set, into the new cluster. There we go — StackLight deployment has started. We're coming to the end of the deployment, the final phase, and we are done. You'll see at the end they provide the details for the UI login; there's a Keycloak login, and you can modify the initial default password as part of the configuration setup, per the documentation. The console is up, and we can log in. Thank you very much for watching. >>Excellent. So in that video our wonderful field CTO, Sean O'Mara, bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? Now we've got this management cluster installed — what's next? >>Primarily it's the foundation for being able to deploy regional clusters, which will then allow you to support child clusters. What comes into play in the next piece we're going to show — I think with Sean O'Mara doing this — is the child cluster capability, which allows you to deploy your application services on a local cluster that's being managed by the management cluster we just created with the bootstrap. >>Right. So this cluster isn't yet for workloads; this is just for bootstrapping up the downstream clusters. Those are what we're going to use for workloads. >>Exactly. Yeah.
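For readers following along, the sequence Sean just ran looks roughly like this on the command line. This is a sketch only: the script and folder names are approximated from the demo narration, the URL and credential values are placeholders, and the documentation linked in the chat is authoritative.

```bash
# Sketch of the management-cluster bootstrap (names approximated from the demo).
# 1. On a clean bootstrap node, download and run the release script
wget https://binary.mirantis.com/releases/get_container_cloud.sh   # illustrative URL
chmod +x get_container_cloud.sh && ./get_container_cloud.sh
cd kaas-bootstrap/

# 2. Drop in the license file downloaded from the Mirantis portal
cp ~/Downloads/mirantis.lic .

# 3. Export AWS credentials for the bootstrap user (placeholder values)
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-west-2

# 4. Kick off the deployment; expect roughly 30-45 minutes end to end
./bootstrap.sh all   # exact arguments per the product docs
```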
And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting. Anyway, you can tell him I said that. >>He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrap node that you run Docker Enterprise Container Cloud from to begin with is actually creating a kind Kubernetes-in-Docker deployment locally. That then hits the AWS API — in this example — to make those EC2 instances, and it makes a three-manager Kubernetes cluster there, and then it copies itself over to those Kubernetes managers. >>Yeah, and that's where the transition happens. You can actually see it in the output when it says it's pivoting — pivoting from the local kind deployment of the Cluster API to the cluster that's being created inside of AWS, or, quite frankly, inside of OpenStack, or on bare metal. The targeting is abstracted. >>And those are the three environments we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrap node go away afterwards? You don't need it afterwards — it's just temporary, to get things bootstrapped, and then you manage things from the management cluster on AWS in this example? >>Yeah. The seed cloud that hosts the bootstrap is not required anymore, and there's no interplay between them after that, so there are no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day: via a temporary container that would bootstrap all the other containers and then go away. So it's a similar temporary, transient bootstrapping model. Cool. Excellent. What about the config there? It looked like there wasn't a ton, right? It looked like you had to set up some AWS parameters like credentials and region and things like that, but other than that it looked heavily scriptable — there wasn't a ton of point-and-click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file — the template that's generated — is fairly straightforward and targeted towards a small, medium, or large deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, it makes it fairly easy to script this. >>And if I understood correctly as well, that three-manager footprint for the management cluster is the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to see that go down. >>Right, right. And there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer. >>Yeah, and I think that's a theme we'll come back to throughout the rest of this tutorial session: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you — just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on.
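A quick sanity check once the pivot completes, using nothing but standard kubectl against the kubeconfig the bootstrap leaves behind (the path shown here is a guess; use whatever your bootstrap run produced):

```bash
# Point kubectl at the newly created management cluster
export KUBECONFIG=~/kaas-bootstrap/kubeconfig   # path is an assumption

# Expect exactly three manager nodes, all Ready -- HA is mandatory for this cluster
kubectl get nodes -o wide

# The platform's own controllers run as ordinary Kubernetes workloads on it
kubectl get pods --all-namespaces
```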
Any interesting questions you want to call out from the chat, Bruce? >>Well, there was one that we had responded to earlier about the fact that it's a management cluster that then drives either a regional cluster or a local child cluster. The child clusters, in each case, host the application services. >>Right. So at this point we've got, in some sense, the simplest architecture for Docker Enterprise Container Cloud: we've got the management cluster, and we're going to go straight to a child cluster. In a later video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two — regional clusters — for when you need to manage regions, like across AWS regions, with these Docker Enterprise deployments. >>Yeah, and that local support for the child clusters makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability and support systems — StackLight and things like that — for each one of the clusters locally, as opposed to having to centralize them. >>So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves — I strongly encourage you to do so. That's all in the docs, which I think Dale helpfully — thank you, Dale — provided links for. That's all publicly available right now, so just head on into the docs via the links Dale provided here. You can follow this example yourself; all you need is a Mirantis license and your AWS credentials. There was a question here about deploying this to Azure: not at GA, not at this time. >>Yeah, although that is coming. That's going to be in a very near-term release. >>I didn't want to make promises for product, but I'm not too surprised that Azure is going to be targeted very soon. Cool. Okay. Any other thoughts on this one, Bruce? >>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that was put into the chat, giving you the step-by-step, it makes it fairly straightforward to try this yourselves. >>I strongly encourage that — that's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in mind of where we are in the lifecycle here. Creating this management cluster — stop me if I'm wrong, Bruce — is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to see next is creating child clusters, and that's what you're going to be doing over and over again: when you need to create a cluster for this dev team, or this other team — whoever it is that needs commodity Docker Enterprise clusters — you create these easily and at will. So: set up Docker Enterprise Container Cloud once; child clusters, which we're going to see next, you'll do over and over again. Let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster under Docker Enterprise Container Cloud. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster when a new version is available.
We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI: you can switch projects — Mary only has access to Development — and get a list of the available projects that you have access to; what clusters have been deployed (at the moment there are none); the SSH keys associated with Mary and her team; the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to; and finally the different releases that are available to us. We can also switch from dark mode to light mode, depending on your preference. Right — let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste the public key into the upload key block, or upload the key if we have the file available on the local machine. A simple process. So, to create a new cluster, we define the cluster, add management nodes, and add worker nodes. Again, very simply: we go to the Clusters tab and hit the Create Cluster button. Give the cluster a name and select the provider — we only have access to AWS in this particular deployment, so we'll stick to AWS. Select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR or IP address information — we can change this should we wish to, but we'll leave the defaults for now — and then choose which StackLight components I would like to deploy into my cluster. For this, I'm enabling StackLight and logging; I can set up the retention sizes and retention times, and even at this stage add any custom alerts for the watchdogs, set up email alerting — for which I will need my smart host details and authentication details — and Slack alerts. Now I've defined the cluster. All that's happened is the cluster has been defined; I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. Select Manager, select the number of machines — three is the minimum — select the instance size I'd like to use from AWS, and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go — my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time selecting Worker. I'll just add two. Once again, the AMI is extremely important — it will fail if we don't pick the right AMI for an Ubuntu machine in this case — and the deployment has started. We can go and check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" — the cluster is still in the process of being built. Clicking on the events, we get a list of actions that have been completed as part of the setup of the cluster. You can see here we've created the VPC, we've created the subnets, and we've created the internet gateway and the necessary NAT gateways, with no warnings at this stage. This will then run for a while. One minute in, we can click through and check the status of the machine builds individually, so we can check the machine info — details of the machines that we've assigned —
and see any events pertaining to the machine, like this one, which is normal: the Kubernetes components are waiting for the machines to start. Going back to Clusters, we can see it's in progress. Five minutes in, there's the new NAT gateway, and at this stage the machines have been built and assigned IPs on the AWS side. There we go — a machine has been created; we can see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process end to end takes about fifteen minutes. Running the clock forward, you'll notice that as the machines continue to build they go from "in progress" to "ready." As soon as we've got "ready" on all three managers and both workers, we can go on, and we can see that we've now reached the point where the cluster itself is being configured. And there we go — the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. Clicking into Configure Cluster, we can modify the cluster, and we can get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it — again, the three little dots on the right for that particular cluster. I hit Download Kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right — now that the build is fully completed, we can check the cluster info and see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. If we click into the cluster, we can access the UCP dashboard. Click the "Sign in with Keycloak" button to use the SSO, and we give Mary's password and username once again. This is an unlicensed cluster — we could license it at this point, or just skip it — and there we have the UCP dashboard. You can see it has been up for a little while and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has been automatically preconfigured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster — for example Kubernetes cluster information, namespaces, deployments, nodes. If we look at nodes, we get a view of the resource utilization (this cluster has very little running in it), plus a general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs, or add your own dashboards, scoped to the cluster so they are available to all users who have access to this specific cluster. All right — to scale the cluster and add a node: it's as simple as the process of adding a node in the first place. We go to the cluster, go into the details, and select Create Machine. Once again, we need to ensure that we put the correct AMI in, plus any other options we like — you can create different-sized machines, so it could be a larger node or bigger disks — and you'll see that a worker has been added in the provisioning state; shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster and select the node we would like to remove.
I just hit Delete on that node, and the worker node will be removed from the cluster using a cordon and drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the Update button in the menu for that particular cluster becomes available, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the next available release is 5.7.1. I kick off the update; in the background we cordon and drain each node and slowly go through the process of updating it, and the update completes, depending on what the update is, as quickly as possible. There we go — the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt — in fact two in this case, one of which has completed already — and in a few minutes we'll see that the upgrade has been completed. There we go: upgrade done. If your workloads are built using proper cloud native Kubernetes standards, there will be no impact. >>Excellent. So at this point we've got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to it. Watching that video, the thing that jumped out to me first was the inputs that go into defining this workload cluster. We have to make sure we're using an appropriate AMI — that kind of defines the substrate we're going to be deploying our cluster on top of — but there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is going to bootstrap all the components you need. So all we have is a really simple base box that we're deploying these things on top of. One thing that didn't get dug into too much in the video, but is sort of implied — Bruce, maybe you can comment on this — is that release Sean had to choose for his cluster when creating it. And that release was also the thing we had to touch when we wanted to upgrade the cluster. If you had really sharp eyes, you could see at the end there that when you're doing the release upgrade it listed out a stack of components — Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters we deploy. As far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that we've tested out and made sure works well together in production environments. >>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that fixes exist, are documented, and are upstreamed to the open source community, and that we then test for scaling ability and reliability in a high-availability configuration for the clusters themselves — the hosts of your containers. And I think one of the key benefits we provide is the ability to let you know, online: hey, we've got an update for you, and it fixes something that maybe you had asked us to fix. That all comes to you online as you're managing your clusters, so you don't have to think about it. It just comes as part of the product.
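If you want to watch a release update from the command line rather than the UI, standard Kubernetes tooling is enough; nothing here is product-specific:

```bash
# Nodes show SchedulingDisabled while they are cordoned and drained, one at a time
kubectl get nodes --watch

# Once the rolling update finishes, the per-node kubelet and runtime versions
# confirm that the new validated release landed everywhere
kubectl get nodes -o custom-columns=\
NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```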
You just have to click "yes, please give me that update." And it's not just the individual components — again, it's that validated stack, right? Not just that components X, Y, and Z work, but that they all work together effectively, scalably, securely, reliably. Cool. So, once we started creating that workload child cluster, of course we bootstrapped good old Universal Control Plane — Docker Enterprise — on top of it. Sean had the classic comment there: yeah, you'll see a few warnings and errors when you're setting up UCP — don't panic, just let it do its job, and it will converge all its components after just a minute or two. We sped things up a little bit in that video — we didn't wait for the progress spinners to complete — but in real life that whole process of spinning up one of those clusters is still quite quick. >>Yeah, and I think the thoroughness with which it goes through its process and retries — as was evident when we went through the initial bootstrapping video as well — means the processes themselves are self-healing as they go. They will try and retry and wait for the event to complete properly, and once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down — don't do that. Just let it heal, let it take care of itself. That's the beauty of these managed solutions: they bake in a lot of subject matter expertise. The decisions being made by those containers as they bootstrap themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools for years and years now. One cool thing there that I really appreciate, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. Docker Enterprise, as I think everyone knows, has had some very high-level statistics baked into its dashboard for years, but our customers always wanted to double-click on that — to be able to go a little bit deeper — and Grafana really addresses that with its built-in dashboards. That's what's really nice to see. >>Yeah, and all of the alerts and data are actually captured in a Prometheus database underlying that, which you have access to, so you're able to add new alerts that then go out to, say, Slack and say: hi, you need to watch your disk space on this machine — those kinds of things. And this is especially helpful for folks who want to manage the application service layer but don't necessarily want to manage the operations side of the house. It gives them a tool set where they can easily say, "here, can you watch these for us?" — and Mirantis can actually help do that with you. >>Yeah, that's just another example of baking in expert knowledge so you can leverage it without a long runway of learning how to do that sort of thing — you get it out of the box right away. There was one other thing, actually, that you could sleep right past if you weren't paying close attention, but Sean mentioned it in the video: when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just tear the worker down and forget about it, right?
It's using good Kubernetes best practices to cordon and drain the node, so you aren't going to disrupt your workloads — you're not going to have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; it's baked right into how Docker Enterprise Container Cloud handles cluster scaling. >>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques that ensure it will tell you: wait, you've got a container that actually needs three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two — and we can't allow that. >>Okay, very cool. Further thoughts on this video, or should we go to the questions? >>Let's go to the questions that people have. >>There's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos we're clicking through this web UI — yes, this is all API-driven. You can do all of this, automate all of this away as part of your CI/CD pipeline. Absolutely — that's kind of the point, right? We want you to be able to spin up — I keep calling them commodity clusters; what I mean by that is clusters you can create and throw away easily and automatically. So everything you see in these demos is exposed via API (there's a rough sketch of that just after this exchange). >>Yeah, and in addition, through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to set things up and deploy your applications, you can use the standard tool sets that are available to accomplish that. >>There's a good question on scale here: just how many clusters, and what sort of scale of deployments, can this support? Our engineers report that in practice we've done up to as many as two hundred clusters, and we've deployed two hundred fifty nodes in a cluster. So we're talking hundreds of nodes and hundreds of clusters managed by Docker Enterprise Container Cloud. Those downstream clusters are, of course, subject to the usual constraints for Kubernetes — like the default constraint of roughly one hundred pods per node. There are a few limitations on how many pods you can run on a given cluster that come not from Docker Enterprise Container Cloud but from the underlying Kubernetes distribution. >>Yeah, I mean, I don't think we constrain any of the capabilities that are available in the infrastructure delivery service within the Kubernetes framework. But we are adhering to the standards we would want to set, to make sure we're not overloading a node — those kinds of things. >>Right. Absolutely. Cool. All right, so at this point we've got kind of a two-layered architecture: we have our management cluster, which we deployed in the first video, and then we used that to deploy one child cluster for workloads. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. The idea is that you have the single management cluster that we started with in the first video.
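Before the regional-cluster video, one aside on the API point raised above: Container Cloud pivots a Cluster API-style control plane into the management cluster, so it's reasonable to expect the cluster and machine objects to be visible with plain kubectl. The resource names below are an assumption on my part, not taken from the demo, so treat this strictly as a sketch:

```bash
# Against the management cluster's kubeconfig (path is an assumption)
export KUBECONFIG=~/kaas-bootstrap/kubeconfig

# Cluster API-style objects: one Cluster per child/regional cluster, one Machine per node
kubectl get clusters --all-namespaces   # resource names assumed; check `kubectl api-resources`
kubectl get machines --all-namespaces

# Anything the UI does could therefore be scripted against the same objects
```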
In the next video, we're going to learn how to spin up regional clusters, each of which would manage, for example, a different AWS region. So let me just pull up the video for that, and we'll check it out. >>Hello. In this demo, we will cover the deployment of an additional regional management cluster. We'll include a brief architectural view, how to set up the management environment, prepare for the deployment, a deployment overview, and then, just to prove it, deploy a regional child cluster. Looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture for the provider — in this case AWS — plus the LCM components and the UCP cluster, and the child cluster is the cluster or clusters being deployed and managed. So why do you need a regional cluster? For different platform architectures — for example AWS, OpenStack, even bare metal — to simplify connectivity across multiple regions, to handle complexities like VPNs or one-way connectivity through firewalls, and also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with its components, including items like the LCM cluster manager and machine manager; Helm bundles are managed here as well, as is the actual provider logic. Okay, we'll begin by logging on as the default administrative user, writer. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller — and you can see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. As a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node — preferably the same node that was used to create the original management cluster. In this case it's just an AWS virtual machine. A few things we have to do to make sure the environment is ready: first, we go into root and into our releases folder, where we have the kaas-bootstrap directory — the original bootstrap used to build the original management cluster. We double-check that our kubeconfig is there — the one created after the original cluster was built — and confirm that it points to the management cluster. We're also checking that we can reach the images and that everything is working and accessible. Next, we edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI; that's found under the templates/aws directory. We don't need to edit anything else here, though we could change items like the size of the machines if we wanted to. The key item to ensure is that you change the AMI reference to the Ubuntu image for the region — in this case the AWS region we're utilizing. If this were an OpenStack deployment, we would have to make sure we're pointing at the correct OpenStack images.
We set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're exporting the AWS access key and ID — and what's important is that KAAS_AWS_ENABLED is set to true. Now we're setting the region for the new regional cluster — in this case, Frankfurt — and exporting the kubeconfig that we want to use for the management cluster, the one we looked at earlier. Then we export the name we want to call the cluster region: it's Frankfurt, so we call it Frankfurt — try to use something descriptive that's easy to identify. After this, we just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster — there are fewer components to be deployed — but to make it watchable, we've sped it up. So: we're preparing our bootstrap cluster on the local bootstrap node, almost ready; we've started preparing the instances at AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines — they're now provisioning — and we've reached the point where they're actually starting to deploy Docker Enterprise. This is probably the longest phase. You'll see in a second that all the nodes go from "prepare" to "deployed": there's the first node ready, the second applying, then ready. Now we're waiting for the control plane to become ready, then moving the management cluster from the bootstrap instance into the new cluster running in AWS. Now we're deploying StackLight, the switchover is done, and — done. Now I'll build a child cluster in the new region, very quickly, to prove it works. We define the cluster — our new credential has shown up; we'll just call it Frankfurt for simplicity — add a key, and define the machines for that cluster: start with three managers, set the correct AMI for the region, and do the same to add workers. There we go — the cluster is building. Total build time should be about fifteen minutes; you can see it's in progress, and we'll speed this up a little bit. Checking the events, we've created all the dependencies and machine instances; the machines build, and shortly we should have a working cluster in the Frankfurt region. Now one node is ready, two in progress — and we're done. The cluster is up and running. >>Excellent. So at this point we've now got that three-tier structure that we talked about before the video: we've got the management cluster that we bootstrapped in the first video, and now we have, in this example, two different regional clusters — one in Frankfurt and one where the management cluster lives — in two different AWS regions. And sitting on those, you can bootstrap up all the Docker Enterprise clusters that we want for our workloads. >>Yeah, that's the key to this: having the management co-resident with your actual application-service-enabled clusters, so that you can quickly access the observability and dashboard services, like Grafana and that sort of thing, for your particular region — as opposed to having to log back into the home... what did you call it when we started? >>The mothership. >>The mothership, right. So we don't have to go back to the mothership; we can get it all locally.
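For reference, the environment Sean set up before re-running the bootstrap for the Frankfurt regional cluster looks roughly like this. The variable names are transcribed from the demo audio and may not match the documentation exactly; values are placeholders:

```bash
cd ~/kaas-bootstrap/                      # same folder used for the original bootstrap

# Target the existing management cluster
export KUBECONFIG=~/kaas-bootstrap/kubeconfig

# AWS credentials and the provider toggle heard in the demo (names approximate)
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export KAAS_AWS_ENABLED=true

# New target region and a descriptive name for the regional cluster
export AWS_DEFAULT_REGION=eu-central-1
export REGIONAL_CLUSTER_NAME=kaas-region-frankfurt   # variable name is a guess

# Re-run the bootstrap; the demo simply runs the bootstrap script again
./bootstrap.sh   # exact arguments per the product docs
```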
>>Yeah. And to that point of aggregating things under a single pane of glass: that's one thing that kind of sailed by in the demo really quickly, but you'll notice all your different clusters were in that same Clusters pane in your Docker Enterprise Container Cloud management console — both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place. So it's just one pane of glass to look at for all of your clusters. >>Right. And this is kind of an important point I was realizing as we were going through this: the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer for everything, so you only have managers — you don't have workers — and it's at the child cluster layer, below the regional or management cluster itself, that you have the worker nodes. Those are the ones that host the application services in the three-tiered architecture we've now defined. >>And another detail for those with sharp eyes: in that video, you'll notice that when deploying a child cluster there's not only a minimum of three managers for a high-availability management plane — you must have at least two workers as well. That's required for workload failover: if one of those goes down, the other can potentially step in. So the minimum footprint for one of these child clusters is five nodes, and it's scalable from there, obviously. >>That's right. >>Let's take a quick peek at the questions here and see if there's anything we want to call out before we move on to my last video. There's a question here about where these clusters can live. Again, I know these examples are very AWS-heavy — honestly, it's just easy to set up demos on AWS — but we can do things on bare metal and in OpenStack deployments on-prem, and all of this still works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the provisioners, right? You establish an AWS provisioner, or a bare metal provisioner, or an OpenStack provisioner — and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where the child cluster is to be hosted. >>Speaking of where to host child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with a performance boost of up to thirty percent. It provides direct access to GPUs, prized for high-performance workloads like machine learning and AI, and supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing the theme of why Kubernetes on bare metal: again, no hypervisor and no virtualization overhead.
There's direct access to hardware items like FPGAs and GPUs; we can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization and scheduling better; and we increase the performance and simplicity of the entire environment, since we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project and add the bare metal hosts, including the host name, IPMI credentials, IPMI IP address, and MAC address, and then provide a machine type label to determine what type of machine it is for later use. Let's get started. Logged in again as the operator, we'll create a project for our machines to be members of — that helps with scoping later on, for security — and then begin the process of adding machines to that project. So the first thing is to add a host: give the machine a name (anything you want — here it's an experimental host), provide the IPMI username, type the password, add the MAC address for the common boot interface, and then the IPMI IP address. These machines will be, in turn, storage, worker, or manager; this one is a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like — in the future, better discovery will be added to the product. Getting back, there we have it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node: you can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. We're going to deploy a bare metal child cluster, and the process is pretty much the same as for any other child cluster. So we create the cluster and give it a name, but this time we're selecting bare metal as the provider, along with the region. We select the version we want to deploy and add the SSH keys. We give the load balancer host IP that we'd like to use out of the address range, and update the address range that we want to use for the cluster. We check that the CIDR blocks for the Kubernetes services and tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. Then, as for any other cluster, we need to add machines to it. Here we're focused on building a Kubernetes cluster, so we set the count of machines we want as managers, pick the label type "manager," and create three machines as the managers for the Kubernetes cluster. Then we add workers the same way — it's the same process, just making sure that the worker label and host label are correct — and then we wait for the machines to deploy. We go through the process of putting the operating system on the nodes, validating the operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics like storage, and of course details of the cluster, et cetera. We can now watch the machines go through the various stages from "prepared" to "deployed" while the cluster builds. And that brings us to the end of this particular demo; you can see the process is identical to that of building a normal child cluster, and our deployment is complete.
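Before pasting IPMI details into the UI for each bare metal host, it's worth confirming that the out-of-band credentials actually work and grabbing the boot interface's MAC address. Both checks use standard tooling; the addresses and credentials below are placeholders:

```bash
# Confirm IPMI reachability and credentials for the host's BMC
ipmitool -I lanplus -H 10.0.0.21 -U admin -P 'secret' power status

# Read the MAC address of the PXE/boot interface on the host itself
cat /sys/class/net/eth0/address
```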
>>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess the biggest difference, step-wise, is that registration phase first, right? Rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles: you've got your manager profile, worker profile, and storage profile, which get labeled and allocated across the cluster as appropriate. >>Right. And I think the key differentiator here is that you have more physical control over the attributes of a physical server — I love your cat, by the way — so you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, that the GPUs are on the worker nodes, and that the management layer is going to have sufficient horsepower to scale up the environments as required. One of the things I wanted to mention, though — if I can get this out without choking; much better — is that he mentioned the load balancer, and I wanted to make sure, in defining the load balancer and the load balancer ranges: that is for the top of the cluster itself. That's for the operations of the management layer integrating with your systems internally, to be able to access the kubeconfigs and the API IP address in a centralized way. It's not the load balancer that's working within the Kubernetes cluster you are deploying — that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting step: your initial step in building this — and we typically use things like MetalLB or NGINX or that kind of thing — is to establish that before we deploy this bare metal cluster, so that it can ride on top of it for the VIPs and things. >>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management and regional clusters to our child clusters on AWS and bare metal — with OpenStack, of course, also available. Closing thoughts before we take a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you, and I'm really looking forward to doing it the second time, because we've got a good rhythm going. But I think the key element of what we're trying to convey to the folks out there in the audience — what I hope you've gotten out of it — is that this is an easy enough process that if you follow the step-by-steps, going through the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it: you can do it in AWS, as we've shown you today. And if you've got some fancy use cases — like you need Hadoop, or cloud-oriented AI stuff — then providing a bare metal service helps you get there very fast. So, right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out.
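Picking up Bruce's point about establishing MetalLB (or similar) before deploying the bare metal cluster: assuming MetalLB itself is already installed, a minimal layer-2 address pool in the legacy (pre-0.13) ConfigMap format looked roughly like this. The address range is a placeholder, and whether you use MetalLB at all is an environment choice, not a product requirement:

```bash
# Minimal MetalLB layer-2 pool (legacy ConfigMap format); range is a placeholder
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.200-10.0.0.220
EOF
```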
So, like I said, we're going to take a very short, three-minute break here. Take the opportunity to let your colleagues know — if they were in another session, or didn't quite make it to the beginning of this one, or if you just want to see these demos again — that we're going to kick off this demo series again in just three minutes, at 10:25 a.m. Pacific time, where we will see all this great stuff again. Let's take a three-minute break; I'll see you all back here shortly. Okay, folks, that's the end of our extremely short break. We'll give people maybe one more minute to trickle in if folks are interested in jumping into our demo series again. So, for those of you just joining us now: I'm Bill Mills, I head up curriculum development for the training team here at Mirantis, and joining me for this session of demos is Bruce. Do you want to go ahead and introduce yourself, Bruce — who is still on break? That's cool; we'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? >>Very well. So let's kick off our second session here — let me just get things set up over here. >>All right. Hi, Bruce Matthews here. I'm the Western regional solutions architect for Mirantis USA. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, Bill, take it away. >>Excellent. So over the next hour or so we've got a series of demos that's going to walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis's brand new offering for bootstrapping Kubernetes clusters on AWS, bare metal, and OpenStack — with more providers in the very near future. We've got just over an hour left together in this session; if you joined us at the top of the hour back at 9 a.m. Pacific, we went through these demos once already. Let's do them again for everyone who was only able to jump in right now. Let's go to our first video, where we'll install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership that's going to spin up all the other Kubernetes clusters — the Docker Enterprise clusters that we'll run our workloads on. So I'm going to — >>I'm so excited. I can hardly wait. >>Let's do it. All right, let me share my video out here, and here we go. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture for the provider — in this case AWS — and the LCM components and the UCP cluster; the child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases: preparing a bootstrap node and its dependencies, including the download of the bootstrap tools; obtaining a Mirantis license file; preparing the AWS credentials and environment; configuring the deployment, defining things like the machine types; and running the bootstrap script and waiting for the deployment to complete. So here we're setting up the bootstrap node,
just checking that it's clean and clear and ready to go, with no credentials already set up on that particular node. As in the first run-through: we check through AWS that the account we want to use has the correct credentials and roles, and that there are no instances currently running in EC2 — not strictly necessary, but it keeps things tidy from an IAM perspective. We verify that the bootstrap node can reach Mirantis and the repositories where the various components of the system are available — good, no errors. Then we set up the bootstrap node itself: download the KaaS release script, run it, and change into the kaas-bootstrap folder. We have no license file yet, so we sign up on the Mirantis downloads site, download the license file, and put it into the kaas-bootstrap folder. We check again that we can reach EC2, which is extremely important for the deployment — just validation steps as we move through the process. Next we validate all of our AWS credentials: we export the root credentials on the command line, which are used to create the necessary bootstrap user and AWS credentials for completing the deployment, and then run the AWS policy creation, which creates the bootstrap user and the necessary policy files on the AWS side using a CloudFormation script. Once that completes, we go to the IAM console, open the new user, create new keys under security credentials, download the access key ID and secret access key, and export them on the command line. A couple of things to note: ensure you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region. With the access key and secret key exported, we kick it off. This process takes between thirty and forty-five minutes and handles all the AWS dependencies for you. The first phase, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node; that cluster is then used to deploy and manage all the various instances and configurations within AWS, and at the end of the process it is copied into the new cluster on AWS and the local cluster is shut down — essentially moving itself over. The local cluster is built, and we're just waiting for the various objects — standard Kubernetes objects — to get ready.
We've sped this process up a little bit just for demonstration purposes. There we go: the first node being built is the bastion host, just a jump box that will allow us access to the entire environment; in a few seconds we'll see those instances in the AWS console on the right. The failures you're seeing around "failed to get the IP for bastion" are just the wait state while AWS creates the instance. The bastion host has been built, and the three instances for the management cluster have now been created; we prepare those nodes and copy everything over. The scaling up of controllers in the bootstrap cluster indicates that we're starting all of the controllers in the new cluster. Almost there — just waiting for Keycloak to finish up. Now we shut down the controllers on the local bootstrap node and prepare our OIDC configuration for authentication. Once this is completed, the last phase is to deploy StackLight, the logging and monitoring tool set, into the new cluster. StackLight deployment has started, we're coming to the end of the deployment, and we are done. At the end they provide the details for the UI login — there's a Keycloak login, and you can modify the initial default password as part of the configuration setup, per the documentation. The console is up, and we can log in. Thank you very much for watching. >>All right, so at this point what we have is our management cluster spun up, ready to start creating work clusters. Just a couple of points to clarify there, to make sure everyone caught it: as advertised, that's the Docker Enterprise Container Cloud management cluster. That's not where workloads are going to go, right? That is the tool you'll use to start spinning up downstream commodity Docker Enterprise clusters for your workloads. >>And the seed host that we were talking about — the kind cluster — actually doesn't have to exist after the bootstrap succeeds. It sort of copies itself from the seed host to the targets in AWS, spins them up, boots the actual clusters, and then it goes away, because it's no longer necessary. >>So on that bootstrap node there aren't really any requirements, hardly, right? It just has to be able to reach AWS — to hit that API to spin up those EC2 instances — because, as you just said, it's just a Kubernetes-in-Docker cluster, and that bootstrap node just gets torn down after the setup finishes; you no longer need it. Everything you do from then on, you drive from the single pane of glass provided to you by your management cluster in Docker Enterprise Container Cloud. Another thing I think is sort of interesting there is that the config is fairly minimal, really: you just need to provide it with things like the AWS region and the AMI, and that's what it uses to spin up that management cluster. >>Right. There is a YAML file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have defaults set, but you then have the option of going in and defining a different AMI for a different region, for example, or a different size of instance from AWS.
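On Bruce's point about the template with defaults: in the demo, the only edits made before deploying were the AMI and, optionally, the instance size, in a machine template under the bootstrap folder. The path and field patterns below are approximations from the narration, not the documented schema:

```bash
# Machine templates live under the bootstrap folder (path approximated from the demo)
cd ~/kaas-bootstrap/templates/aws/

# Typical edits: the AMI for your region and the instance size; field names may differ
grep -n -i -E 'ami|instance' machines.yaml.template

# Example: swap in an Ubuntu AMI for the target region (AMI ID is a placeholder)
sed -i 's/ami-[0-9a-f]\+/ami-0123456789abcdef0/' machines.yaml.template
```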
>>One thing that people often ask about is the cluster footprint. In the example you saw, it was spinning up a three-manager management cluster — and that's mandatory, right? No single-manager setup at all; we want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage we're at right now: this is the very first thing you do to set up Docker Enterprise Container Cloud, and you do it, hopefully, exactly once. Now you've got your management cluster running, and you'll use that to spin up all your other work clusters day to day, as needed. Why don't we have a quick look at the questions, and then let's look at spinning up some of those child clusters. >>Okay — I think they've actually all been answered. >>Yeah, for the most part. One thing I'll point out, which Dale helpfully noted earlier and has pointed out again, is that if you want to try any of this yourself, it's all in the docs. Have a look at the chat: there are links to step-by-step instructions for each and every thing we're doing here today. I really encourage you to do that — taking this out for a drive on your own really helps internalize these ideas. After Launchpad today, please give this stuff a try on your own machines. Okay. So at this point, like I said, we've got our management cluster. We're not going to run workloads there; we're going to start creating child clusters, and that's where all of our workloads will go. That's what we're going to learn how to do in our next video — cue that up for us. >>I so love Sean's voice. >>Don't we all? >>Yeah, I could watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster when a new version is available. As in the earlier run-through, we begin by logging onto the UI as a normal user called Mary and walking through the navigation: switching projects (Mary only has access to Development), the list of available projects, the clusters that have been deployed, the SSH keys associated with Mary and her team, the cloud credentials that allow you to create or access the various clouds you can deploy clusters to, and the different releases available. We set up an SSH key for Mary — add a key, give it a name, and copy and paste the public key or upload the key file — so she can access the nodes and machines. Then, to create a new cluster, we define the cluster, add management nodes, and add worker nodes: go to the Clusters tab, hit Create Cluster, give the cluster a name, select the provider (AWS is the only one available in this deployment), select the region (US West 1), pick release version 5.7 — the current release — and attach Mary's key as the SSH key.
We can then check the rest of the settings, confirming the provider and the Kubernetes CIDR and IP address information. We can change this should we wish to; we'll leave the defaults for now. Then we choose which components of StackLight we would like to deploy into the cluster. For this one I'm enabling StackLight with logging, and I can set the retention sizes and retention times, and even at this stage add any custom alerts for the watchdog. There's email alerting, for which I will need my SMTP smart host details and authentication details, and Slack alerts. Now the cluster is defined, and at this point that's all that's happened: it has only been defined. I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. Select Manager, select the number of machines (three is the minimum), select the instance size I'd like to use from AWS, and, very importantly, make sure I use the correct AMI for the region. I can also decide on the root device size. There we go, my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time selecting Worker. I'll just add two. Once again, the AMI is extremely important: the deployment will fail if we don't pick the right AMI for the machine image in this case. The deployment has started, and we can check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see it listed as pending; the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. You can see here that we've created the VPC, we've created the subnets, and we've created the Internet gateway and the other necessary AWS resources, and we have no warnings at this stage. Okay, this then runs for a while. We're one minute in; we can click through and check the status of the machine builds individually. We can check the machine info, the details of the machines we've assigned, and see any events pertaining to each machine. Errors like this one are normal: the Kubernetes components are simply waiting for the machines to start. Go back to the clusters. Okay, we're moving ahead now; we can see it's in progress. Five minutes in, there's the new NAT gateway, and at this stage the machines have been built and assigned their IPs from AWS. There we go, a machine has been created; you can see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process end to end takes about fifteen minutes. If we run the clock forward, you'll notice that as the machines continue to build they go from In Progress to Ready. As soon as all three managers and both workers are Ready, we reach the point where the cluster itself is configured, and there we go: the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. We can look into the cluster configuration, we can modify the cluster, and we can get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it at this stage.
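Because picking the right AMI for the region comes up repeatedly in the machine-creation step above, here is a rough sketch of how you might look up a suitable Ubuntu AMI programmatically with boto3 before filling in the machine form. The owner ID and name filter are the commonly used values for Canonical's public Ubuntu images, but treat them, and the specific Ubuntu release shown, as assumptions to be checked against your own account and the product documentation.

```python
# Sketch: find a recent Ubuntu AMI in the target region with boto3.
# The owner ID and name pattern are assumptions based on Canonical's
# public images; verify the supported base image in the product docs.
import boto3

REGION = "us-west-1"  # the region chosen in the UI above

ec2 = boto3.client("ec2", region_name=REGION)
resp = ec2.describe_images(
    Owners=["099720109477"],  # Canonical's public AWS account
    Filters=[
        {"Name": "name",
         "Values": ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]},
        {"Name": "state", "Values": ["available"]},
    ],
)

# Newest image first; this is the AMI ID you would paste into the machine form.
latest = max(resp["Images"], key=lambda img: img["CreationDate"])
print(REGION, latest["ImageId"], latest["Name"])
```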
To download the kubeconfig so that I can put workloads on the cluster, it's again the three little dots on the right for that particular cluster. I click Download Kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check the cluster info and see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. If we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use SSO, and we give Mary's username and password once again. This is an unlicensed cluster; we could license it at this point, or just skip that, and we have the UCP dashboard. You can see it has been up for a little while and there's some data on the dashboard. Going back to the console, we can now go to Grafana. It has been automatically pre-configured for us, and we can switch between and use a number of different dashboards that have already been instrumented within the cluster, for example Kubernetes cluster information, namespaces, deployments, and nodes. If we look at nodes, we get a view of the resource utilization of this cluster; there's very little running in it right now. There's also a general Kubernetes cluster dashboard. All of this is configurable: you can modify these for your own needs or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to that specific cluster. All right, scaling the cluster and adding a node is as simple as the process of adding a machine in the first place. We go to the cluster, go into its details, and select Create Machine. Once again we need to make sure we put in the correct AMI and any other settings we like. You can create different-sized machines, so it could be a larger node, or it could have bigger disks. You'll see that the worker has been added in the Provisioning state, and shortly we'll see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we would like to remove, and just hit delete on that node. Worker nodes are removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the Update button becomes available in the menu for that particular cluster. It's as simple as clicking the button, validating which release you would like to update to (in this case the available release is 5.7.1), and kicking off the update. In the background we cordon and drain each node and slowly go through the process of updating it, and the update completes as quickly as possible, depending on what the update is. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt, in fact two in this case, and one has completed already. In a few minutes we'll see that the upgrade has been completed. There we go, all done. If your workloads are built using proper cloud native Kubernetes standards, there will be no impact.
>>All right, there we have it: our first workload cluster spun up and managed by Docker Enterprise Container Cloud.
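As a rough illustration of what you can do with that downloaded kubeconfig, the sketch below uses the official Kubernetes Python client to list the nodes with their kubelet and container runtime versions, and to cordon one node, which is the same first step the UI performs before a drain. The kubeconfig path and node name are assumptions made for the example.

```python
# Sketch: use the downloaded kubeconfig to inspect and cordon nodes.
# Path and node name are placeholders for this example.
from kubernetes import client, config

config.load_kube_config(config_file="kubeconfig-demo-cluster.yaml")
v1 = client.CoreV1Api()

# List nodes with the component versions that the cluster release pinned.
for node in v1.list_node().items:
    info = node.status.node_info
    print(node.metadata.name, info.kubelet_version, info.container_runtime_version)

# Cordon a node (mark it unschedulable), the first half of the
# cordon-and-drain flow used during scale-down and updates.
v1.patch_node("demo-worker-0", {"spec": {"unschedulable": True}})
```

The drain itself (evicting the pods) is deliberately left out here; as described above, the product handles cordon and drain for you during node removal and cluster updates.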
I loved Sean's classic warning there: when you're spinning up an actual Docker Enterprise deployment, you'll see little errors and warnings popping up. Just don't touch it. Leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches. They resolve themselves and leave you with a functioning workload cluster within minutes.
>>And if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, "Hey, can you spin up a Kubernetes cluster for development team A over here?" It would literally take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud, but you could do exactly the same thing with resources on-prem, with physical resources, and we'll be going through that later in the process.
>>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little more because it just kind of glides by, is this notion of a cluster release. When Sean was creating that cluster, and also when he was upgrading it, he had to choose a release. What does that mean? Well, in Docker Enterprise Container Cloud we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. That's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine: all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes in enterprise environments.
>>Yep. From the bottom of the stack to the top, we actually test it for scale, test it for CVEs, test it for all of the various things that would result in issues with your application services. And I've got to tell you, from having managed Kubernetes deployments myself, if you're the one doing it yourself it can get rather messy. This makes it easy.
>>Bruce, you were saying a second ago that it takes at least fifteen minutes to install your release cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's about making the right decisions about which components work well together and are best tested to be successful as a stack. This release mechanism in Docker Enterprise Container Cloud packages up that expert knowledge and makes it available in a really straightforward fashion: pre-configured release numbers that, as Bruce pointed out earlier, get delivered to us as updates in a pretty transparent way. When Sean wanted to update that cluster, a little Update Cluster button appeared when an update was available. All you've got to do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you.
>>Yeah, it even displays a little header at the top of the screen that says you've got an update available, do you want me to apply it?
>>Absolutely. Another couple of cool things that are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they're very high-level. In previous versions of Docker Enterprise we maybe didn't have the detailed dashboards that Grafana provides, and I think that's a great value. People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us.
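For readers who want to poke at those same cluster metrics outside the bundled Grafana, here is a hedged sketch that queries the Prometheus HTTP API directly. The endpoint URL is an assumption; in practice you would use whatever Prometheus endpoint StackLight exposes for your cluster. The /api/v1/query path and the response shape, however, are standard Prometheus.

```python
# Sketch: query a cluster metric straight from Prometheus's HTTP API.
# The URL below is a placeholder; substitute the Prometheus endpoint
# that StackLight exposes for your cluster.
import requests

PROMETHEUS_URL = "https://prometheus.demo-cluster.example.com"

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "sum(rate(container_cpu_usage_seconds_total[5m])) by (node)"},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    node = result["metric"].get("node", "<unknown>")
    timestamp, value = result["value"]
    print(f"{node}: {float(value):.3f} CPU cores in use")
```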
>>Yeah, that was really the joining of the Mirantis and Docker teams together. It let us take the best of what Mirantis had in the OpenStack environment for monitoring, logging, and alerting, and do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same toolsets.
>>One other thing I want to point out about that demo, because I think there were some questions about it our last go-around: that demo was all about creating a managed workload cluster. The Docker Enterprise Container Cloud manager used those AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, and install Docker Enterprise, all of that, on top of fresh new VMs it created and manages itself. There's nothing unique about the AWS deployment; you can do that on OpenStack, and you can do it on bare metal as well. There's another flavor here, though, and a way to do this for all of our long-time Docker Enterprise customers who have been running Docker Enterprise for years and years. If you've got existing UCP endpoints, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use it to manage those pre-existing workload clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud; plugging in external clusters is supported as well.
>>Yep, the kubeconfig elements of the UCP environment, the client bundling capability, give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. So it makes it very convenient for our existing customers to take advantage of this new release.
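As a hedged illustration of that "plug in an existing cluster" path, the sketch below pulls a client bundle from an existing UCP endpoint over its HTTP API; that bundle contains the certificates and kubeconfig material mentioned above. The hostname and credentials are placeholders, and you should follow the product documentation for the actual import procedure into Docker Enterprise Container Cloud.

```python
# Sketch: download a client bundle from an existing UCP endpoint.
# Hostname and credentials are placeholders; /auth/login and
# /api/clientbundle are the documented UCP API paths.
import requests

UCP_URL = "https://ucp.example.com"  # existing Docker Enterprise / UCP manager

# Authenticate and grab a session token.
auth = requests.post(
    f"{UCP_URL}/auth/login",
    json={"username": "admin", "password": "changeme"},
    verify=False,  # demo only; use the cluster CA bundle in practice
)
auth.raise_for_status()
token = auth.json()["auth_token"]

# Fetch the client bundle (certs plus kubeconfig) as a zip archive.
bundle = requests.get(
    f"{UCP_URL}/api/clientbundle",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
)
bundle.raise_for_status()
with open("ucp-client-bundle.zip", "wb") as f:
    f.write(bundle.content)
print("Saved client bundle:", len(bundle.content), "bytes")
```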
>>Absolutely. More thoughts on this, or shall we jump onto the next video?
>>I think we should press on.
>>Time marches on here, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster, which is what we use to create all our downstream workload clusters, and that's what we did in this video. That's maybe the simplest architecture, because it does everything in one region on AWS. A pretty common use case, though, is wanting to spin up workload clusters across many regions, and to do that we add a third layer in between the management and workload cluster layers. That's our regional cluster managers: a regional management cluster that exists per region, and those regional managers are then the ones responsible for spinning up workload clusters across all these different regions. Let's see it in action in our next video.
>>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment and prepare for the deployment, an overview of the deployment itself, and then, just to prove it out, we'll deploy a regional child cluster. Looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific provider architecture, in this case AWS, along with the LCM components and the UCP cluster, and the child cluster is the cluster or clusters being deployed and managed. So why do you need a regional cluster? It supports different platform architectures (for example AWS, OpenStack, even bare metal), it simplifies connectivity across multiple regions, it handles complexities like VPNs or one-way connectivity through firewalls, and it also helps clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with their components, including items like the LCM cluster manager and the machine manager; Helm bundles are managed here as well, as is the actual provider logic. Okay, we begin by logging on as the default administrative user. Once we're in, we have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller; you'll see it only has three nodes, three managers and no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also has only three managers, again no workers. As a comparison, here is a child cluster: it has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster. It happens to be on AWS, but it could be any machine. There are a few things we have to do to make sure the environment is ready. First we sudo to root, then we go into our releases folder, where we have the KaaS bootstrap; this is the original bootstrap used to build the original management cluster. We double-check that our kubeconfig is there (it's the one created after the original cluster was built), and that it is the correct one and does point to the management cluster. We also check that we can reach the images and that everything is working, so we can download our images and access them as well. Next we edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates/aws directory. We don't need to edit anything else here, though we could change items like the size of the machines we want to use; the key item to make sure is changed is the AMI reference, so that the Ubuntu image referenced is the one for the region, in this case the AWS region we're utilizing. If this were an OpenStack deployment, we would have to make sure we were pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. We also need to set up credentials again. When we originally created the bootstrap cluster we set up AWS credentials; if we hadn't done that, we would need to go through the AWS setup now. So we just export the AWS access key and ID, and what's important is that the AWS-enabled flag is set to true. Now we set the region for the new regional cluster, in this case Frankfurt, and export the kubeconfig that we want to use for the management cluster we looked at earlier. Then we export what we want to call the cluster: the region is Frankfurt, so I'm calling it Frankfurt. Try to use something descriptive that's easy to identify.
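The exports in that preparation step might look roughly like the sketch below. Every variable name and the script invocation here are assumptions made purely for illustration; the authoritative names are in the product's bootstrap documentation. The sketch only shows the shape of the preparation: AWS credentials, a target region, the management cluster's kubeconfig, and a descriptive cluster name, followed by running the bootstrap.

```python
# Sketch only: environment variable names and the script path are
# illustrative assumptions, not the product's documented interface.
import os
import subprocess

env = os.environ.copy()
env.update({
    "AWS_ACCESS_KEY_ID": "AKIA...",                 # AWS credentials
    "AWS_SECRET_ACCESS_KEY": "...",
    "KAAS_AWS_ENABLED": "true",                     # assumed flag name
    "REGION": "eu-central-1",                       # Frankfurt
    "KUBECONFIG": "/root/releases/kubeconfig",      # management cluster kubeconfig
    "REGIONAL_CLUSTER_NAME": "frankfurt",           # descriptive, easy to identify
})

# Run the bootstrap for the regional cluster with the prepared environment.
subprocess.run(["./bootstrap.sh", "deploy_regional"], env=env, check=True)
```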
Then, after this, we just run the bootstrap script, which completes the deployment for us. Bootstrapping the regional cluster is quite a bit quicker than the initial management cluster, since there are fewer components to be deployed, but to make it watchable we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready. We've started preparing the instances at AWS and are waiting for the bastion host to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise, which is probably the longest phase. In a second you'll see all the nodes go through the deploy and prepare stages, and we'll see their statuses change as they update: the first one is ready, the second is applying, and once the control plane has become ready we move the management of the cluster off the bootstrap instance and into the new cluster, which then runs it for itself. Almost there, and now we're deploying StackLight. And that's done, all done. Now we'll build a child cluster in the new region, and this goes very, very quickly. Define the cluster, pick our new credential, which has shown up; we'll just call it Frankfurt for simplicity. Add the SSH key, and the cluster is defined. Then add the machines to that cluster: start with three managers, set the correct AMI for the region, and do the same to add workers. There we go, that's building. Total build time should be about fifteen minutes; you can see it's in progress. We'll speed this up a little and check the events: we've created all the dependencies and the machine instances, and the machines are being built. Shortly we should have a working cluster in the Frankfurt region. Almost there: one node is ready, two are in progress, and we're done. The cluster is up and running.
>>Excellent, there we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster from which we bootstrapped everything else, our regional clusters which manage individual AWS regions, and the child clusters sitting underneath them.
>>Yeah, and you can actually see in that hierarchy the advantages it presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them or keep them co-resident with your development teams. And one of the other things I think is really unique about it is that we provide that same operational support system capability throughout: you've got StackLight monitoring the StackLight that's monitoring the StackLight, all the way down to the actual child clusters.
>>All through that single pane of glass that shows you all your different clusters, whether they're workload clusters, meaning the child clusters, or the regional clusters managing the different regions. Cool. All right, well, time marches on, folks. We've only got a few minutes left, and I've got one more video, our last of the session. We're going to walk through standing up a child cluster on bare metal. Everything we've seen so far has been AWS-focused, just because it's easy to demo on AWS, but we don't want to leave you with the impression that that's all we do: we cover AWS, bare metal, and OpenStack deployments as well with Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster.
>>We are on the home stretch.
>>Right.
>>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, which matters for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, keeping things simple by ensuring we don't add the complexity of another hypervisor layer in between. Continuing on the theme, why Kubernetes on bare metal? Again, no hypervisor overhead and no virtualization overhead; direct access to hardware, items like FPGAs and GPUs; we can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization and scheduling better; and we increase the performance and simplicity of the entire environment, because we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI IP address, and the MAC address, and then provide a machine type label to determine what type of machine it is and its related use. Okay, let's get started. Logging in as the operator, we first create a project for our machines to be members of, which helps with scoping later on and with security, and then begin the process of adding machines to that project. The first thing we do is give the machine a name, anything you want, in this case "baremetal01". We provide the IPMI user name and password, the MAC address for the active boot interface, and then the IPMI IP address. For these machines we have machine types of storage, worker, and manager; this one is a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like; in the future, better discovery will be added to the product. Okay, getting back to it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, time to create the cluster. We're going to deploy a bare metal child cluster, and the process we go through is pretty much the same as for any other child cluster. Create the cluster, give it a name, select bare metal as the provider and pick the region, select the version we want to apply, and add the SSH keys we want. We give the load balancer host IP that we'd like to use out of the address range, update the address range that we want to use for the cluster, check that the CIDR blocks for the Kubernetes services and tunnels are what we want them to be, enable or disable StackLight and set the StackLight settings, and the cluster is defined. Then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers, pick the label type Manager, and create three machines as managers for the Kubernetes cluster. Then we add workers through the same process, just making sure the worker label is selected, and we wait for the machines to deploy.
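One easy mistake at the "check the CIDR blocks" step above is choosing ranges that collide with each other or with the physical network. Here is a small, self-contained sketch of that sanity check using Python's standard ipaddress module; the example ranges are placeholders, not recommended values.

```python
# Sketch: sanity-check that the address ranges chosen for a bare metal
# cluster do not overlap. The example CIDRs are placeholders.
import ipaddress
from itertools import combinations

ranges = {
    "physical/LB address range": "10.0.10.0/24",   # where the LB host IP lives
    "Kubernetes services CIDR": "10.96.0.0/16",
    "Kubernetes pods/tunnel CIDR": "192.168.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in ranges.items()}

for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    if net_a.overlaps(net_b):
        print(f"WARNING: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
    else:
        print(f"OK: {name_a} and {name_b} do not overlap")

# The load balancer host IP should fall inside the physical range.
lb_ip = ipaddress.ip_address("10.0.10.50")
assert lb_ip in networks["physical/LB address range"]
```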
The machines then go through the process of having the operating system put onto the nodes, that operating system validated, Docker Enterprise deployed, and the cluster checked to make sure it is up, running, and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics like storage, details of the cluster, and so on. We can watch the machines go through the various stages from Prepared to Deploy and on to the cluster build, and that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete.
>>There we have it: a child cluster on bare metal, for folks who want to run this stuff on-prem.
>>It's been an interesting journey from the mothership: we started out building a management cluster, then populated it with a child cluster, then created a regional cluster to spread the management of our clusters out geographically, and finally provided a platform for supporting AI and big data needs. Thank goodness we're now able to put things like Hadoop on bare metal in containers; that's pretty exciting.
>>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up clusters, Docker Enterprise clusters that can be created and used quickly, taking provisioning times from however many months it used to take to get new clusters for your teams down to a couple of minutes, which is what we just saw. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis's products. If you'd like to learn more, or if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we offer workshops in a number of different formats on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone, and enjoy the rest of the launchpad event.
>>Thank you all, enjoy.
Tom Wilkie, Grafana Labs | KubeCon + CloudNativeCon NA 2019
>>Live from San Diego, California, it's theCUBE, covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.
>>Welcome back to theCUBE. I'm Stu Miniman, my cohost is John Troyer, and you're watching theCUBE here at KubeCon CloudNativeCon 2019 in beautiful and sunny San Diego. Happy to welcome to the program a first-time guest, Tom Wilkie, who's vice president of product at Grafana Labs. Tom, thank you so much for joining us.
>>Thanks for having me.
>>All right, so it's on your tee shirt, and we've been hearing customers talking about it, but why don't you introduce the company to our audience and tell us where you fit in this broad landscape here at the CNCF show.
>>Thank you. Yes, so Grafana is probably the most popular open source project for dashboarding and visualization. It started off focused on time series data and metrics, but recently it has branched out into log analysis and tracing and all of the other aspects of your observability stack.
>>All right, so a really big, broad topic there. We know many of the companies in that space, and there have been many acquisitions recently. Where do you fit? I saw databases are a big focus when I look at the company website; bring us inside the product and the offering a bit.
>>Most vendors in this space will sell you a monitoring product that includes the time series database, normally includes visualization, and includes some agent as well. At Grafana Labs, the Grafana open source project is very focused on the visualization aspects. We are data source agnostic, and we have backends for more than 60 different data sources. So if you want to bring together data from, let's say, Datadog and combine it with some open source monitoring from Prometheus, you can do that with Grafana. You can have the dashboards, and the individual panels in that dashboard, combine data from multiple different data sources, and we're pretty much the only game in town for that. You can think of it like the way Tableau allows you to plug into a whole bunch of different databases for your BI, but for monitoring and for metrics.
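As a rough sketch of what "panels from multiple data sources in one dashboard" looks like programmatically, the snippet below posts a minimal dashboard to Grafana's HTTP API with two panels pointing at differently named data sources. The URL, API key, data source names, and exact panel fields are assumptions for illustration, and the dashboard JSON model varies between Grafana versions, so treat this as a shape rather than a drop-in definition.

```python
# Sketch: create a two-panel dashboard via Grafana's HTTP API.
# URL, API key, and data source names are placeholders; panel fields
# are simplified and vary across Grafana versions.
import requests

GRAFANA_URL = "https://grafana.example.com"
API_KEY = "eyJrIjoi..."  # a Grafana API key with editor rights

dashboard = {
    "dashboard": {
        "id": None,
        "uid": None,
        "title": "Mixed data sources (sketch)",
        "panels": [
            {
                "type": "graph",
                "title": "CPU from Prometheus",
                "gridPos": {"h": 8, "w": 12, "x": 0, "y": 0},
                "datasource": "Prometheus",   # assumed data source name
                "targets": [{"refId": "A",
                             "expr": "sum(rate(node_cpu_seconds_total[5m]))"}],
            },
            {
                "type": "graph",
                "title": "Requests from another source",
                "gridPos": {"h": 8, "w": 12, "x": 12, "y": 0},
                "datasource": "Datadog",      # assumed data source name
                "targets": [{"refId": "A"}],  # query shape depends on the plugin
            },
        ],
    },
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    json=dashboard,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```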
>>So Tom, before we get into the products, the service, and the conference here, let's talk about the front page of your website, where you use the o11y word, observability. We've always said it's like monitoring; here we use words like management, we use words like ops. Observability is a hot topic in the space, and for people in the space it has some nuances. Can you let the viewers and us know how the space is looking at this, how you all feel about observability, and what everybody here who's running cloud native apps needs in order to actually function in production?
>>Yeah. I think you can't talk about observability without being either for or against the three pillars, right? People talk about metrics, logs, and traces. What I think people miss is that it's more about the experience for the developer. What Grafana, and what we, are trying to achieve is all about giving engineers and developers the tools they need to understand what their applications and their infrastructure are doing. So we're not actually particularly picky about which pillars you use and which products you use to implement those pillars. What we want to do is provide you with an experience that brings it all into a single user interface and allows you to seamlessly move between the different sources of data and, hopefully, combine them in your analysis and in your root-causing of any particular incident. That, for me, is what observability means. It's about helping you understand the behavior of your application. I'm a software engineer by trade; I'm still on call; I still get paged at 3:00 AM occasionally. Having the right tools at 3:00 AM that let me figure out what happened as quickly as possible and then dive into a fix: that's what we're about at Grafana Labs.
>>All right. So Tom, one of the things we always need to understand on a show here: there's the project and there's the company. Help us understand the difference, the products, the mission of the company, and how that fits with the project.
>>So the Grafana project predates the company, and it was started by Torkel. He saw a need for much better graphical editing of dashboards and for making metrics far more accessible to your average human. Grafana Labs started to focus on the IT monitoring and observability use cases of Grafana, but the project itself is much broader than that: we see a lot of use cases in industrial settings, in IoT, even in BI. As a company, though, Grafana Labs is focused on the monitoring side of things; we're focused on observability. So, like most companies, we also offer an enterprise version of Grafana. It has a few data sources for commercial vendors, so if you want to get your Datadog or your Splunk into Grafana, there's a commercial option for that. We also offer a hosted observability platform called Grafana Cloud, and this is where we take the best open source projects, the best tools that we think you need as an engineer to understand your applications, and we host them for you and operate them for you. We scale them, we upgrade them, we fix bugs. Grafana Cloud is predominantly our hosted Prometheus, our hosted Graphite, and our hosted Loki, our log aggregation system, all combined and brought together with the Grafana frontend. So, two products and a bunch of open source projects. Grafana Labs employs four of the Prometheus maintainers, and I'm one of them; we employ Graphite maintainers, obviously a lot of Grafana maintainers, and Loki maintainers too. There are so many open source projects we get involved with; really it's about synthesizing an observability platform out of those, and that's what we offer as a product.
>>You recently had an announcement that Loki is now GA. Can you talk a little bit about Loki, log aggregation, and what Loki does?
>>Yeah, I'd love to. A year ago in Seattle, actually, we announced the Loki project. It was super early; I'd basically just finished the code on the plane over. We announced it, and no one, I think, could have predicted the response we had. Everyone was so keen and so hungry for an alternative to traditional log aggregation systems.
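For readers who have not touched Loki, here is a hedged sketch of its HTTP API: pushing a labeled log line and then reading it back with LogQL. The Loki URL is a placeholder, and the endpoints shown (/loki/api/v1/push and /loki/api/v1/query_range) are the v1 API paths; in a hosted Grafana Cloud setup you would also add tenant authentication, which is omitted here.

```python
# Sketch: push one log line to Loki and query it back over HTTP.
# The URL is a placeholder; authentication (needed for hosted Loki) is omitted.
import json
import time
import requests

LOKI_URL = "http://localhost:3100"
now_ns = str(int(time.time() * 1e9))  # Loki expects nanosecond timestamps

# Push a log line with a small set of index labels.
payload = {
    "streams": [
        {
            "stream": {"job": "demo-app", "level": "error"},
            "values": [[now_ns, "something went wrong handling request 42"]],
        }
    ]
}
requests.post(f"{LOKI_URL}/loki/api/v1/push", json=payload).raise_for_status()

# Query the last five minutes of that stream back with LogQL.
start_ns = str(int((time.time() - 300) * 1e9))
end_ns = str(int((time.time() + 1) * 1e9))
resp = requests.get(
    f"{LOKI_URL}/loki/api/v1/query_range",
    params={"query": '{job="demo-app"} |= "wrong"',
            "start": start_ns, "end": end_ns, "limit": 10},
)
resp.raise_for_status()
print(json.dumps(resp.json()["data"]["result"], indent=2))
```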
So it's been a year, and we've learned a hell of a lot. We've had so much feedback from the community, we've built a whole team internally around Loki, we now offer a hosted version of it, and we've been running it in production for over a year at some really great scale, so we think it's ready for other people to do the same. One of the things we hear, especially at shows like this, is that developers and the grassroots adopters come to us and say: we really love Loki, we really love what you're doing with it, but my boss won't let me use it until it hits 1.0. So yesterday we announced that it's now v1.0. We think it's stable, we're not going to change any of the APIs on you, and we would love for you to use it and put it into production.
>>All right, we'd like to hear a little more about the business side of things. I believe there was some news around funding. How many people do you have? And can you parse for us how many users the projects have versus how many customers the company's products have?
>>Well, we don't call them customers of the projects, they're users. But as a company we have hundreds of customers. I don't believe we make our revenue figures public, so I'm probably not going to dive into them, but the CEO stands up at our yearly conference and discloses what our revenue for the last year was, so I'll refer you to that. The funding announcement was about a month ago: we raised a great round from Lightspeed, 24 million I believe, and we're going to use that to really invest in the community, really invest in our projects, and build out a bit more of a commercial function. The company is now about 110 people, I think, and it's growing so quickly. I joined 18 months ago when we were 30 people, so we've almost quadrupled in size in the last year and a half, and keeping up is quite a challenge. The two products I've already touched on have a few hundred customers, and we're really happy with the growth. We'd never had any institutional funding before this; the company is about five years old, so we'd been growing based on organic revenue, barely profitable but reinvesting that into the company, and it's going really well. We're also, and it's not that unique I guess, remote first. More than 50% of our employees work from home; I work from my basement in London. We have a few tiny offices, one in Stockholm and one in New York, but we're really keen to hire the best people wherever they are. We invest a lot in travel, in the right tools, and in getting the whole company together to really make that work. It's actually a really fun place to work.
>>Tom, we're still in the business here, and I don't know how much time you've spent at the booth this year, but can you compare this year to last year: the people coming up, their maturity, the maturity of what they're running in production? Are they ready to buy? Are they still kicking the tires? Are they still wondering what this Kubernetes thing even is? Where is everybody this year, and how has it changed?
>>Yeah, that's a good question. We're definitely seeing people with much more sophisticated questions; the conversations we're having at the booth are a lot longer than they've been in previous years. In particular, people now know what Loki is. We only announced it a year ago, and we've had a lot of people asking us very detailed questions about what scale they can run it at. Otherwise, I think there is starting to be a bit more commercial intent at the conference, a few more buying decisions being made here. It's still predominantly a community-oriented conference, and I don't want that to go away; that's one of the things that makes it attractive to me, and I bring my whole team here, and it's one of the things that makes it attractive to them. But there is a little more sales activity going on, for sure.
>>Any updates to the tracing, monitoring, and observability stories of the projects here at CNCF this year, since you're part of the Prometheus project?
>>Yes. We had the Prometheus conference in Munich two weeks ago, and after each conference the maintainers like to get together and plan out the next six months of the project. We've started to talk about adding support for things like exemplars to Prometheus. This is where, for each histogram bucket, you can associate an example trace that contributed towards that histogram and that latency, and then you can build nice user interfaces around it, so you can very quickly move from a latency graph to example traces that caused it. That's one of the things we're looking to do in Prometheus. And of course Jaeger graduated just a week ago, I think. We're big users of Jaeger internally at Grafana Labs, and on our booth right now we're showing a demo of how we're integrating visualization of distributed tracing into Grafana. Using the same approach we take with metrics, where we support multiple backends, we're going to support Jaeger, we're going to support Zipkin, and we're going to support as many open source tracing projects as we can with the Grafana UI experience, being able to seamlessly switch between different data sources, from metrics to logs to traces, within one UI.
>>And without ever having to copy and paste your query, make mistakes, and translate it in your head. Right. Tom, give us a little look forward. There's a lot of activity, with projects graduating and things pulling together. What should your users be looking for over the next six to twelve months?
>>That's a great question. We do a yearly release cycle for Grafana, so the next one we're aiming towards is Grafana 7, and for me, 7 is going to be all about tracing. I really want to see the demo we're doing turned into production-ready code: support for multiple different tracing data sources, and support for things like exemplars, which we're not showing yet. I want to see all of that done in Grafana in the next year.
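To ground the exemplar discussion: the latency graphs Tom describes are built from histogram metrics like the one in this sketch, which uses the official Python Prometheus client. Attaching exemplars (trace IDs on individual observations) is newer functionality whose exact client API varies by version, so it is only noted in a comment here rather than shown.

```python
# Sketch: expose a request latency histogram that a Prometheus server
# can scrape and Grafana can graph. Exemplar support (attaching a trace
# ID to an observation) exists in newer client versions but is omitted.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "demo_request_latency_seconds",
    "Latency of demo requests in seconds",
)

def handle_request() -> None:
    # time() records the duration of the block into the histogram buckets.
    with REQUEST_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```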
We've also been massively fleshing out the logging story with Loki: we've been adding support for extracting metrics from the logs, and I really think that's where we're going to drive Loki forward in the future. That really helps with systems that aren't exposing metrics, legacy systems where the only output you get from them is the logs. Beyond that, the world is kind of our oyster. I'm really keen to see the development of OpenTelemetry, and we've just started to get involved in that project ourselves. I'm really interested in talking to people about what they need out of a tracing system. We see people asking for hosted tracing systems, but our approach is very much to pick the best open source ones, and I don't think a best one has emerged yet; I don't think people know which is the best one yet. So we're going to get involved in all of them, see which one the community coalesces around, and maybe start offering a hosted version of that.
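The "metrics from logs" idea above is what LogQL metric queries do: wrap a log selector in a function like rate() and get back a time series. Here is a small hedged sketch against Loki's query_range endpoint; the URL is a placeholder and, as before, authentication for a hosted setup is omitted.

```python
# Sketch: run a LogQL metric query (a rate over matching log lines)
# against Loki's HTTP API. URL is a placeholder; auth omitted.
import time
import requests

LOKI_URL = "http://localhost:3100"
query = 'sum(rate({job="legacy-app"} |= "ERROR" [5m]))'  # errors per second

end = int(time.time())
resp = requests.get(
    f"{LOKI_URL}/loki/api/v1/query_range",
    params={
        "query": query,
        "start": str((end - 3600) * 10**9),  # last hour, in nanoseconds
        "end": str(end * 10**9),
        "step": "60",                        # one point per minute
    },
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    for ts, value in series["values"]:
        print(ts, value)
```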
So we've tried to take the operational model from, from atheists, the query language from, from atheists and, and the kind of a cost efficiency from, from atheists and apply it to logs. Um, but I will admit to being a big fan of the Marvel movies. All right, Tom Willkie. Thank you so much for sharing the updates on, on the labs. Uh, we definitely look forward to hearing updates from you and thank you. All right, for, for John Troyer, I'm Stu Madmen back with more coverage here from San Diego. Thank you for watching. Thank you for watching the cube.