
Search Results for Miranda:

Guillermo Miranda, IBM | IBM Think 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston. It's theCUBE. Covering IBM Think. Brought to you by IBM. >> Hi everybody, we're back this is Dave Vellante from theCUBE and you're watching our wall-to-wall coverage of IBM's Digital Think 2020 event and we are really pleased to have Guillermo Miranda here. He's the Vice President of Corporate and Social Responsibility. Guillermo thanks for coming on theCUBE. >> Absolutely, good afternoon to you. Good evening, wherever you are. >> So, you know this notion of corporate responsibility, it really has gained steam lately and of course with COVID-19, companies like IBM really have to take the lead on this. The tech industry actually has been one of those industries that has been less hard hit and IBM as a leader along with some other companies are really being looked at to step up. So talk a little bit about social responsibility in the context of the current COVID climate. >> Absolutely. Now thank you for the question. Look, first our responsibility is with the safety of our employees and the continuity of business for our clients. In this frame what we have done is see what is the most adequate areas to respond to the emergency of the pandemic and using what we know in terms of expertise and the talent that we have is why we decided to work first with high performance computing. IBM design and produce the fastest computers in the world. So Summit and a consortium of providers of high performance computing is helping on the discovery of vaccinations and drugs for the pandemic. The second thing that we are doing is related with data and insights. We own The Weather Company which is at 80 million people connected to check the weather every morning, every afternoon. So through The Weather Company, we are providing insights and data about county level information on COVID-19. Another thing that we are doing is we are offering some of our products for free. Watson, it is a chatbot to inform about what is adequate, what is needed in the middle of a pandemic if you are a consumer. We are also helping with our volunteers. IBM volunteers are helping teachers and school districts to rapidly flip into remote learning and get used to the tools of working on a remote environment. And finally we have a micro volunteering opportunity for anybody that has a computer or an android phone. So with the world community grid, you can help with the discovery also of drugs and vaccinations for COVID-19. >> Wow that's great, those are four awesome initiatives. They can't get the vaccine fast enough. Getting good quality information in the hands of people in this era of fake news also very very important. Students missing out on some of the key parts of their learning so remote learning is key. I love this idea of kind of micro crowd sourcing solutions. Really kind of opening that up and hopefully we'll have some big wins there Guillermo. Thank you for that. I want to ask you people talk about blue collar jobs, they talk about white collar jobs, you guys talk about new collar jobs. You and others. What are new collar jobs and why are they important? >> Look, in this data, digital, artificial intelligence driven economy, it's important not to have a digital divide between the haves and the have nots on the foundational skills to be operational in a digital economy. So new collar jobs are precisely the intersection of the skills that you need to operate in this digital driven economy with the basic knowledge to be a user of technology. 
So think about a cybersecurity analyst. You don't need a master's degree in industrial engineering to be a cybersecurity analyst. You just need the basic things about operating an environment in a security control center, for instance. Or talk about blockchain, or talk about software engineering, full stack developer. There are many roles that you can do in this economy where you don't need to have a full four-year degree from a university to have a decent paying job in the digital economy. These are the new collar jobs, and what we are attempting to do with the new collar job definition is to get rid of the paradigm that the university degree is the only passport to a successful career in the marketplace. You can start in different ways and have the opportunity to get a job in a high-tech area, not necessarily with a PhD in engineering. As I said, it's something important for us, for our clients and for the community. >> Yeah, so that's a very interesting concept that a lot of us can relate to. To go back to our university days, many of the courses that we took, we shook our heads and said, "okay, why do I have to take this?" Okay, I get it, well rounded liberal arts experience, that's all good, but it's almost like you're implying that the notion of specialization that we've known for years, like for instance in vocations, auto mechanic, woodworking, etc., playing a really critical role in the economy, and applying that to the technology business. It's genius and very simple. >> Absolutely. Look, this is the reinvention of vocational education for the 21st century, where you continue to need the plumber, you continue to need the hairdresser, but you also need people that operate the digital platforms and are comfortable with this environment, and they don't need to pass at the beginning through a full university. And it's also the concept that we have divided secondary education, high school, from college, university, etc., like a Chinese wall. Here is high school, here is college. No! There can be a clear integration, because you can start to get ready without having finished high school yet. So there are several paradigms that we evolved in the previous century that now we need to change to be ready for this 21st century digital-driven economy. >> Yeah, very refreshing. Really about time that this thinking came into practice. Talk about P-Tech. How does P-Tech fit into acquiring these skills? And maybe you could give us a sense as to the sort of profile of the folks and their backgrounds, and add some color to how that's all working. >> Absolutely. So look, the P-Tech model started 10 years ago in a high school in New York City, in Brooklyn. And the whole idea is to go to an under-served area and create a ramp onto success that will help you to first finish high school. Finishing high school is very important and has a lot of connotations for your future. And then at the same time, they start getting an associate degree in an area of high growth. The third component is the industry partner. An industry partner that works with the school district and the community college in order to bring the knowledge of what is needed in that community in order to create real job opportunities. Not "we will send you the people and then you will use them." No! We need to work together in order to train the talent for the future. And if you go back to the Middle Ages, the guilds were the ones that were preparing the workers. So the industry was preparing the workforce.
Why in the 20th century we renounced to that? Having real, relevant skills starting in high school, helping the kids to graduate with a dual diploma. High school, college and practice in real life what it is to be in a workplace environment. So we have more than 220 schools. In this school year, we have more than 150,000 kids in 24 countries already working through the P-Tech model. >> Love it and really scaling that up. So let's say I'm an individual. I'm a young person, I'm from a diverse background, maybe my parents came to this country and I'm a first generation American. Of course, it's not just the United States, it's global but let's say I'm from a background that's less advantaged, how do I take advantage? How hard is it for me to tap in to something like P-Tech and get these skills? >> Well, first one of the characteristics of the model is this is free admission. So there is not a barrier fence. If your school district offers P-Tech, you can apply to P-Tech and get into the P-Tech model education without any barrier without any account. And the second thing that you need to have is curiosity. Because it's not going to be the typical high school where you have math, science, gym, whatever. This is more of an integration of how the look of a career will be in the future and how you have to start understanding that there are drivers into the economy that are fast tracks into well paid jobs. So curiosity on top of being ready to join a P-Tech school in the school district where you live in. >> That's great Guillermo, thank you for sharing that. Now of course corporate responsibility, that's a wide net. This is one of your passions. I'll give you the last word to kind of, where do you see this whole corporate responsibility movement going generally and specifically within IBM? >> I think that this whole pandemic will just accelerate some of the clear trends in the marketplace. Corporate responsibility cannot be an afterthought as before in the '80s or '90s. I will put a foundation. I have a little of profits that are left and then I will distribute grants and that's my whole corporate responsibility approach. Corporate responsibility needs to be within the fabric of how do you do business. It has to be embedded into the values of your company and your value proposition and you have to serve those projects with the same kind of skills and technology, in the case of IBM, that you do for your commercial engagements. And this is what we do in IBM. We help IBMers to be helpful to their communities with the same kind of quality and platforms that we offer to our clients. And we help to solve one of the most complicated problems in society through technology, innovation, time. >> Love it. Guillermo thanks so much, you're doing great work. Really appreciate you coming on theCUBE and sharing with our audience. Congratulations. >> Absolutely. Thank you for very much for having me. >> You're very welcome and thank you for watching everybody. This is Dave Vellante from theCUBE. You're watching our continuous coverage of IBM Think 2020, the digital version. Keep it right there, we'll be right back after this short break. (bright music)

Published Date: May 5, 2020


Miranda Foster, Commvault & Al Bunte, Commvault | Commvault GO 2019


 

>>Live from Denver, Colorado. It's theCUBE, covering Commvault GO 2019. Brought to you by Commvault. >>Hey, welcome back to theCUBE's coverage of Commvault GO '19. Stu Miniman is here with me, Lisa Martin, and we are wrapping up two days of really exciting wall-to-wall coverage of the new Commvault, and we're very pleased to welcome a couple of special guests onto the program. To help us wrap up our two days, we have Miranda Foster, the Vice President of Worldwide Communications for Commvault, and Al Bunte is here, the co-founder, former COO and board member. Welcome Miranda and Al. Great to have you on the program. >>Thanks, Lisa. >>So a lot of energy at this event, and I don't think it has anything to do with our rarefied air here in the Mile High City. Al, let's start with you. >>Well, there's other things in Colorado. >>There are, yeah, they don't talk about it. They talked about that on stage yesterday. So Al, you have been with Commvault, as I mentioned, co-founder. What an evolution over the last 20 years. Can you take us back? >>Surely. So, um, yeah, and it's been, it's, it's really kind of cool to see it coming together at this point. But if you go back 20 years when we started this, the whole idea was around data. And remember, we walked into a company that was focused on optical storage. Um, we decided it would be a good company to invest in, um, for two reasons. One, we thought there were really great people here, very creative and innovative, and two, it was a great space. So we believed data would grow, and that was a pretty decent thesis to go with. Yeah. And then, then it started moving from there. So I tell people I wasn't burdened with facts, so I didn't understand why all these copies were being made of the same set of data. So we developed a platform and an architecture focused on indexing it, so you just index it once and then can use it for many different purposes.
And that just kept moving through the years with this very data-centric approach to storage management, backup, protection, etc. It was all about the data. I happened to be lucky and said, you know, I think there's something to this thing called NAS and SAN and storage networks and all those things. And I also said we have to plan for scale on our solution of a million X. Now, I was only off by a magnitude of about a thousand on that, but it was the right idea. You know, you had to build something to scale, and, and we came in and we wanted to build a company. We didn't want to just flip a company, but we thought there was a long-term vision in it, and if you take it all the way to the present here, it's, it's really, um, it feels really good to see where the company came from. It's a great foundation, and now it will propel off this foundation, um, with a similar vision with great modern execution and management. >>Yeah. Al, when we had the chance to talk with you last year at the show in Nashville, it was setting up for that change. So I want to get your view there. There are some things that the company was working on that are being continued, but there are some things that, you know, would not have happened under Bob Hammer's regime. So I want to get your viewpoint as to the new Commvault, you know, what, what are some of those new things that are moving forward with the company that might not have in the previous days? >>Yeah, that's a good question, Stu. I think a lot of the innovation that you've seen here, um, would have happened, maybe not as quickly. Um, we, the company obviously acquired Hedvig. Uh, we were on a very similar path, but to do it ourselves. So that had kind of been the modern approach; we needed to get to market quicker with some real pros. I think, um, the, the evolution of redoing sales management essentially was probably the biggest shift that needed to be under a new regime, if you will. Yeah.
>>So Miranda, making these transitions can be really tricky from a marketing standpoint. Talk, talk us through a bit, some of the, how do you make sure it's trusted yet innovative and new, which you've accomplished at this show? >>Well, trust is obviously the most important, because the brand that Bob and Al built really embodies reliability for what we provide to our customers. I mean, that's what gives them the peace of mind to sleep at night. But I'll tell you, Sanjay has been with us for just eight months now, since February of 2019, and it's been busy. We've done a lot of things, from Sanjay's transition with Bob, and now, to his point, we've, we've acquired Hedvig, we've introduced this new SaaS portfolio, and you're exactly right. What we need to do is make sure that the reliability that customers have come to rely on Commvault for translates into what we're doing with the new Commvault, and I think we've done a really good job. We've put a lot of muscle behind making sure, particularly with Metallic, that it was tried, it was trusted, it was beta tested, we got input from customers, partners, industry influencers. We really built it around the customer. So I think the brand that Commvault brings will translate well into the things that we've done with these, with these new shifts and movements within the company. >>And on, on that question too as well, um, I think Miranda is a good example of somebody that was with the company before, a tremendous talent. She's got new opportunities here and she's run with it. So it's kind of that balance of someone who, uh, understood the fundamentals and the way we're trying to run the business, and she's grasped the new world as well. >>And Rob as well, right? Robin in his new role? >>Yeah, that's another good point. So that was all part of the transitioning here, and Sanjay and the team have been very careful in trying to keep that balance. >>Change is really difficult anywhere, right? It applies to any element of life. And you look at a business that's been very successful, has built a very strong, reliable brand for 20 years. Big leadership changes, not just with Sanjay, but all of the leadership changes. You know, analysts said, all right, you've got to upgrade your sales force. We're seeing a lot of movement in that area. You've got to enhance your marketing. We're seeing Metallic has new routes to market, new partner focus, new SI focus. We're also seeing this expansion in the market, so what folks were saying, you know, a year ago, Commvault is answering in a big way, and to your point, in a fast way, and that's not easy to do. You've been here nine years, since the beginning. Can you give us a little bit of a perspective, Miranda, about some of the things that were announced at the show, how excited everybody is, customers, partners, Commvault folks? How do you now extend the message and the communications from GO globally after the show ends? >>That's an awesome question. I'm really passionate about this.
So you know, Monday we announced Metallic, we announced a new head of channels and alliances in Mercer Rowe, we had crazy technology innovation announcements with Activate, with the acceleration of the integration with Hedvig, with the momentum release that we put out today. We're also doing cool stuff with our corporate social responsibility in terms of sponsoring the new Business Avengers coalition. That's something that Chris Powell is really championing here at, at the show and also within Commvault. So we're very excited about that. And then when you add people like yourselves, you know, the Tech Field Day folks, because not everybody can be here, right? Not everybody can be at GO. So being able to extend the opportunity for, for folks to participate in Commvault GO through things like theCUBE, through things like Tech Field Day, and using our social media tools, and just getting all of the good vibes that are here. Because as Al says, this really is an intimate show, but we try to extend that to anybody who wants to follow us, to anybody who wants to be a part of it. And that's something that we've really focused on the last couple of years, to make sure that folks who aren't here can, can embrace the environment here at Commvault GO. >>It's such an important piece that you're here helping with the transition I talked about. It's important that some of the existing folks get new roles and responsibility going forward. What's your role going to be, and what should we expect to see from you personally? >>Somebody has got to mow the lawn. >>Yeah. >>But yes, I'll stay on the board. Um, we're talking through that. I think I'll be a very active board member, not just the legal side of the equation. Um, try and stay involved with customers and, and strategies and, and even, uh, potential acquisitions, those kinds of things. Um, I'm also wandering off into the university environment. Uh, my alma mater is the University of Iowa. I'm on the board there, and uh, I'm involved in setting up innovation centers and entrepreneurial programs and that kind of thing. Um, I'll keep doing my farming thing, and uh, actually have some ideas on that. There's a lot of technology, as you guys know, attacking that space. So, and like I said, I'll try to keep a lot of things linked back into Commvault. >>What Al can have confidence in is that I will keep him busy. So there's that. And then I will also put on the table, we agree to disagree with our college athletic loyalties. So it's okay, just because we don't really compete. Right. But if Iowa and Kansas were ever to play, then we would just politely disagree. >>Yeah. Well, that's good that you have this agreement in place. I would love to get some anecdotal feedback from you on some of the things that you've heard over the last three days, with all this news, all these changes. What are you hearing from customers and partners who you've had relationships with for a very long time? >>I think they're, I think they're all really excited, but, and maybe I'm biased, but they like the idea that we're trying to not throw out all the old: focus on customers, focus on technologies, continue the innovation. I'm pleased that Miranda and the team started taking this theme of what we do to a personal level, you know, recovery and those kinds of things. It isn't just the money in the business outages. It really has an effect on personal lives. And that resonates. I hear that a lot. Um, I asked our bigger customers, and they've loved us for our support, how we take care of them, the, the intimacy of the partnership, you know, and I think they feel pleased that that's staying, yet there's a lot of modernity, if that's a good word. I think "focused" was the word you used; I think it's the blend of things, and I think that really excites people.
>>We've heard that a lot. You guys did a great job with having customers on stage, and as a marketer who does customer marketing programs, I think there's nothing more validating than the voice of a customer. But something today that I thought was a pivot on that, and Commvault did it well: Sonic Healthcare was on the main stage, and then he came onto the program, and I really liked how he talked about some of the failures that they've been through. You know, we had NASA talking yesterday, NASA, 60 years young, famous, probably, for "failure is not an option," but it is a very real possibility, whether you're talking about space flight or you're talking about data protection and cyber attacks and the rise of that. And it was really, I'd say, refreshing to hear the voice of a customer say, these are the areas in which we failed, this is how Commvault has helped us recover, and how much better and stronger they are, not just as a company, Sonic Healthcare, but even as the individual person responsible for that. That was a really great message that you guys were able to extend to the audience today, and we wanted to get that out. >>I loved that as well. I think that was good. I have also, back on driving innovation, I always felt one of my biggest jobs was to not punish people that failed. Yeah. I, you know, with the whole engineering team, the bright people in marketing, I, I would be very down on them if they didn't try, but I never wanted them to feel bad about trying, and never punished them. >>And one of the things Matthew said on the main stage, first of all, I love him. He's great. He's been a longtime Commvault supporter. I love his sense of humor. He said, you know, Commvault came to me and said, can you identify, you know, your biggest disaster recovery moment? And he was like, no, because there are so many. Yes. Right? Like there are so many when you're responsible for this. It's just, the unpredictability of it is crazy. And so he couldn't identify one, but he had a series of anecdotes that I think really helped the audience identify with and understand, these are big time challenges that we're up against today. And hearing his use case and how Commvault is helping him solve his hard problems, I think was really cool. >>You're right. I loved that too. He said, I couldn't name one. There are so many. That's reality, right? As data proliferates, which every industry is experiencing, there's a tremendous amount of opportunity. There's also great risk: as technology advances for good, the bad actors also have access to that sort of technology. So his honesty, I thought, was, was refreshing, but spot on. And what a great example for other customers to listen to. Right, Al? To your point, if I punish people for failure, we're not going to learn from it. >>Yeah, you'll never move forward. >>Miranda, so much that we learned this week at the show, a lot of branding, a lot of customers. I know some people might be taking a couple of days off, but what should we expect to be seeing from Commvault post-GO this year? >>Continued innovation. We're not letting our foot off the gas at all. Just continuing innovation as, as, as we integrate with Hedvig, continued acceleration with Metallic. I mean, those guys are aggressive. They were built as a startup within an enterprise company, built on Commvault's enterprise foundation. Those guys are off and running, they are motivated, they're highly talented, highly skilled, and they're going to market with a solution that is targeted at a specific market, and those guys are really, really ready to go. So continued innovation with Hedvig, integrate, sorry, integration with Hedvig, with Metallic. I think you're just going to be seeing a lot more from Commvault in the future, on the heels of what we consider humble, proud leadership with the Gartner Magic Quadrant, you know, the one-two punch with the Forrester Wave. I think that you're just going to be seeing a lot more from Commvault in terms of how we're really getting out there and being aggressive. And that's not to mention, Al, you know, what we do with our core solutions. I mean, today we just announced a bunch of enhancements to the core technology, which is, which is the bread and butter of, of what we do. So we're not letting the foot off the gas, to be sure. >>The teams are staying really, really aggressive too. And the other thing I'd add, as a major investor, that I'm expecting is sales. >>Now I'd love to get your, your final thoughts on the culture of Commvault, because while there's some acceleration and there's some change, I think some of the fundamentals stay the same. >>Yeah, that's right, Stu, and again, that's why I feel we're at a good point on this transition process. You alluded to it earlier, but I feel really good about the leadership that's in. They've treated me terrifically. I'm almost, almost part of the team. I love that they're, they're trying to leverage off all the assets that were created in this company: technology, obviously platform architecture, support base, our support capabilities. I, I told Sanjay today, I wish he really would have nailed the part about, and by the way, support and our capabilities with customers are a huge differentiator, and it was part of our original, Stu knows, he's heard me say it forever, our original DNA. We wanted to focus on two things: great technology, keep the great technology lead, and customer support and satisfaction. So those elements, now you blend that, Stu, with a really terrific sales force, as Ricardo says, have you guys talked with Ricardo? But anyway, the head of sales is hiring great athletes, particularly for the enterprise space. Then you take it with a real terrific marketing organization that's focused on modern techniques and analytics, all those things. You know, it's, it's, in my opinion, as an investor especially, I'm expecting really good things. >>The bar's been set. Well, I can't think of a better way for Stu and me to end our coverage. Al, Miranda, thank you. This has been fantastic. You've got to go. You've got a lawn to mow, you've got a vacation to get on to, and you've got some wordsmithing to focus on, right? You have a flight to catch, five hours. >>Thank you, guys. This has been awesome. >>Hashtag new Commvault. For our guests and I, Lisa Martin, you've been watching theCUBE's coverage of Commvault GO '19. We will see you next time.

Published Date: Oct 16, 2019


Adrian Ionel, Mirantis | DockerCon 2021


 

>>Hello and welcome to theCUBE's coverage of DockerCon 2021. I'm John Furrier, host of theCUBE. Adrian Ionel is the CEO, co-founder and chairman of Mirantis, and a CUBE alumni. Adrian, great to see you. Thanks for coming on theCUBE here for DockerCon coverage. Good to see you. >>Hey John, nice to see you. How are you doing? >>So obviously open source innovation continues. You guys are at the forefront of it. Great to see you. What's new at Mirantis? Give us the update on what's happening. >>Well, I mean, what's, what's interesting is we've had one of the best years ever last year, and it's very much continuing, you know, into this year. It's pretty fantastic. We won about 160 new customers. Kubernetes is definitely on a tear. We see customers doing bigger and bigger and more exciting things, which is absolutely great to see. Lens is getting tremendous traction, and I think we have a fivefold increase in user base within a year. So it's a lot of fun. Right now, customers are definitely pushing the boundaries of what Kubernetes can do. They want to get to cloud native infrastructure, and they want to get there faster, and they want to do big and exciting things. And we are so happy to be part of the ride.
>>You guys are investing in brand new open source solutions for customers. Give us an update on, on why, and why do they matter for your customers? >>Well, let me unpack this a little bit; there are really two elements to this. One is why open source, and what's new, what matters. So open source is not new, but open source is being embraced more and more heavily by companies everywhere, because it's just a very flexible and cost-efficient and highly innovative way to, to consume innovation and to create software, and a lot of innovation these days is happening in the open source communities, which is why it's super exciting for many, many users. Now, what's new with us? I think there are two really terrific things that we brought to the market that we see get a lot of interest and attention from our customers and create value. One is this idea of delivering Kubernetes, including the infrastructure underneath it, as a service for some of the largest use cases out there, very large enterprises. They want to have a cloud experience on prem just like they have it in public clouds. That is absolutely fantastic, and that's new and different and very, very exciting for customers. The second thing that's new and compelling and exciting is Lens, which is this Kubernetes IDE that has empowered, in the meantime, close to 180,000 Kubernetes developers around the world, to make it much, much easier to take advantage of Kubernetes. So you can think of it as an IDE and a debugger for anybody who is using Kubernetes on public clouds or on, on private infrastructure. That is getting tremendous traction and adoption.
>>The interest in Kubernetes has been unbelievable. I mean, at KubeCon we saw Kubernetes almost become boring, in the sense that everyone's using it, and now it's enabling a lot more cloud native development. Why does Lens matter? What is the benefit? Because that's, that's a killer opportunity, because Kubernetes is actively being adopted. The general consensus is it's delivering the value. >>Yeah. So let me unpack this in two aspects: why Kubernetes is important and why people are adopting it, and then how Lens adds value on top of it for people who want to use Kubernetes. Kubernetes is tremendously important because it solves some very, very fundamental problems for developers and operators when building cloud native applications. These are problems that are very essential to actually operating in production, but are really unpleasant for people to solve, like availability, scalability, reusability of services. So all of that with Kubernetes comes right out of the box, and developers no longer have to worry about it. And at the same time, Kubernetes gives you a standard where you can build apps on public clouds and then move them on prem, or build them on prem and run them on public clouds and anywhere in between. So it gives you kind of this universal cloud native standard that you as a developer can rely on. And that's extremely valuable for developers. We all remember from the Java times, when Java came online, people really valued this idea of write once, run anywhere, and that's exactly what Kubernetes does for you in a cloud native world. So it's extremely valuable for people. Um, now, how does Lens add value in this context? This is also very exciting. So what's happening when you build these applications on Kubernetes is that you have many, many services which interact with each other in fairly complex and sometimes unpredictable ways, and they also very much interact with the infrastructure. So you can, you can imagine kind of this jungle, this Lego building of many different cloud native services working together to build your app, run your app. Well, how are you going to navigate that and debug that as a developer, as you build and optimize your code? So what Lens does is it gives you kind of like a real-time cockpit or command console. You can imagine you're a fighter pilot in this jet, and you have all these instruments kind of coming at you, and it gives you this fantastic real-time situational awareness. So you can very quickly figure out what it is that you need to do, either fixing a bug in your application, or optimizing the performance of the code, or making it more reliable, or fixing security issues. And it makes it extremely easy for developers to use. Right? This traditionally has been hard and complicated; this makes it super fast, easy, and a lot of fun.
>>You know, that is really the great theme about this conference this year, and your point exactly is developer experience, making it simpler and easier, and innovation that really hits the mark on productivity. I mean, that's really been a key part. So I think that's why people are so excited about Kubernetes, because it's not like some other technologies that had all the setup requirements; it's making things easier to get stood up and managed. It's huge. So congratulations. A great point, great call-out there, great insight. The next question to ask you is, you guys have coined the term software factory. Um, yeah, this kind of plays into this. If you have all the services, you can roll them up together with Lens and those tools, it's gonna be easier, more productive. So that means it's more software; open source is the software factory too. What does that term mean, and how is it leveraged? >>Yeah, so here's what it means to us. And so, as you know, today software is being produced by two groups working together to build software. Uh, certainly the core people are the developers; these are the people who create the core functionality, imagine how the software should be architected, and ultimately ship the code, right? And maintain the code. But the developers today don't operate just by themselves. They have their sidekicks, they have their friends from platform engineering, the platform engineers. These are the people who are helping developers, you know, make some of the most important choices as to which platform stacks they should use, which services they should use, how they should think about governance, how they should think about the cloud infrastructure they should use, which open source libraries they should use, how often they should refresh those libraries and support them. So these platform engineers create, if you want, the factory, the substrate and the automation which allows these developers to be highly productive. And the analogy I want to make is with chip design, right? If you imagine chip design today, you take advantage of a lot of software, a lot of tooling and a lot of pre-packaged libraries to get your job done. You're not doing it by yourself, uh, just wiring transistors together or logical elements. You do it using a massive amount of automation and software and reusable pieces. So that's, that's what we aim to provide to customers, because what we discovered is that customers don't want to be in the business of building software factories. They don't want to be in the business of building platform engineering teams if they can avoid it. They just do it because they have no choice. But it's difficult for them to do, it's cumbersome, it's expensive, it's a one-off. It really doesn't create any unique business value, because the platform engineering for a bank is very similar to the platform engineering for, let's say, an oil and gas company or an insurance company. Um, so we do it for them, turnkey, as a service, so they can be focusing on what matters for them.
>>That's a great insight. I love that, platform engineering enabling software developers, because, you know, look at SaaS, throwing features together. Being a feature developer is cool. And, and, and the old days of platform was the full stack developer. And now you have this notion of platform as a service in a way, in this kind of new way. What's different, Adrian? You've seen these waves of innovation, certainly in open source; we've been covering your career for over a decade, uh, with Mirantis and OpenStack and others. This idea of a platform that enables software, what's changed now about this new substrate you mentioned? What's different than the old platform model? >>Uh, that's a wonderful question. Uh, a couple of things are different. So the first thing that's different is the openness, and, uh, that everything is based on open source frameworks, as opposed to platforms that are highly opinionated and, and lock you in. So I think that's, that's a very, very fundamental difference. If you're looking at the initial kind of platform-as-a-service approaches, they were, they were extremely opinionated and very rigid, and not always open source, or just a combination between open source and proprietary. So that's one very big difference. The second very big difference is the emphasis on, and it goes along with the first one, the emphasis on, um, multi-cloud and infrastructure independence, where a platform is not wedded to a particular stack, whether it's an AWS stack or, uh, an Azure stack or the EMR stack. It's truly a layer above that's completely open source centered. >>Yeah. >>And the third thing that is different is the idea that it's not just the software; the software alone will not do the job. You need the software and the content and the support and the expertise. If you're looking at how platform engineering is done at a large company like Apple, for example, or Facebook, it's really always the combination of those three things: it's the automation framework, the software; it's the content, the open source libraries or any other libraries that you create; and then it's the expertise that glues all this together. And it's being offered to developers to be able to take advantage of this, like a software factory. So I think these are the major differences in terms of where we are today versus five years ago, 10 years ago.
>>Thank you for unpacking that. I think that, uh, really captures the shift and value. This brings up my next, uh, question for you, because, you know, you take that to the next level. DevOps is now also graduating to a whole other level. The future of DevOps, uh, and software engineering is more and more around Kubernetes, and your tools like Lens and others managing the point. What is the new role of DevOps? Obviously DevSecOps, but DevOps is now changing too. What's the future of DevOps in your opinion? >>Well, I believe that it is going to become more and more integrated, where operations is going to become, uh, something like zero ops, where it's going to be fully automated and something that's being delivered entirely through software, and developers will be able to focus entirely on, on creating and shipping code. I think that's the major, that's a major change that's happening. The problem that is still yet, I think, to be solved, like, 100% correctly is the challenge of the last mile, like deploying that code on, on, on the infrastructure and making sure that it's performing correctly to the SLAs and optimizing everything. I also believe that the complexity, Kubernetes is very powerful, but at the same time it offers a lot of room for complexity. There are many knobs and dials that you can turn in these microservices-based architectures. And what we're discovering now is that this complexity kind of exceeds the ability of the individual developer, or even a group of developers, to constantly optimize things. So I believe what we will see is AI, machine learning, taking charge of optimizing a lot of parameters, operating parameters, around the applications and their deployment on Kubernetes, to ensure those applications perform to the expectations of the SLAs. And that might mean performing to a very high standard of security, or it might mean performing to a very low latency in a certain geography. It might mean performing to a very low cost structure that you can expect, and those things can change over time. Right? So this challenge of operating an application in production on a Kubernetes substrate is, I think, dramatically higher than on just traditional cloud infrastructure or virtualization, because you have so many services interoperating with each other and so many different parameters you can set, for machine learning and AI.
>>I love the machine learning and AI, and I'd love to just get your thoughts, because I love the zero ops narrative, because that's day one. Zero ops, and now there's day two being discussed, and people are also hyping up, you know, AIOps and other things. But you know this notion of day two: okay, I'm shipping stuff in the cloud and I have to have zero ops on day two, three, four, et cetera. Uh, what's your take on that? Because that seems to be a hot area that customers and enterprises are getting into, understanding the new wave, riding it, and then going, wait a minute, pushing new code that's breaking something over there I built months ago. So there's this notion of the day two obstacle. But again, if you want to be zero ops, it's gonna be every day. >>Oh, I think you hit the nail on the head. I don't think there's going to be a difference between day one, day zero, and day two or day three. I think every day is going to be day zero. And the reason for that is because people will be shipping all the time, so your application will change all the time. So the application will always be fresh, so it will always be day zero. So zero ops has to be there all the time, not just on the first day. >>Great slogan! Every day is day zero, which means it's going well. I mean, there are no, no problems. So I gotta ask you the question, one of the big things that's coming up as well is this idea of an SRE, not new to the DevOps world, but as enterprises start to get into an SRE role, with hybrid and now edge becoming, you know, not just industrial, um, there's been a lot of activity going on on a distributed basis. So you're gonna need to have this kind of notion of large scale and zero ops, which essentially means automation, all those things you mentioned. Not everyone can afford that. Um, not every company can afford to have, you know, hardcore DevOps groups to manage their release process, all that stuff. So how are you helping customers, and how do you see this problem being solved? Because this is the accelerant people want: they want the easy button, they want the zero ops, but they just, they can't pipeline people fast enough to do this role. >>Yeah. What you're describing, the central differentiator we bring to customers, is this idea of an as-a-service experience with guaranteed outcomes. So that's what makes us different versus the traditional enterprise infrastructure software model, where people just consume software from vendors and system-integrate it themselves, and then are in charge of operations themselves and carry the technical risks themselves. We deliver everything as a service, with guaranteed outcomes, through a cloud native experience. That means guaranteed SLAs, predictable outcomes, continuous updates, continuous upgrades. Your on-prem infrastructure or your edge infrastructure is going to look and feel and behave exactly like a public cloud experience, where you're not going to have to worry about SREs or maintaining the underlying platform; it's being delivered to you as a service. That's a big part, that's a central part of what makes us different in this space.
>>That's a great value proposition. Can you just expand, give an example of a use case where you guys are doing that? Because this is something where I'm seeing a lot of people looking to go faster. You know, speed is good, but it also could kill, right? You can break things if you go too fast. >>Yeah, absolutely. I can give you several examples where we're doing this for very exciting companies. So one company is booking.com. Booking.com has a massive on-prem infrastructure, but they're also a massive public cloud consumer. And they decided they want to bring their own infrastructure to cloud levels of automation, cloud levels of sophistication; in other words, they want to have their AWS on prem, in their own data centers. And we're delivering this to them, with very high-end SLAs, exactly as a service, turnkey, where there is nothing for them to system-integrate or to tune and optimize and operate. It's being operated 24/7, with guaranteed SLAs and outcomes, by us, with a combination of software and expertise that we have, at massive scale and to the standards of booking.com. This is one example. Another example, and this is a very large company, um, is on the opposite side of the spectrum: a super successful software-as-a-service company in the security space, growing in leaps and bounds, with very high technical demands and security demands. And they want to have an on-prem cloud infrastructure to complement public clouds. Why? Because security is very important to them, latency is very important to them, control of the customer experience is very important to them, cost is very important to them. So for that reason, they want that in a network of data centers around the globe, and we provide that for them, turnkey, as a service, 24/7, which enables them to focus 100% on building their own service, on the functionality which matters to their customers, and not have to worry about the underlying cloud infrastructure in their data centers. All of that gets provided to them as a guaranteed experience for their end users. So those would be examples where we're doing that.
>>Great stuff; people are looking for that, and you're doing a great job. Adrian, great to see you. Thank you for coming on theCUBE here at DockerCon 2021. Um, take a minute to put a plug in for the company. What are you guys up to? What are you looking for? Hiring? Obviously you've got great traction with customers, congratulations on Lens. Um, give a quick update on what's going on. >>Happy, happy to give an update on the company. So here, here are the highlights. We're super excited about what we achieved last year and what we're up to this year. So last year, what we're proud of is, despite COVID, we haven't laid off a single person. We kept all the staff, and we hired staff. We have gained 160 new customers, many of them some of the world's largest and best companies, and 300 of our existing customers have expanded their business with us last year, which is fantastic. We were also very strong financially, fiscally cash flow positive. It was a tremendous, tremendous year for us. Uh, this year is very much a growth year for us, with an incredible focus on customer outcomes and customer experience. So what we are really, really digging in super hard on is to give customers the technology and the services that enable them to ship software faster and easier, and to dramatically increase the productivity of their development efforts on any cloud infrastructure, on prem and public clouds, using containers and Kubernetes, and to do that at scale. So we're extremely focused on customer outcomes, customer experience, and then the innovation that's required to make that happen. So you will continue to see a lot of innovation around Lens. The latest beta release of Lens that we brought out now has a cloud service and a lot of features where you can share all your cloud automation with your buddies, in, in, uh, in your development team. So Lens used to be a single-user product; now it's a multi-user and team-based product, which is fantastic, and it continues to grow very quickly. And then Container Cloud as a service, uh, is a very big bet that we're making on the infrastructure side. >>You've got quite the open source cloud company, Adrian. Congratulations. We've been, again, following you on the many waves of innovation: OpenStack, large-scale open source software. Congratulations, and thank you very much for coming on theCUBE. >>Yeah. >>Okay, DockerCon 2021 CUBE coverage. I'm John Furrier here with Adrian Ionel, CEO, co-founder and chairman of Mirantis, sharing his perspective on open source innovation and also key trends in the industry that are changing the game in accelerating cloud value, cloud scale, and cloud native applications. Thanks for watching.
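The "out of the box" availability and scaling Ionel describes above comes from Kubernetes' declarative model: you state a desired number of replicas and the cluster keeps that many healthy copies running. The sketch below, written against the official Kubernetes Python client, is a minimal illustration of that idea only; it is not anything Mirantis- or Lens-specific, and the deployment name, namespace, image, and replica counts are placeholder assumptions. It presumes a cluster reachable through a local kubeconfig.

from kubernetes import client, config

config.load_kube_config()  # assumes ~/.kube/config points at a dev cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-web"),  # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired copies; Kubernetes keeps this many healthy pods running
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder workload image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling is the same declarative idea: change the desired replica count and
# let the cluster converge on it, instead of starting or stopping servers by hand.
apps.patch_namespaced_deployment_scale(
    name="demo-web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

Tooling in the style of Lens sits on top of this same API, watching replica counts, pod restarts, and resource usage in real time rather than requiring developers to query them by hand.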

Published Date: May 27, 2021


DockerCon2021 Keynote


 

>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. 
So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. 
The power of teams is why we've made development teams central to Docker's mission to build tools and content development teams love to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down to them in teams is that the path from code to cloud can be a confusing one, riddle with multiple point products, tools, and images that need to be integrated and maintained an automated pipeline in order for teams to be productive. That's why a year and a half ago we refocused Docker on helping development teams make sense of all this specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best of breed third-party tools to help teams ship faster in short, to provide a collaborative application development platform. >>Everything a team needs to build. Sharon run create applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet and has been similar for us here at Docker. Our team had to adapt to working from home local lockdowns caused by the pandemic and other challenges. And despite all this together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the composed plugins building on these open source projects. We had powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL two and apple, Silicon and Docker, desktop and vulnerability scanning audit logs and image management and Docker hub. >>And finally delivering an easy to use well-integrated development experience with best of breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercialized fees, join our Docker verified publisher program and over 200 open source projects, join our Docker sponsored open source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's Docker con for example, the number of registered developers grew 80% to over 8 million. These developers created many new images increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses totaling 13 billion pulls a month. Now while the growth is exciting by Docker, we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington university school of medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers they use Docker because it gives them the ease of use choice of pipeline tools and speed of sharing so critical to their research. And most importantly to the lives of their patients stay tuned for another powerful customer story later in the keynote from Matt fall, VP of engineering at Oracle insights. >>So with this last year behind us, what's next for Docker, but challenge you this last year of force changes in how development teams work, but we felt for years to come. 
And what we've learned in our discussions with you will have long lasting impact on our product roadmap. One of the biggest takeaways from those discussions that you and your development team want to be quicker to adapt, to changes in your environment so you can ship faster. So what is DACA doing to help with this first trusted content to own the teams that can focus their energies on what is unique to their businesses and spend as little time as possible on undifferentiated work are able to adapt more quickly and ship faster in order to do so. They need to be able to trust other components that make up their app together with our partners. >>Docker is doubling down and providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration on a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So as you even been hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development team to quickly adapt to any team configuration all on prem hybrid, all work from home, helping them remain productive and focused on shipping third ecosystem integrations, those development teams that can quickly take advantage of innovations throughout the ecosystem. Instead of getting locked into a single monolithic pipeline, there'll be the ones able to deliver amps, which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best of breed tools, right? Integrated automated app pipelines. Furthermore, we'll be writing more public API APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations. Later in the keynote, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, access to content. They can trust, allows them to focus their coding efforts on what's unique and differentiated to that end Docker and our partners are bringing more and more trusted content to Docker hub Docker official images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming, languages, databases, and more. Furthermore, these are updated patch scan and certified frequently. So I said, no image is older than 30 days. >>Docker verified publisher images are published by more than 100 commercialized feeds. The image Rebos are explicitly designated verify. So the developers searching for components for their app know that the ISV is actively maintaining the image. Docker sponsored open source projects announced late last year features images for more than 200 open source communities. Docker sponsors these communities through providing free storage and networking resources and offering their community members unrestricted access repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. No, and finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. 
>>And for all these different types of content, we provide services for both development teams and ISP, for example, vulnerability scanning and digital signing for enhanced security search and filtering for discoverability packaging and updating services and analytics about how these products are being used. All this trusted content, we make available to develop teams for them directly to discover poll and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with J frog late last year. And today we're very pleased to announce our partnerships with Amazon and Miranda's for providing an integrated seamless experience for joint for our joint customers. Lastly, the container images themselves and this end to end flow are built on open industry standards, which provided all the teams with flexibility and choice trusted content enables development teams to rapidly build. >>As I let them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content as well as remote collaboration and ecosystem integrations later in the keynote. Now ecosystem partners are not only integral to the Docker experience for development teams. They're also integral to a great DockerCon experience, but please join me in thanking our Dr. Kent on sponsors and checking out their talks throughout the day. I also want to thank some others first up Docker team. Like all of you this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product, the Docker community of captains, community leaders, and contributors with your welcoming newcomers, enthusiasm for Docker and open exchanges of best practices and ideas talker, wouldn't be Docker without you. And finally, our development team customers. >>You trust us to help you build apps. Your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the tenant's developer capable of great individual feeds that can transform project. But I wonder if we, as an industry have perhaps gotten this wrong by putting so much emphasis on weight, on the individual as discussed at the beginning, great accomplishments like innovative responses to COVID-19 like landing on Mars are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is delivered tools and content developers love to help their team succeed and become 10 X teams. Thanks again for joining us, we look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks and be well. >>Hi, I'm Dana Lawson, VP of engineering here at get hub. And my job is to enable this rich interconnected community of builders and makers to build even more and hopefully have a great time doing it in order to enable the best platform for developers, which I know is something we are all passionate about. We need to partner across the ecosystem to ensure that developers can have a great experience across get hub and all the tools that they want to use. No matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. 
So let's dive in now, I know this may be hard for some of you to believe, but as a former CIS admin, some 21 years ago, working on sense spark workstations, we've come such a long way for random scripts and desperate systems that we've stitched together to this whole inclusive developer workflow experience being a CIS admin. >>Then you were just one piece of the siloed experience, but I didn't want to just push code to production. So I created scripts that did it for me. I taught myself how to code. I was the model lazy CIS admin that got dangerous and having pushed a little too far. I realized that working in production and building features is really a team sport that we had the opportunity, all of us to be customer obsessed today. As developers, we can go beyond the traditional dev ops mindset. We can really focus on adding value to the customer experience by ensuring that we have work that contributes to increasing uptime via and SLS all while being agile and productive. We get there. When we move from a pass the Baton system to now having an interconnected developer workflow that increases velocity in every part of the cycle, we get to work better and smarter. >>And honestly, in a way that is so much more enjoyable because we automate away all the mundane and manual and boring tasks. So we get to focus on what really matters shipping, the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers. And for y'all hearing this the first time go Google it. But that was the way we built our applications. We had to segment them on the server and give them resources. Today. We have Docker containers, these little mini Oasys and Docker images. You can do it multiple times in an orchestrated manner with the power of actions enabled and Docker. It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. 
I'm going to create an action's Yammel file. And if you don't Yammer, you know, actions, the metadata defines my important log stuff to capture and the input and my time out per parameter to pass and puts to the Docker container, get up a build image from your Docker file and run the commands in a new container. >>Using the Sigma image. The cool thing is, is you can use any Docker image in any language for your actions. It doesn't matter if it's go or whatever in today's I'm going to use a shell script and an input variable to print my important log stuff to file. And like I said, you know me, I love me some. So let's see this action in a workflow. When an action is in a private repo, like the one I demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to the super awesome action, but don't worry in the Guild marketplace, there are over 8,000 actions available, especially the most important one, that pizza action. So go try it out. Now you can do this in a couple of ways, whether you're doing it in your preferred ID or for today's demo, I'm just going to use the gooey. I'm going to navigate to my actions tab as I've done here. And I'm going to in my workflow, select new work, hello, probably load some workflows to Claire to get you started, but I'm using the one I've copied. Like I said, the lazy developer I am in. I'm going to replace it with my action. >>That's it. So now we're going to go and we're going to start our commitment new file. Now, if we go over to our actions tab, we can see the workflow in progress in my repository. I just click the actions tab. And because they wrote the actions on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging in the input stamp in the printed log. And we'll just wait for this to run. Hello, Mona and boom. Just like that. It runs automatically within our action. We told it to go run as soon as the files updated because we're doing it on push merge. That's right. Folks in just a few minutes, I built an action that writes an entry to a log file every time I push. So I don't have to do it manually. In essence, with automation, you can be kind to your future self and save time and effort to focus on what really matters. >>Imagine what I could do with even a little more time, probably order all y'all pieces. That is the power of the interconnected workflow. And it's amazing. And I hope you all go try it out, but why do we care about all of that? Just like in the demo, I took a manual task with both tape, which both takes time and it's easy to forget and automated it. So I don't have to think about it. And it's executed every time consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously, automation, developer productivity, but what is even more important to me is the developer happiness tools like BS, code actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome. >>And to get into that wonderful state that we call flow. According to research by UC Irvine in Humboldt university in Germany, it takes an average of 23 minutes to enter optimal creative state. What we call the flow or to reenter it after distraction like your dog on your office store. 
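The Docker container action and workflow that this GitHub segment walks through might look roughly like the sketch below; the file contents are written out as shell heredocs, and the action name, input, and Dockerfile entrypoint are illustrative assumptions rather than the exact files used in the demo:

```sh
# action.yml: metadata for a Docker container action with one input.
# The repo is assumed to also contain a Dockerfile whose entrypoint
# writes the message to the job log.
cat > action.yml <<'EOF'
name: 'Log important stuff'
description: 'Writes an important message to the job log'
inputs:
  message:
    description: 'Message to record'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.message }}
EOF

# .github/workflows/log.yml: run the action in this repo on every push.
mkdir -p .github/workflows
cat > .github/workflows/log.yml <<'EOF'
on: push
jobs:
  log:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ./
        with:
          message: 'Hello, Mona'
EOF
```

Because the action runs with `using: 'docker'`, the workflow builds the image from the repository's Dockerfile and passes the input as a container argument, which is the Docker-plus-Actions combination the talk describes.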
So staying in flow is so critical to developer productivity and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling intuitive collaboration and automation features we built in to get hub help developer, Sam flow, allowing you and your team to do so much more, to bring the benefits of automation into perspective in our annual October's report by Dr. Nicole, Forsgren. One of my buddies here at get hub, took a look at the developer productivity in the stork year. You know what we found? >>We found that public GitHub repositories that use the Automational pull requests, merge those pull requests. 1.2 times faster. And the number of pooled merged pull requests increased by 1.3 times, that is 34% more poor requests merged. And other words, automation can con can dramatically increase, but the speed and quantity of work completed in any role, just like an open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest because you don't need to do it, make the machines by elaborate by leveraging automation in their workflows teams, minimize manual work and reclaim that time for innovation and maintain that state of flow with development and collaboration. More importantly, their work is more enjoyable because they're not wasting the time doing the things that the machines or robots can do for them. >>And I remember what I said at the beginning. Many of us want to be efficient, heck even lazy. So why would I spend my time doing something I can automate? Now you can read more about this research behind the art behind this at October set, get hub.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem we at get hub are so honored to be the home of more than 65 million developers who build software together for everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data. Every speck of light you see here represents a contribution to an open source project, somewhere on earth. >>These arts reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of dev ops, SecOps and lots, or the new ops that are going to be happening. But today's development and ops teams are connected like ever before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers, automation helps us focus on what's important and to greatly accelerate innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements I'll say ever, including critical COVID-19 vaccine trials, as well as the first power flight on Mars. This past month, these breakthroughs were only possible because of the interconnected collaborative open source communities on get hub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating. So we collectively can give developers the experience. 
They deserve all of the automation and beautiful eye UIs that we can muster so they can continue to build the things that truly do change the world. Thank you again for having me today, Dr. Khan, it has been a pleasure to be here with all you nerds. >>Hello. I'm Justin. Komack lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything security ops on goal data analysis, all being put on the rockers. Software's eating the world. Of course, and this all make sense in that view, but they need help. One team. I told you it's shifted all our.net apps to run on Linux from windows, but their developers found the complexity of Docker files based on the Linux shell scripts really difficult has helped make these things easier for your teams. Your ones collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers have asked for a paved road experience. You want things to just work with a simple options to be there, but it's not just the paved road. You also want to be able to go off-road and do interesting and different things. >>Use different components, experiments, innovate as well. We'll always offer you both those choices at different times. Different developers want different things. It may shift for ones the other paved road or off road. Sometimes you want reliability, dependability in the zone for day to day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you knew those off-road abilities too. So you can really get under the hood and go and build something weird and wonderful and amazing. That gives you new options. Talk as an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as ISEI working opensource with the CNCF. We want to help you get your applications from your laptops, the clouds, and beyond, even into space. >>Let's talk about the key focus areas, that frame, what DACA is doing going forward. These are simplicity, sharing, flexibility, trusted content and care supply chain compared to building where the underlying kernel primitives like namespaces and Seagraves the original Docker CLI was just amazing Docker engine. It's a magical experience for everyone. It really brought those innovations and put them in a world where anyone would use that, but that's not enough. We need to continue to innovate. And it was trying to get more done faster all the time. And there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the area we haven't done enough and make things magical enough that we're really planning around now is that, you know, Docker images, uh, they're the key parts of your application, but you know, how do I do something with an image? How do I, where do I attach volumes with this image? What's the API. Whereas the SDK for this image, how do I find an example or docs in an API driven world? Every bit of software should have an API and an API description. And our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from, you know, from your code to the cloud local and remote, you can, you can use containers in this amazing and exciting way. 
>>One thing I really noticed in the last year is that companies that started off remote fast have constant collaboration. They have zoom calls, apron all day terminals, shattering that always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network. And it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container or just maybe let's collaborate on this together. Um, you know, fast collaboration on the analysts, fast iteration, fast working together, and he wants to share more. You want to share how to develop environments, not just an image. And we all work by seeing something someone else in our team is doing saying, how can I do that too? I can, I want to make that sharing really, really easy. Ben's going to talk about this more in the interest of one minute. >>We know how you're excited by apple. Silicon and gravis are not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. The M one support was the most asked for thing on our public roadmap, EFA, and we listened and share that we see really exciting possibilities, usership arm applications, all the way from desktop to production. We know that you all use different clouds and different bases have deployed to, um, you know, we work with AWS and Azure and Google and more, um, and we want to help you ship on prime as well. And we know that you use huge number of languages and the containers help build applications that use different languages for different parts of the application or for different applications, right? You can choose the best tool. You have JavaScript hat or everywhere go. And re-ask Python for data and ML, perhaps getting excited about WebAssembly after hearing about a cube con, you know, there's all sorts of things. >>So we need to make that as easier. We've been running the whole month of Python on the blog, and we're doing a month of JavaScript because we had one specific support about how do I best put this language into production of that language into production. That detail is important for you. GPS have been difficult to use. We've added GPS suppose in desktop for windows, but we know there's a lot more to do to make the, how multi architecture, multi hardware, multi accelerator world work better and also securely. Um, so there's a lot more work to do to support you in all these things you want to do. >>How do we start building a tenor has applications, but it turns out we're using existing images as components. I couldn't assist survey earlier this year, almost half of container image usage was public images rather than private images. And this is growing rapidly. Almost all software has open source components and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. And this was always the model with Docker compose. And it's a model that you're already et cetera, writing you trust Docker, official images. We know that they might go to 25% of poles on Docker hub and Docker hub provides you the widest choice and the best support that trusted content. We're talking to people about how to make this more helpful. 
We know, for example, that winter 69 four is just showing us as support, but the image doesn't yet tell you that we're working with canonical to improve messaging from specific images about left lifecycle and support. >>We know that you need more images, regularly updated free of vulnerabilities, easy to use and discover, and Donnie and Marie neuro, going to talk about that more this last year, the solar winds attack has been in the, in the news. A lot, the software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open-source components. We're seeing more software supply chain attacks being targeted as the supply chain, because it's often an easier place to attack and production software. We need to be able to use this external code safely. We need to, everyone needs to start from trusted sources like photography images. They need to scan for known vulnerabilities using Docker scan that we built in partnership with sneak and lost DockerCon last year, we need just keep updating base images and dependencies, and we'll, we're going to help you have the control and understanding about your images that you need to do this. >>And there's more, we're also working on the nursery V2 project in the CNCF to revamp container signings, or you can tell way or software comes from we're working on tooling to make updates easier, and to help you understand and manage all the principals carrier you're using security is a growing concern for all of us. It's really important. And we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products ever see without deep partnerships with our community to cloud is RA and the cloud providers aware most of you ship your occasion production and simple routes that take your work and deploy it easily. Reliably and securely are really important. Just get into production simply and easily and securely. And we've done a bunch of work on that. And, um, but we know there's more to do. >>The CNCF on the open source cloud native community are an amazing ecosystem of creators and lovely people creating an amazing strong community and supporting a huge amount of innovation has its roots in the container ecosystem and his dreams beyond that much of the innovation is focused around operate experience so far, but developer experience is really a growing concern in that community as well. And we're really excited to work on that. We also uses appraiser tool. Then we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker hub to work on, um, Kubernetes fully. And, um, we're also using many of the other projects are Argo from atheists. We're spending a lot of time working with Microsoft, Amazon right now on getting natural UV to ready to ship in the next few. That's a really detailed piece of collaboration we've been working on for a long term. Long time is really important for our community as the scarcity of the container containers and, um, getting content for you, working together makes us stronger. Our community is made up of all of you have. Um, it's always amazing to be reminded of that as a huge open source community that we already proud to work with. It's an amazing amount of innovation that you're all creating and where perhaps it, what with you and share with you as well. Thank you very much. And thank you for being here. 
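The Snyk-powered `docker scan` command referred to in this keynote can be exercised roughly as follows; the image name is a placeholder:

```sh
# One-time acceptance of the Snyk license for the scanner.
docker scan --accept-license --version

# Scan a locally built image for known vulnerabilities.
docker scan myorg/myapp:1.0

# Passing the Dockerfile lets the scanner also suggest base-image upgrades.
docker scan --file Dockerfile myorg/myapp:1.0
```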
>>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster and turn your application delivery into something that makes you a 10 X team. What we're hearing from you, the developers using Docker everyday fits across three common themes that we hear consistently over and over. We hear that your time is super important. It's critical, and you want to move faster. You want your tools to get out of your way, and instead to enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster is really hard. It's hard to discover. It's hard to find high quality content that you can trust that, you know, passes your test and your configuration needs. >>And it's hard to create good content as well. And you're looking for more safety, more guardrails to help guide you along that way so that you can focus on creating value for your company. Secondly, you're telling us that it's a really far to collaborate effectively with your team and you want to do more, to work more effectively together to help your tools become more and more seamless to help you stay in sync, both with yourself across all of your development environments, as well as with your teammates so that you can more effectively collaborate together. Review each other's work, maintain things and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CGI pipeline, or the cloud for production, and you want that micro service to provide that consistent experience everywhere you go so that you have similar tools, similar environments, and you don't need to worry about things getting in your way, but instead things make it easy for you to focus on what you wanna do and what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. 
As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. >>We want it to enable you to share your whole modern development environment, your whole setup from DACA, with your team on any operating system, we'll be launching a limited beta of dev environments in the coming month. And a GA dev environments will be ID agnostic and supporting composts. This means you'll be able to use an extend your existing composed files to create your own development environment in whatever idea, working in dev environments designed to be local. First, they work with Docker desktop and say your existing ID, and let you share that whole inner loop, that whole development context, all of your teammates in just one collect. This means if you want to get feedback on the working progress change or the PR it's as simple as opening another idea instance, and looking at what your team is working on because we're using compose. You can just extend your existing oppose file when you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. 
>>So it's actually the whole environment you're working with module one service that doesn't really understand what it's doing alone. And with that, let's jump into a quick demo. So you can see here, two dev environments up and running. First one here is the same container dev environment. So if I want to go into that, let's see what's going on in the various code button here. If that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run that straight in that second application I have here is one that's opened up in compose, and I can see that I've also got my backend, my front end and my database. So I've got all my services running here. So if I want, I can open one or more of these in a dev environment, meaning that that container has the context that dev environment has the context of the whole application. >>So I can get back into and connect to all the other services that I need to test this application properly, all of them, one unit. And then when I've made my changes and I'm ready to share, I can hit my share button type in the refund them on to share that too. And then give that image to someone to get going, pick that up and just start working with that code and all my dependencies, simple as putting an image, looking ahead, we're going to be expanding development environments, more of your dependencies for the whole developer worst space. We want to look at backing up and letting you share your volumes to make data science and database setups more repeatable and going. I'm still all of this under a single workspace for your team containing images, your dev environments, your volumes, and more we've really want to allow you to create a fully portable Linux development environment. >>So everyone you're working with on any operating system, as I said, our MVP we're coming next month. And that was for vs code using their dev container primitive and more support for other ideas. We'll follow to find out more about what's happening and what's coming up next in the future of this. And to actually get a bit of a deeper dive in the experience. Can we check out the talk I'm doing with Georgie and girl later on today? Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications while the dev environment is like the workbench around what you're building. The application itself has all the different components, libraries, and frameworks, and other code that make up the application itself. And we hear developers saying all the time things like, how do they know if their images are good? >>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Docker files and how do they keep their images secure? And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. 
And finally it simplifies your discovery of the best building blocks by making it easy for you to find things that you know, you can trust so that you can incorporate them into your applications and move on and on the right. You can see some examples of the publishers that are involved in Docker, official images and our Docker verified publisher program. Now I'm pleased to introduce you to marina. Kubicki our senior product manager who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani, >>Mario Andretti, who is a famous Italian sports car driver. One said that if everything feels under control, you're just not driving. You're not driving fast enough. Maya Andretti is not a software developer and a software developers. We know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control and a Docker. As we continue talking to our, to the developers, what we're realizing is that in order to reach that speed, the developers are the, the, the development community is looking for the building blocks and the tools that will, they will enable them to drive at the speed that they need to go and have the trust in those building blocks. And in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to, to address those concerns, uh, we're realizing that we can pursue them in a number of different venues, including creating reliable content, including creating partnerships that expands the options for the reliable content. >>Um, in order to, in a we're looking at creating integrations, no link security tools, talk about the reliable content. The first thing that comes to mind are the Docker official images, which is a program that we launched several years ago. And this is a set of curated, actively maintained, open source images that, uh, include, uh, operating systems and databases and programming languages. And it would become immensely popular for, for, for creating the base layers of, of the images of, of the different images, images, and applications. And would we realizing that, uh, many developers are, instead of creating something from scratch, basically start with one of the official images for their basis, and then build on top of that. And this program has become so popular that it now makes up a quarter of all of the, uh, Docker poles, which essentially ends up being several billion pulse every single month. >>As we look beyond what we can do for the open source. Uh, we're very ability on the open source, uh, spectrum. We are very excited to announce that we're launching the Docker verified publishers program, which is continuing providing the trust around the content, but now working with, uh, some of the industry leaders, uh, in multiple, in multiple verticals across the entire technology technical spec, it costs entire, uh, high tech in order to provide you with more options of the images that you can use for building your applications. And it still comes back to trust that when you are searching for content in Docker hub, and you see the verified publisher badge, you know, that this is, this is the content that, that is part of the, that comes from one of our partners. And you're not running the risk of pulling the malicious image from an employee master source. 
>>As we look beyond what we can do for, for providing the reliable content, we're also looking at some of the tools and the infrastructure that we can do, uh, to create a security around the content that you're creating. So last year at the last ad, the last year's DockerCon, we announced partnership with sneak. And later on last year, we launched our DACA, desktop and Docker hub vulnerability scans that allow you the options of writing scans in them along multiple points in your dev cycle. And in addition to providing you with information on the vulnerability on, on the vulnerabilities, in, in your code, uh, it also provides you with a guidance on how to re remediate those vulnerabilities. But as we look beyond the vulnerability scans, we're also looking at some of the other things that we can do, you know, to, to, to, uh, further ensure that the integrity and the security around your images, your images, and with that, uh, later on this year, we're looking to, uh, launch the scope, personal access tokens, and instead of talking about them, I will simply show you what they look like. >>So if you can see here, this is my page in Docker hub, where I've created a four, uh, tokens, uh, read-write delete, read, write, read only in public read in public creeper read only. So, uh, earlier today I went in and I, I logged in, uh, with my read only token. And when you see, when I'm going to pull an image, it's going to allow me to pull an image, not a problem success. And then when I do the next step, I'm going to ask to push an image into the same repo. Uh, would you see is that it's going to give me an error message saying that they access is denied, uh, because there is an additional authentication required. So these are the things that we're looking to add to our roadmap. As we continue thinking about the things that we can do to provide, um, to provide additional building blocks, content, building blocks, uh, and, and, and tools to build the trust so that our DACA developer and skinned code faster than Mario Andretti could ever imagine. Uh, thank you to >>Thank you, marina. It's amazing what you can do to improve the trusted content so that you can accelerate your development more and move more quickly, move more collaboratively and build upon the great work of others. Finally, we hear over and over as that developers are working on their applications that they're looking for, environments that are consistent, that are the same as production, and that they want their applications to really run anywhere, any environment, any architecture, any cloud one great example is the recent announcement of apple Silicon. We heard from developers on uproar that they needed Docker to be available for that architecture before they could add those to it and be successful. And we listened. And based on that, we are pleased to share with you Docker, desktop on apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying an ARM-based cloud environments and having a consistent architecture across your development and production or using multi-year architecture support, which enables your whole team to collaborate on its application, using private repositories on Docker hub, and thrilled to introduce you to Hughie cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. >>Senior director of product management at Docker. 
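The read-only access token behaviour demonstrated in this segment boils down to the following kind of session; the user, repository, and the environment variable holding the token are hypothetical:

```sh
# Log in with a scoped personal access token instead of a password.
echo "$DOCKER_PAT_READONLY" | docker login -u myuser --password-stdin

# Pulling succeeds: the token grants read access.
docker pull myuser/demo-app:latest

# Pushing is rejected: the token has no write scope,
# so Docker Hub answers with an access-denied error.
docker push myuser/demo-app:latest
```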
And I'd like to jump straight into a demo. This is the Mac mini with the apple Silicon processor. And I want to show you how you can now do an end-to-end arm workflow from my M one Mac mini to raspberry PI. As you can see, we have vs code and Docker desktop installed on a, my, the Mac mini. I have a small example here, and I have a raspberry PI three with an led strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here, builds the application. We build the image with the Docker, build X command to make the image compatible for all raspberry pies with the arm. 64. Part of this build is built with the native power of the M one chip. I also add the push option to easily share the image with my team so they can give it a try to now Dr. >>Creates the local image with the application and uploads it to Docker hub after we've built and pushed the image. We can go to Docker hub and see the new image on Docker hub. You can also explore a variety of images that are compatible with arm processors. Now let's go to the raspberry PI. I have Docker already installed and it's running Ubuntu 64 bit with the Docker run command. I can run the application and let's see what will happen from there. You can see Docker is downloading the image automatically from Docker hub and when it's running, if it's works right, there are some nice colors. And with that, if we have an end-to-end workflow for arm, where continuing to invest into providing you a great developer experience, that's easy to install. Easy to get started with. As you saw in the demo, if you're interested in the new Mac, mini are interested in developing for our platforms in general, we've got you covered with the same experience you've come to expect from Docker with over 95,000 arm images on hub, including many Docker official images. >>We think you'll find what you're looking for. Thank you again to the community that helped us to test the tech previews. We're so delighted to hear when folks say that the new Docker desktop for apple Silicon, it just works for them, but that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing composed V2 that makes compose a first-class citizen in the Docker CLI you no longer need to install a separate composed biter in order to use composed, deploying to production is simpler than ever with the new compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. If you're interested in running slightly different services, when you're debugging versus testing or, um, just general development, you can manage that all in one place with the new composed service to hear more about what's new and Docker desktop, please join me in the three 15 breakout session this afternoon. >>And now I'd love to tell you a bit more about bill decks and convince you to try it. If you haven't already it's our next gen build command, and it's no longer experimental as shown in the demo with built X, you'll be able to do multi architecture builds, share those builds with your team and the community on Docker hub. With build X, you can speed up your build processes with remote caches or build all the targets in your composed file in parallel with build X bake. And there's so much more if you're using Docker, desktop or Docker, CE you can use build X checkout tonus is talk this afternoon at three 45 to learn more about build X. 
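The Apple Silicon to Raspberry Pi workflow in this demo maps onto the buildx CLI roughly as follows; the builder name, image name, and tag are placeholders rather than the ones used on stage:

```sh
# On the M1 Mac mini: create a buildx builder once, then build the image
# for both Arm and x86 in one pass and push it to Docker Hub.
docker buildx create --use --name multiarch
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  -t myuser/led-rainbow:latest \
  --push .

# On the Raspberry Pi (64-bit OS with Docker installed): the matching
# arm64 variant of the same tag is pulled and run automatically.
docker run --rm myuser/led-rainbow:latest
```

Publishing both platforms under one tag is what lets the same image run on the Mac, the Pi, and an Arm-based cloud instance without any per-machine changes.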
And with that, I hope everyone has a great DockerCon, and back over to you, Donnie.
>>Thank you, Youi. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens to make it easier for you to have a secure CI pipeline. We've announced Docker dev environments to improve collaboration with your team. We shared Docker Desktop on Apple silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose version 2, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where you can run Docker more easily.
>>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond, and we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker.
>>I'm Matt Falk, I'm the head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So who am I? Like many of you, I'm a software developer; I've been a software developer at about seven companies so far, and now I'm a head of engineering. So I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions and code reviews, and in my free time I still like to dabble in things like Project Euler. So who is Orbital Insight, and what do we do? Orbital Insight is a large data supplier and analytics provider: we take geospatial data from anywhere on the planet, from any overhead sensor, and translate it into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data.
>>And we build them specifically to determine natural and human surface-level activity anywhere on the planet. What that really means is we take any type of data associated with a latitude and longitude and we identify patterns so that we can detect anomalies. Everything that we do is about identifying those patterns to detect anomalies. So more specifically, what type of problems do we solve?
So supply chain intelligence. This is one of the use cases that we like to talk about a lot; it's one of our main primary verticals that we go after right now, and as Scott mentioned earlier, it had a huge impact last year when COVID hit. Supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? For us, we can do things like track the movement of trucks.
>>So identifying trucks moving from one location to another, in aggregate. We can do the same thing with foot traffic, looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people are moving between them or going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said, last year with COVID everything changed; supply chains in particular changed incredibly, and it was hugely important for customers to know where their goods or their products were coming from and where they were going, where there were disruptions in their supply chain, and how that affected their overall supply and demand. So using our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are coming from or going to.
>>So what does our team look like? My team is currently about 50 engineers, spread across four different teams, structured like this. The first team we have is infrastructure engineering, and this team largely deals with deploying our Docker images using Kubernetes. So this team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system, our platform. Our platform engineering team produces these microservices: they produce microservice Docker images, they develop and test with them locally, their entire environments are Dockerized, and they hand the images over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and had 14 million Docker pulls.
>>We've had 14 million Docker pulls over the lifetime of the company, just a few stats about us. But what I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation: it's really helpful for a lot of software engineering problems to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform, or different pieces of your code, in different ways; it allows you to scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things that we can largely do now is use Docker images as that interface.
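As a minimal sketch of what that image-as-interface handoff between teams might look like from the command line; the registry path, image tag, deployment, and container names here are hypothetical, not details from the talk.

    # Platform engineering: build and publish a versioned microservice image
    docker build --tag myorg/geo-service:1.4.2 .
    docker push myorg/geo-service:1.4.2

    # Infrastructure engineering: roll the published image into the Kubernetes deployment
    kubectl set image deployment/geo-service geo-service=myorg/geo-service:1.4.2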
So instead of having an entire platform where all teams are talking to each other and everything is mishmashed in a monolithic application, we can now say this team is only able to talk to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the next team, and that really allows us to scale our development and be much more efficient.
>>Also, I'd like to say we are hiring. We have a number of open roles, about 30 open roles in our engineering team that we're looking to fill by the end of this year. So if any of this sounds really interesting to you, please reach out after the presentation.
>>So what does our platform do, really? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First off: where do you want to look? We define this as what we call an AOI, or area of interest; you can think of it as a polygon drawn on the map. We have a curated data set of almost 4 million AOIs, which you can search and use for your analysis, but you're also free to build your own. The second question is what you want to look for. We do this with the more interesting part of our platform, our machine learning and AI capabilities. We have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and which locations people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button, and that's how you determine what you want to look for.
>>Lastly, you determine when you want to find what you're looking for. Do you want to look at the next three hours? The last week? Every month for the past two? Whatever the time cadence is, you decide that, you hit go, and out pops a time series. That time series tells you, for where you wanted to look and what you wanted to look for, how many, or what percentage, of the thing you're looking for appears in that area. Again, we do all of this to work toward patterns. We use all this data to produce a time series; from there we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable for identifying where things change. So we can answer these questions looking at a particular operating facility: what is happening with the level of activity at that facility, where people are coming from, where they're going to after visiting that particular facility, and when and where that changes. Here you can see a picture of our platform; it's actually showing all the devices in Manhattan over a period of time, in more of a heat-map view, so you can see the hotspots in the area.
>>So really, and this is the heart of the talk: what happened in 2020? For me, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective but from an entire company perspective. For us, the motivation really became to make sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other.
A lot of times, when you want to increase innovation, that's going to increase your costs; the challenge last year was how to do both simultaneously. So here are a few stats for you from our team. In Q1 of last year we were spending almost $600,000 per month on compute costs. Prior to COVID happening, that wasn't hugely a concern for us: it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient.
>>The second one is flexibility. We were deployed in a single cloud environment, and while we were cloud ready, and that was great, we wanted to be more flexible. We wanted to be in more cloud environments so that we could reach more customers, and eventually get onto classified networks, extending our customer base as well. From a custom analytics perspective, this is where we get into our traction: last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number kept steadily increasing despite us trying to lower our costs; we didn't want the lower costs to come at the sacrifice of our user base. Lastly, a particular percentage here that I'll say definitely needs to be improved: 75% of our projects never fail. This is where we start to get into the stability of our platform.
>>Now, I'm not saying that 25% of our projects fail outright. The way we measure this is: if you have a particular project or computation that runs every day, and any one of those runs fails, we count that as a failure, because from an end-user perspective that's an issue. So this is something that we knew we needed to improve on to make our platform more stable, and it's something we really focused on last year. So where are we now? Coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development across the entire engineering team for about four weeks and had everyone focused on reducing our compute costs in the cloud. We got it down to 200K over the period of a few months.
>>And for the next 12 months, we hit that number every month. This is huge for us; this is extremely important, like I said, in the COVID time period where cost and operating efficiency were everything. For us to do that was a huge accomplishment last year and something we'll keep doing going forward. One thing I would really like to highlight here, too, is what allowed us to do that. First off, being in the cloud and being able to migrate things like that was one piece, and we were able to use the different cloud services in a more efficient way. We had very detailed tracking of how we were spending, we increased our data retention policies, and we optimized our processing. However, one additional piece was switching to new technologies; in particular, we migrated to GitLab CI/CD.
>>And this is something where, because we use Docker, it was extremely, extremely easy. We didn't have to go build new code, containers, or repositories, or change our code in order to do this. We were simply able to migrate the containers over and start using the new CI system, so much so, in fact, that we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds. We were able, over the last nine months, to stand up and operate in the second cloud environment.
And again, this is something that Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider.
>>And then our Docker containers just worked. We could move them to another environment, be up and running, and our platform was ready to go. From a traction perspective, we're about a third of the way through the year at this point, and we've already exceeded the number of customer analytics we produced last year. This is thanks to a ton more algorithms, that whole suite of new analytics that we've been able to build over the past 12 months, and we'll continue to build going forward. So this is a really great outcome for us, because we were able to show that our costs stayed down while our analytics and customer traction kept growing. And from a stability perspective, we improved from 75% to 86%; not quite 99, or three nines or four nines, but we are getting there. This is actually thanks to containerizing and modularizing different pieces of our platform so that we could scale up in different areas, and that allowed us to increase that stability. This piece of the code works over here and talks through an interface to the rest of the system; we can scale this piece up separately from the rest of the system, and that allows us to much more easily identify issues in the system, fix them, and then correct the system overall. So basically, this is a summary of where we were last year, where we are now, and how much more successful we are now because of the issues that we went through last year, largely brought on by COVID.
>>And this is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability for a distribution warehouse in Salt Lake City; it's right in the center of the screen here, the nice orange-red center. That's the distribution warehouse, and all the lines and dots outside of that are showing where people and trucks are moving from that location. This is really helpful for supply chain companies because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.

Published Date : May 27 2021

Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>All right, we're five minutes after the hour, so all aboard, whoever's coming aboard. Welcome everyone to the tutorial track for our Launchpad event. For the next couple of hours, we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks; I run curriculum development for Mirantis. And
>>I'm Bruce Basil Matthews. I'm the Western regional solutions architect for Mirantis, and welcome everyone to this lovely Launchpad event.
>>We're lucky to have you with us, Bruce; at least somebody on the call knows something about Docker Enterprise Container Cloud. Speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us. Docker Enterprise Container Cloud is Mirantis' brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce?
>>No, just that I think we're trying to give you a foundation against which to give this stuff a go yourself. That's really the key to this thing: to provide some mini training and education in a very condensed period. So,
>>yeah, that's exactly what you're going to see in the series of videos we have today. We're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today you'll be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. Just a little bit of logistics for the session: we're going to run through these tutorials twice. We'll do one run-through starting seven minutes ago, up until, I guess, 10:15 Pacific time, and then we'll run through the whole thing again. So if you've got colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, at 10:15 Pacific time we're going to do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues you want to pull in for a second chance to see this stuff, we're going to do it all twice in this session. Any logistics I should add, Bruce?
>>No, I think that's pretty much what we had to nail down here. But let's zoom-dash into those feature films.
>>Let's do it. And like I said, don't be shy: feel free to ask questions in the chat; our engineers, Bruce, and myself are standing by to answer them. So let me just tee up the first video here and walk through it. And here we go. Our first video is going to be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as your mothership, right? This is what you're going to use to deploy all those little child clusters that you'll use as Kubernetes clusters downstream. So the management cluster is always our first step. Let's jump in there now; we just have to give it a brief little pause while the video starts.
>>The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the provider-specific architecture, in this case AWS, and the LCM components, and the UCP child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing the bootstrap node and its dependencies and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. The third phase is preparing the AWS credentials and setting up the AWS environment. The fourth is configuring the deployment, defining things like the machine types, and the fifth phase is running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go; there are no credentials already set up on that particular node. Now we're checking through AWS to make sure that for the account we want to use we have the correct credentials and the correct roles set up, and validating that there are no instances currently running in EC2. That's not completely necessary, but it helps keep things clean and tidy from an IAM perspective. Right, so next we're going to check that from the bootstrap node we can reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors there. Now we're going to start setting up the bootstrap node itself. We're downloading the container cloud release and the bootstrap script, and next we're going to run it and deploy it, changing into that bootstrap folder and just having a look at what's there. Right now we have no license file, so we're going to get the license file through the Mirantis downloads site, signing in here, downloading that license file, and putting it into the bootstrap folder. Once we've done that, we can go ahead with the rest of the deployment. See that the file is there. Then we're again checking that we can reach EC2, which is extremely important for the deployment; these are just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials. First we need the root credentials, which we're going to export on the command line; this is used to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running the AWS policy creation: part of that is creating our bootstrap user and the necessary policy files on top of AWS, generally preparing the environment using a CloudFormation script, which you'll see in a second will give us the new policy confirmations. We're just waiting for it to complete, and there, it's done. Let's have a look at the AWS console: you can see the creation has completed. Now we can go and get the credentials for the user we created. Go to the IAM console, go to the new user that's been created, go to the section on security credentials, and create new keys. Download that information, the access key ID and the secret access key, which we will then export on the command line. Okay, a couple of things to note.
Ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region; we'll show that together in a second. Okay, export the access key and secret key, and let's kick it off. This process takes between thirty and forty-five minutes and handles all the AWS dependencies for you, and as we go through, we'll show you how you can track it, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node; that cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and the local cluster is shut down, essentially moving itself over. Okay, the local cluster is built; we're just waiting for the various objects to get ready, standard Kubernetes objects here. We'll speed up this process a little bit just for demonstration purposes. There we go: the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances here in the AWS console on the right. The failures that you're seeing around failing to get the IP for the bastion are just a wait state while we wait for AWS to create the instance. Okay, and there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. See the scaling up of controllers in the bootstrap cluster? That indicates we're starting all of the controllers in the new cluster. Almost there; now we're just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration. As soon as this is completed, the last phase will be to deploy StackLight, the monitoring tool set, into the new cluster. There we go, the StackLight deployment has started. Coming to the end of the deployment now; final phase of the deployment, and we are done. You'll see at the end they provide us the details for the UI login, so there's a Keycloak login; you can modify that initial default password as part of the configuration, as set out in the documentation. The console is up and we can log in. Thank you very much for watching.
>>Excellent. So in that video our wonderful field CTO Sean O'Mara bootstrapped a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? Now we've got this management cluster installed; what's next?
>>So primarily it's the foundation for being able to deploy the regional clusters that will then allow you to support child clusters. Where the next piece of what we're going to show comes into play, I think with Sean O'Mara doing this, is the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the management cluster we just created with the bootstrap.
>>Right. So this cluster isn't yet for workloads; this is just for bootstrapping up the downstream clusters. Those are what we're going to use for workloads.
>>Exactly. Yeah.
And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting. But anyway, you can tell him I said that.
>>He's watching right now, Bruce-o. Good. Um, cool. So just to make sure I understood what Sean was describing there: that bootstrapper node that you ran Docker Enterprise Container Cloud from to begin with is actually creating a kind Kubernetes-and-Docker deployment locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes a three-manager Kubernetes cluster there, and then it copies itself over to those Kubernetes managers.
>>Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says it's pivoting: pivoting from my local kind deployment of the cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or on bare metal. The targeting is abstracted.
>>Yeah, and those are the three environments that we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need it afterwards; it's just temporary to get things bootstrapped, and then you manage things from the management cluster, on AWS in this example?
>>Yeah. The seed cloud post-bootstrap is not required anymore, and there's no interplay between them after that, so there are no dependencies on any of the clouds that get created thereafter.
>>Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's a similar temporary, transient bootstrapping model. Cool. Excellent. What about config there? It looked like there wasn't a ton, right? It looked like you had to set up some AWS parameters like credentials and region and stuff like that, but other than that it looked heavily scriptable, like there wasn't a ton of point and click.
>>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted toward a small, medium, or large deployment, and by editing that single file and then gathering the license file and all of the things that Sean went through, it makes it fairly easy to script this.
>>And if I understood correctly as well, that three-manager footprint for your management cluster, that's the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to see that go down.
>>Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer.
>>I think that's a theme that will come back throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults, just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on.
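As a rough sketch of how scriptable that bootstrap flow is, the whole thing boils down to something like the following; the folder, script, and file names here are approximations of what the video narrated, not verified product syntax.

    # Export the AWS credentials created for the bootstrap user (values are placeholders)
    export AWS_ACCESS_KEY_ID=AKIA...
    export AWS_SECRET_ACCESS_KEY=...

    # Work from the unpacked bootstrap folder, with the downloaded license file in place
    cd kaas-bootstrap
    cp ~/Downloads/mirantis.lic .

    # Edit the machine template (instance types, AMI for your region), then run the bootstrap
    ./bootstrap.sh all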
Any interesting questions you want to call out from the chat, Bruce?
>>Well, there was one that we had responded to earlier about the fact that it's a management cluster that can then deploy either a regional cluster or a local child cluster. The child clusters, in each case, host the application services.
>>Right. So at this point we've got, in some sense, the simplest architecture for Docker Enterprise Container Cloud: we've got the management cluster, and we're going to go straight to a child cluster. In the next video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, for when you need to manage regions, for example across AWS regions or across providers.
>>Yeah, and that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observability support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them.
>>There are a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves; I strongly encourage you to do so. That's in the docs, which Dale has helpfully provided links for (thank you, Dale), and it's all publicly available right now. So just head on into the docs like Dale provided here, and you can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials. There was also a question about deploying this to Azure. Not at GA, not at this time.
>>Yeah, although that is coming. That's going to be in a very near-term release.
>>I didn't want to make promises for product, but I'm not too surprised that it's going to be targeted. Cool. Okay, any other thoughts on this one?
>>No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that was put into the chat, giving you the step by step, it makes it fairly straightforward to try this yourselves.
>>I strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in mind of where we are in the lifecycle here. Creating this management cluster, and stop me if I'm wrong, Bruce, is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment. What we're going to start seeing next is creating child clusters, and this is what you're going to be doing over and over and over again: when you need to create a cluster for this dev team, or whatever other team it is that needs commodity Docker Enterprise clusters, you create these easily, on the fly. So that was once, to set up Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're going to create over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster on Docker Enterprise Container Cloud.
>>Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and how to update the cluster
when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects; Mary only has access to Development. You get a list of the available projects that you have access to, what clusters have been deployed at the moment (there are none yet), the SSH keys associated with Mary and her team, the cloud credentials that allow you to access the various clouds you can deploy clusters to, and finally the different releases that are available to us. We can also switch from dark mode to light mode, depending on your preferences. Right, let's now set up SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste the public key into the upload key block, or upload the key if we have the file available on our local machine. A simple process. To create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Again, very simply: go to the Clusters tab, hit the Create Cluster button, and give the cluster a name. Then select the provider; we only have access to AWS in this particular deployment, so we'll stick to AWS. Select the region, in this case US West 1. Release version 5.7 is the current release, and attach Mary's SSH key, which is necessary. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information. We can change this should we wish to; we'll leave the defaults for now. Then we choose which StackLight components I would like to deploy into my cluster. For this, I'm enabling StackLight with logging, and I can set up the retention sizes and retention times, and even at this stage add any custom alerts for the watchdogs. You can also set up email alerting, for which I will need my smart host details and authentication details, and Slack alerts. Now I'm defining the cluster. At this point all that's happened is the cluster has been defined; I now need to add machines to that cluster. I'll begin by clicking the Create Machine button within the cluster definition. Select manager, select the number of machines (three is the minimum), select the instance size that I'd like to use from AWS, and, very importantly, use the correct AMI for the region. I can then decide on the root device size. There we go, my three machines are creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll add two. Once again, the AMI is extremely important; this will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see it's pending; the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. You can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway, the necessary items on AWS, and we have no warnings at this stage. This will then run for a while. We're one minute in; we can click through and check the status of the machine builds individually, so we can check the machine info and the details of the machines that we've assigned.
We can also see any events pertaining to the machines, like this one, which is normal: the Kubernetes components are waiting for the machines to start. Go back to clusters; we're moving ahead now, and we can see it's in progress. Five minutes in, the NAT gateway is at this stage, the machines have been built and assigned, and they pick up their AWS IDs. There we go, the machine has been created; we can see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process end to end takes about fifteen minutes. If we run the clock forward, you'll notice that as the machines continue to build they go from in progress to ready. As soon as we have ready on all three managers and both workers, we can carry on, and we can see that now we've reached the point where the cluster itself is being configured. And there we go, the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. Looking into the configured cluster, we can modify the cluster and get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it; it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use the SSO, and we give Mary's username and password once again. This is an unlicensed cluster; we could license it at this point or just skip it. There we have the UCP dashboard; you can see it has been up for a little while and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has been automatically preconfigured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster, for example Kubernetes cluster information, namespaces, deployments, nodes. If we look at nodes, we can get a view of the resource utilization of this cluster; there is very little running in it. There's a general dashboard for the Kubernetes cluster, and all of this is configurable: you can modify these for your own needs or add your own dashboards, and they are scoped to the cluster, so they are available to all users who have access to this specific cluster. All right, scaling the cluster and adding a node is as simple as the process of adding a node to the cluster in the first place. We go to the cluster, go into the details for the cluster, and select Create Machine. Once again, we need to ensure that we put in the correct AMI and any other options we like. You can create different-sized machines, so it could be a larger node or bigger disks, and you'll see that the worker has been added, starting from the provisioning state, and shortly we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster and select the node we would like to remove.
Okay, I just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button will become available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the next available release is 5.7.1. Here I'm kicking off the update. In the background we will cordon and drain each node and slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case the update impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; in fact, two in this case, and one has completed already. In a few minutes we'll see that the upgrade has been completed. There we go, upgrade done. If your workloads are built using proper cloud-native Kubernetes standards, there will be no impact.
>>Excellent. So at this point we've got a cluster ready to start taking our Kubernetes workloads; we can start deploying our apps to that cluster. Watching that video, the thing that jumped out to me first was the inputs that go into defining this workload cluster. We have to make sure we're using an appropriate AMI; that kind of defines the substrate of what we're going to be deploying our cluster on top of. But there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is going to bootstrap all the components that you need. So all we have is a really simple bunch of boxes that we're deploying these things on top of. One thing that didn't get dug into too much in the video, but is sort of implied, and Bruce, maybe you can comment on this, is that release that Sean had to choose for his cluster when creating it. That release was also the thing we had to touch when we wanted to upgrade the cluster. If you have really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components: Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters that we deploy. So as far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that we've tested out and made sure work well in production environments.
>>Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that the fixes are documented and upstreamed to the open source community, and that we then test for scalability and reliability in high-availability configurations for the clusters themselves, the hosts of your containers. And I think one of the key benefits that we provide is the ability to let you know, online, hi, we've got an update for you, and it fixes something that maybe you had asked us to fix. That all comes to you online as you're managing your clusters, so you don't have to think about it; it just comes as part of the product.
>>You just have to click on yes, please give me that update. And it's not just the individual components, but again,
it's that validated stack, right? Not just that components X, Y, and Z work, but that they all work together effectively, scalably, securely, reliably. Cool. So at that point, once we started creating that workload child cluster, of course we bootstrapped good old Universal Control Plane, Docker Enterprise, on top of it. Sean had the classic comment there: you'll see a few warnings and errors or whatever when you're setting up UCP; don't panic, just let it do its job, and it will converge all its components after just a minute or two. We saw in that video that we sped things up a little bit, just so we didn't have to wait for progress spinners to complete, but really, in real life, that whole process of spinning up one of those clusters is quite quick.
>>Yeah, and I think the thoroughness with which it goes through its process and retries, and it was evident when we went through the initial video of the bootstrapping as well, means the processes themselves are self-healing as they go. They will try and retry and wait for the event to complete properly, and once it's completed properly, then they go to the next step.
>>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down. Don't do that; just let it heal, let it take care of itself. That's the beauty of these managed solutions: they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools for years and years now. One cool thing there that I really appreciate, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. Docker Enterprise, as I think everyone knows, has had some very high-level statistics baked into its dashboard for years and years now, but our customers have always wanted to double-click on that, to be able to go a little bit deeper, and Grafana really addresses that with its built-in dashboards. That's what's really nice to see.
>>Yeah, and all of the alerts and data are actually captured in an underlying Prometheus database that you have access to, so you're able to add new alerts that then go out to, say, Slack and say, hi, you need to watch your disk space on this machine, or those kinds of things. And this is especially helpful for folks who want to manage the application service layer but don't necessarily want to manage the operations side of the house. It gives them a tool set where they can easily say, here, can you watch these for us, and Mirantis can actually help do that with you.
>>Yeah, I mean, that's just another example of baking in that expert knowledge, right? You can leverage that without a long runway of learning how to do that sort of thing; you just get it out of the box right away. There was another thing, actually, that you could sleep through really quickly if you weren't paying close attention, but Sean mentioned it in the video, and that was how, when you use Docker Enterprise Container Cloud to scale your cluster, particularly when pulling a worker out, it doesn't just tear the worker down and forget about it, right?
It's using good Kubernetes best practices to cordon and drain the node. So you aren't going to disrupt your workloads; you're not going to just have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node; that's baked right into how Docker Enterprise Container Cloud handles cluster scale.
>>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques that ensure it will tell you: wait, you've got a container that actually needs three instances of itself, and you don't want to take that node out, because it means you'll only be able to have two, and we can't allow that.
>>Okay, very cool. Further thoughts on this video, or should we go to the questions?
>>Let's go to the questions
>>that people have. There's one good one here, down near the bottom, regarding whether an API is available to do this. In all these demos we're clicking through this web UI; yes, this is all API-driven. You could do all of this, and automate all of this away, as part of your CI/CD chain. Absolutely. That's kind of the point, right? We want you to be able to spin up what I keep calling commodity clusters, and what I mean by that is clusters that you can create and throw away easily and automatically. So everything you see in these demos is exposed via API.
>>Yeah, and in addition, through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to set up things and deploy your applications, you can use the standard tool sets that are available to accomplish that.
>>There is a good question on scale here: just how many clusters, and what sort of scale of deployments, can this kind of thing support? Our engineers report back that we've done, in practice, up to as many as two hundred clusters, and we've deployed this with two hundred fifty nodes in a cluster. So, like I said, hundreds of nodes, hundreds of clusters managed by Docker Enterprise Container Cloud, and then those downstream clusters are, of course, subject to the usual constraints for Kubernetes, like the default constraint of something like one hundred pods per node. There are a few different limitations on how many pods you can run on a given cluster that come to us not from Docker Enterprise Container Cloud but just from the underlying Kubernetes distribution.
>>Yeah, I don't think we constrain any of the capabilities that are available in the infrastructure delivered as a service within the Kubernetes framework. But we are adhering to the standards that we would want to set to make sure that we're not overloading a node, or those kinds of things.
>>Right. Absolutely. Cool. All right, so at this point we've got kind of a two-layered architecture: we have our management cluster that we deployed in the first video, and then we used that to deploy one child cluster for workloads. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: we're going to add in regional cluster management. So the idea is you're going to have the single management cluster that we started with in the first video.
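Since the standard kubectl CLI was mentioned above, here is a minimal sketch of what working with a child cluster from the command line might look like, including the manual equivalent of the cordon-and-drain behavior that the product performs automatically; the kubeconfig file name, node name, and sample workload are placeholders.

    # Point kubectl at the child cluster using the kubeconfig downloaded from the UI
    export KUBECONFIG=~/Downloads/mary-dev-cluster-kubeconfig.yaml
    kubectl get nodes                                   # managers and workers should show Ready

    # Run a throwaway workload to confirm the cluster schedules pods
    kubectl create deployment hello --image=nginx
    kubectl get pods --watch

    # Manual equivalent of what happens when a worker is removed:
    kubectl cordon ip-10-0-1-42.us-west-1.compute.internal      # stop scheduling new pods
    kubectl drain ip-10-0-1-42.us-west-1.compute.internal --ignore-daemonsets --delete-emptydir-data
    # only after the drain completes is the machine deleted from the cluster and the cloud provider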
>>Right, absolutely. Cool. So at this point we've got a two-layered architecture: we have our management cluster, which we deployed in the first video, and then we used that to deploy one child cluster for workloads. For more sophisticated deployments, where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. The idea is that you still have the single management cluster we started with in the first video, and in the next video we're going to learn how to spin up regional clusters, each of which could manage, for example, a different AWS region. So let me pull up the video for that and we'll check it out. >>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, how to prepare for the deployment, a deployment overview, and then, just to prove it, the deployment of a regional child cluster. Looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. So why do you need a regional cluster? To support different platform architectures, for example AWS, OpenStack, or even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; and also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with its components, including items like the LCM cluster manager and machine manager, how Helm releases are managed, as well as the actual provider logic. Okay, we'll begin by logging on as the default administrative user. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller, and you can see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers and, once again, no workers. As a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster; it's just a virtual machine on AWS. A few things we have to do to make sure the environment is ready. First we go into root, then into our releases folder, where we have the KaaS bootstrap tooling; this was the original bootstrap used to build the original management cluster. We double-check that our kubeconfig is there, the one created after the original cluster was created, and confirm it's the correct one and does point to the management cluster. We're also checking that we can reach the images, that everything is working, and that we can load our images and access them as well. Next we edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI; that's found under the templates/aws directory. We don't need to edit anything else here, but we could change items like the size or type of the machines we want to use. The key item to change is the AMI reference, so the Ubuntu image is the one for the region, in this case the AWS region we're utilizing. If this were an OpenStack deployment, we would have to make sure we're pointing at the correct OpenStack images.
Set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID, and what's important is that KaaS AWS enabled equals true. Now we set the region for the new regional cluster, in this case Frankfurt, and export the kubeconfig that we want to use for the management cluster, the one we looked at earlier. Then we export what we want to call the cluster: the region is Frankfurt, so we call it Frankfurt. Try to use something descriptive that's easy to identify. And then after this we just run the bootstrap script, which completes the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster, as there are fewer components to be deployed, but to make it watchable we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node; almost ready, and we've started preparing the instances in AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise; this is probably the longest phase. You'll see in a second that all the nodes go from deploy to prepare, and you'll see their status change as it updates: the first node ready, the second still applying, then the second ready, and after a little while the Helm controllers become ready. Then the management of the cluster is moved from the bootstrap instance into the new cluster running in AWS. Now we're deploying StackLight, the switchover is done, and we're done. Now we'll build a child cluster in the new region, very quickly: define the cluster, pick our new credential, which has shown up, we'll just call it Frankfurt for simplicity, add a key, and the cluster is defined. Next, the machines: the cluster starts with three managers, set the correct AMI for the region, and do the same to add workers. There we go, it's building; total build time should be about fifteen minutes. You can see it's in progress, and we'll speed this up a little bit. Check the events: we've created all the dependencies and machine instances, the machines will be up shortly, and we should have a working cluster in the Frankfurt region. Almost there, one node is ready, two in progress, and we're done: the cluster's up and running. >>Excellent. So at this point we've now got that three-tier structure we talked about before the video. We've got the management cluster that we bootstrapped in the first video; now we have, in this example, two different regional clusters, one in Frankfurt and one in the region where the management cluster lives, so two different AWS regions; and sitting under those, you can bootstrap all the Docker Enterprise clusters that we want for our workloads. >>Yeah, and that's the key to this: being able to have the management components co-resident with your actual application-service-enabled clusters, so that you can quickly access the observability services, like Grafana and that sort of thing, for your particular region, as opposed to having to log back into, what did you call it when we started? >>The mothership. >>The mothership, right. So we don't have to go back to the mothership; we can get it locally.
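To make Sean's regional bootstrap a little more concrete, the steps he ran boil down to roughly the shape below on the bootstrap node. Only the kaas-bootstrap directory and the KaaS AWS enabled flag come from the demo itself; the other variable names, the example region, and the script invocation are illustrative assumptions, so defer to the product documentation for the exact spelling.

cd kaas-bootstrap                              # same tooling used for the original management cluster
export AWS_ACCESS_KEY_ID=<access-key-id>       # placeholder
export AWS_SECRET_ACCESS_KEY=<secret-access-key>   # placeholder
export KAAS_AWS_ENABLED=true                   # mentioned explicitly in the demo
export KUBECONFIG=$PWD/kubeconfig              # points at the existing management cluster
export AWS_REGION=eu-central-1                 # Frankfurt; variable name is illustrative
export REGIONAL_CLUSTER_NAME=frankfurt         # descriptive cluster name; variable name is illustrative
./bootstrap.sh                                 # exact invocation and arguments per the documentation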
>>Yeah, and to that point of aggregating things under a single pane of glass: that's one thing that, again, kind of sailed by in the demo really quickly, but you'll notice all your different clusters were on that same cluster list in your Docker Enterprise Container Cloud management console. Both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were listed in the same place, so it's just one pane of glass to go look at for all of your clusters. >>Right. And this is an important point I was realizing as we were going through this: the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything, so you only have managers, no workers; and it's at the child cluster layer, below the regional or management cluster, that you have the worker nodes. Those are the ones that host the application services in the three-tiered architecture we've now defined. >>And another detail for those with sharp eyes: in that video you'll notice that when deploying a child cluster, there's not only a minimum of three managers for a highly available management plane, you must also have at least two workers. That's required for workload failover: if one of those workers goes down, the other can step in, so the minimum footprint of one of these child clusters is five nodes, and it's scalable from there, obviously. >>That's right. >>Let's take a quick peek at the questions here and see if there's anything we want to call out before we move on to my last video. There's another question about where these clusters can live. Again, I know these examples are very AWS-heavy; honestly, it's just easy to set up demos on AWS. We can do things on bare metal and on OpenStack deployments on-prem, and all of this still works in exactly the same way. >>Yeah, the key to this, especially for the child clusters, is the provisioners, right? You establish an AWS provisioner, or a bare metal provisioner, or an OpenStack provisioner, and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where the child cluster is going to be hosted. >>Speaking of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme of why Kubernetes and bare metal: again, no hypervisor overhead, no virtualization overhead.
Direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization better in the scheduling; and we increase the performance and simplicity of the entire environment, since we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI address, and the MAC address, and then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. Logged in as the operator, we'll create a project for our machines to belong to; that helps with scoping later on, for security. Then I begin the process of adding machines to that project. The first thing we do is give the machine a name, anything you want. Provide the IPMI username, type the password, then the MAC address for the boot interface, and then the IPMI IP address. These machines will be, in turn, storage, worker, and manager nodes. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like; in the future, better discovery will be added to the product. Getting back, there we have it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. You can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. We're going to deploy a bare metal child cluster, and the process is pretty much the same as for any other child cluster. We create the cluster and give it a name, but this time we select bare metal and the region. We select the version we want to apply and add the SSH keys we need. We give the load balancer host IP that we'd like to use out of the address range, and update the address range that we want to use for the cluster. Check that the CIDR blocks for the Kubernetes pods and tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers, pick the manager label type, and create three machines as managers for the Kubernetes cluster. Then we add workers through the same process, just making sure the worker label and host type are right, and we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating the operating system, deploying Docker Enterprise, and making sure the cluster is up, running, and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics like storage, and of course the details of the cluster. We can then watch the machines go through the various stages, from prepared to deployed, and watch the cluster build. And that brings us to the end of this particular demo; as you can see, the process is identical to that of building a normal child cluster, and our deployment is complete.
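A small practical aside for anyone registering hosts the same way: before handing the IPMI details to the registration form, it can save time to confirm them from the bootstrap machine with a generic ipmitool call. The address, username, and password below are placeholders, not values from the demo.

ipmitool -I lanplus -H 10.0.0.11 -U admin -P '<ipmi-password>' chassis status   # confirms the BMC answers and the credentials work
ipmitool -I lanplus -H 10.0.0.11 -U admin -P '<ipmi-password>' power status     # current power state of the node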
>>All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess the biggest difference, step-wise, is that registration phase first, right? Rather than just using AWS credentials to magically create VMs in the cloud, you've got to point out all your bare metal servers to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles: your manager profile, your worker profile, and your storage profile, which get labeled and allocated across the cluster as appropriate. >>Right. And I think the key differentiator here is that you have more physical control over the attributes of a physical server, so you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, that the GPUs are on the worker nodes, and that the management layer is going to have sufficient horsepower to scale up the environments as required. One of the things I wanted to mention, though, is the load balancer: in defining the load balancer and the load balancer ranges, that is for the top of the cluster itself. That's for the operations of the management layer, integrating with your systems internally, so you can reach the Kube API and the kubeconfigs at a centralized IP address. It's not the load balancer that's working within the Kubernetes cluster you're deploying; that's still kube-proxy, or a service mesh, or however you're intending to do it. So it's kind of an interesting initial step in building this, and we typically use things like MetalLB or NGINX, or that kind of thing, to establish that before we deploy this bare metal cluster, so that it can ride on top of it for the VIPs and things.
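Since Bruce names MetalLB, here is a rough sketch of what a layer-2 address pool looked like in the ConfigMap style MetalLB used at the time. It assumes MetalLB itself has already been installed per its own documentation; the address range is a placeholder you would swap for a free range on your management network, and newer MetalLB releases express the same idea through CRDs instead.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.50.240-10.0.50.250
EOF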
>>Very cool. So, any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management cluster to our regional clusters to our child clusters on AWS and bare metal, and of course OpenStack is also supported. Closing thoughts before we take a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you, and I'm really looking forward to doing it a second time now that we've got a good rhythm going. But I think the key element we're trying to convey to the folks in the audience, and that I hope you've gotten out of it, is that this is an easy enough process that if you follow the step-by-step documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancy use cases, like you need Hadoop or cloud-oriented AI workloads, providing a bare metal service helps you get there very fast. So, thank you, it's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said, we're going to take a very short, three-minute break here. Take the opportunity to let your colleagues know, if they were in another session or didn't quite make it to the beginning of this one, or if you just want to see these demos again: we're going to kick off this demo series again in three minutes, at 10:25 a.m. Pacific time, where we'll see all this great stuff again. Let's take a three-minute break; I'll see you all back here shortly. Okay, folks, that's the end of our extremely short break. We'll give people maybe one more minute to trickle in if they're interested in jumping into our demo series again. For those of you just joining us now, I'm Bill Mills; I head up curriculum development for the training team here at Mirantis. Joining me for this session of demos is Bruce, who is still on break; we'll give Bruce a minute or two to get back while everyone else trickles in. There he is. Hello, Bruce. How did that break go for you? >>Very well. >>So let's kick off our second session here. I'll just let it run over here. >>Alright, hi, Bruce Matthews here. I'm the Western Regional Solutions Architect for Mirantis. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, Bill, take it away. >>Excellent. So over the next hour or so we've got a series of demos that's going to walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis' brand new offering for bootstrapping Kubernetes clusters on AWS, bare metal, and OpenStack, with more providers in the very near future. We've got just over an hour left together in this session; if you joined us at the top of the hour, back at 9 a.m. Pacific, we went through these demos once already, but let's do them again for everyone else who was only able to jump in right now. Let's go to our first video, where we're going to install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership that's going to spin up all the other Kubernetes clusters, the Docker Enterprise clusters, that we're going to run our workloads on. So let's do it. >>I'm so excited. I can hardly wait. >>Let me share my video out here. Let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases: first, preparing a bootstrap node and its dependencies and handling the download of the bootstrap tools; second, obtaining a Mirantis license file; third, preparing the AWS credentials and the AWS environment; fourth, configuring the deployment, defining things like the machine types; and fifth, running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node.
We're just checking that it's clean, clear, and ready to go, with no credentials already set up on that particular node. Now we're checking through AWS to make sure the account we want to use has the correct credentials and roles set up, and validating that there are no instances currently running in EC2. That's not strictly necessary, but it helps keep things clean and tidy from an IAM perspective. Next, we check that, from the bootstrap node, we can reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors there. Now we start setting up the bootstrap node itself, so we're downloading the KaaS release bootstrap tooling and then running it. Once it's deployed, we change into that bootstrap folder and see what's there. Right now we have no license file, so we get the license file through the Mirantis downloads site: signing up here, downloading the license file, and putting it into the kaas-bootstrap folder. Now that we've done that, we can go ahead with the rest of the deployment. Once again we check that we can reach EC2, which is extremely important for the deployment; these are just validation steps as we move through the process. The next big step is validating all of our AWS credentials. First we need the root credentials, which we export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running the AWS policy creation: part of that is creating the policy files on the AWS side and generally preparing the environment using a CloudFormation script, which you'll see in a second. We apply the new CloudFormation and wait for it to complete, and there, it's done. If we have a look at the AWS console, you can see the creation has completed. Now we can go and get the credentials that we created: go to the IAM console, go to the new user that's been created, go to the security credentials section, and create new keys. Download that information, the access key ID and the secret access key, which will then be exported on the command line. A couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put the correct AMI for that region. Export the access key and the secret key, and let's kick it off. This process takes between thirty and forty-five minutes and handles all the AWS dependencies for you. As we go through, we'll show you how you can track it, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS, and at the end of the process that cluster is copied into the new cluster on AWS and the local cluster is shut down, essentially moving itself over. Okay, the local cluster is built; we're just waiting for the various objects to get ready, standard Kubernetes objects here.
We've sped up this process a little bit just for demonstration purposes. Okay, there we go. The first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances here in the AWS console on the right. The failures you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. And there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and copying everything over. You can see the scaling up of controllers in the bootstrap cluster, which indicates that we're starting all of the controllers in the new cluster. Almost there; just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. Once this is completed, the last phase will be to deploy StackLight, the logging and monitoring toolset, into the new cluster. There we go, the StackLight deployment has started. We're coming to the end of the deployment, the final phase, and we are done. At the end, you'll see they provide the details for the UI login, so there's a Keycloak login; you can modify that initial default password as part of the configuration setup, as covered in the documentation. The console's up, we can log in. Thank you very much for watching. >>All right, so at this point what we have is our management cluster spun up, ready to start creating workload clusters. Just a couple of points to clarify there, to make sure everyone caught it: as advertised, that's the Docker Enterprise Container Cloud management cluster. That's not where your workloads are going to go, right? That is the tool you're going to use to start spinning up downstream commodity Docker Enterprise clusters for your workloads. >>And the seed host we're talking about, the kind cluster, actually doesn't have to exist after the bootstrap succeeds. It's sort of like the dinghy: it copies itself from the seed host to the targets in AWS, spins them up, boots the actual cluster, and then it goes away, because it's no longer necessary. >>So there really aren't many requirements on that bootstrap node at all, right? It just has to be able to reach AWS and hit that API to spin up the EC2 instances, because, as you just said, it's just a Kubernetes-in-Docker cluster, and the bootstrap node gets torn down after the setup finishes; you no longer need it. Everything you do from then on, you drive from the single pane of glass provided by your management cluster in Docker Enterprise Container Cloud. Another thing I think is sort of interesting is that the config is fairly minimal: you really just need to provide things like the AWS region and the AMI, and that's what makes that bootstrap go that much faster. >>Right. There is a YAML file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have defaults set. But you have the option of going in and defining a different AMI for a different region, for example, or a different size of instance from AWS.
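Pulling the steps from that video together, the management-cluster bootstrap has roughly the following shape on the bootstrap node. The kaas-bootstrap directory, the KaaS AWS enabled flag, the templates/aws location, and the thirty-to-forty-five-minute runtime come from the demo; the license file name, the example region, and the exact script invocation are illustrative assumptions, so defer to the documentation.

cd kaas-bootstrap                                          # bootstrap tooling and license file live here
ls mirantis.lic                                            # license downloaded from the Mirantis site; file name is illustrative
export AWS_ACCESS_KEY_ID=<bootstrap-user-access-key>       # placeholder, from the IAM user the policy step created
export AWS_SECRET_ACCESS_KEY=<bootstrap-user-secret-key>   # placeholder
export AWS_DEFAULT_REGION=us-west-2                        # pick your region; also set the matching AMI under templates/aws
export KAAS_AWS_ENABLED=true
./bootstrap.sh                                             # exact invocation per the docs; expect roughly 30-45 minutes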
>>One thing that people often ask about is the cluster footprint, and in that example you saw it spinning up three managers for the management cluster. That's mandatory; there's no single-manager setup at all, because we want high availability for the Docker Enterprise Container Cloud management plane. So again, just to make sure everyone is on board with the lifecycle stage we're at right now: this is the very first thing you do to set up Docker Enterprise Container Cloud, and you do it, hopefully, exactly once. Now you've got your management cluster running, and you're going to use it to spin up all your other workload clusters day to day, as needed. Let's have a quick look at the questions and then take a look at spinning up some of those child clusters. >>Okay, I think they've actually all been answered. >>Yeah, for the most part. One thing I'll point out, as was helpfully pointed out in the chat earlier, is that if you want to try any of this yourself, it's all in the docs. Have a look at the chat: there are links to step-by-step instructions for each and every thing we're doing here today. I really encourage you to do that; taking this out for a drive on your own really helps internalize these ideas. So after the Launchpad event today, please give this stuff a try on your own machines. Okay, so at this point, like I said, we've got our management cluster. We're not going to run workloads there; we're going to start creating child clusters, which is where all of our workloads will go. That's what we're going to learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Doesn't it carry, though? I'd watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create our first child workload cluster. >>Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects; Mary only has access to development. You can get a list of the available projects you have access to, see what clusters have been deployed at the moment, the SSH keys associated with Mary and her team, the cloud credentials that allow you to create or access the various clouds you can deploy clusters to, and finally the different releases that are available. We can also switch from dark mode to light mode, depending on your preference. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Very simply, add an SSH key, give it a name, and copy and paste the public key into the upload key block, or upload the key if the file is available on our machine; a very simple process. To create a new cluster, we define the cluster, add manager nodes, and add worker nodes. Again, very simply: we go to the clusters tab, hit the create cluster button, and give the cluster a name. Then select the provider; we only have access to AWS in this particular deployment, so we'll stick to AWS. Select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key.
We can then check the rest of the settings, confirming the provider and the Kubernetes CIDR and IP address information; we can change this should we wish to, but we'll leave the defaults for now. Then, which components of StackLight would I like to deploy into my cluster? For this one I'm enabling StackLight with logging, and I can set the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. There's email alerting, for which I would need my smarthost and authentication details, and Slack alerts. Now I'm defining the cluster; at this point all that's happened is that the cluster has been defined, and I now need to add machines to it. I begin by clicking the create machine button within the cluster definition. Select manager, select the number of machines; three is the minimum. Select the instance size I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go, my three machines are busy being created. I now need to add some workers to this cluster, so I go through the same process, this time selecting worker, and I'll just add two. Once again the AMI is extremely important; the deployment will fail if we don't pick the right AMI for an Ubuntu machine, in this case. And the deployment has started. We can check on the build status by going back to the clusters screen and clicking on the little three dots on the right, where we get the cluster info and the events. In the basic cluster info you'll see "pending" listed; the cluster is still in the process of being built. If we click on the events, we get a list of the actions that have been completed as part of the setup of the cluster. You can see here: we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary NAT gateways, and we have no warnings at this stage. Okay, this will then run for a while. One minute in, we can click through and check the status of the machine builds individually, so we can check the machine info, the details of the machines we've assigned, and see any events pertaining to each machine; everything here looks normal. Right now the Kubernetes components are just waiting for the machines to start. Going back to the clusters view, we can see it's in progress; five minutes in, the new NAT gateway is up, and at this stage the machines have been built and assigned their IPs in AWS. There we go, a machine has been created; you can see the event detail and the AWS ID for that machine. We're speeding things up a little bit here; this whole process, end to end, takes about fifteen minutes. Running the clock forward, you'll notice the machines continue to build and go from in-progress to ready. As soon as we've got ready on all three managers and both workers, we reach the point where the cluster itself is being configured, and then there we go: the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment and look into the configured cluster. We can modify the cluster, and we can get the endpoints for Alertmanager; you can see that the Grafana UI and Prometheus are still building in the background, but the cluster is available, and you would be able to put workloads on it at this stage. To download the kubeconfig so that I can put workloads on it,
it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check the cluster info and see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. If we click into the cluster, we can access the UCP dashboard; click the sign in with Keycloak button to use SSO and give Mary's password once again. This is an unlicensed cluster; we could license it at this point or just skip it, and we have the UCP dashboard. You can see it's been up for a little while and we already have some data on the dashboard. Going back to the console, we can now go to Grafana; the dashboards have been automatically preconfigured for us, and we can switch between a number of different dashboards that have already been instrumented within the cluster, for example Kubernetes cluster information, namespaces, deployments, and nodes. If we look at nodes, we get a view of the resource utilization; there's very little running in this cluster right now. There's also a general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to that specific cluster. All right, to scale the cluster and add a node, it's as simple as the process of adding a machine to the cluster was in the first place. We go to the cluster, go into its details, and select create machine. Once again we need to ensure we put in the correct AMI and any other options we like; you can create different-sized machines, so it could be a larger node with bigger root disks. You'll see the worker has been added in the provisioning state, and shortly we'll see the details of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we'd like to remove, and just hit delete on that node. Worker nodes are removed from the cluster using a cordon and drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button appears in the menu for that particular cluster, and it's as simple as clicking the button and choosing which release you'd like to update to; in this case the available release is 5.7.1. We kick off the update, and in the background it will cordon and drain each node and go through the process of updating it, one at a time; the update completes as quickly as the nature of the update allows. There we go, the nodes are being rebuilt; in this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt, in fact two in this case, and one has completed already. In a few minutes we'll see that the upgrade has been completed. There we go, done. If your workloads are built using proper cloud native Kubernetes standards, there will be no impact.
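For anyone curious what that cordon-and-drain removal amounts to underneath, it is the standard Kubernetes procedure; Container Cloud drives it for you when you delete a worker from the UI, and the node name below is hypothetical.

kubectl cordon worker-node-3                    # mark the node unschedulable so no new pods land there
kubectl drain worker-node-3 --ignore-daemonsets # evict the existing pods gracefully onto other workers
kubectl delete node worker-node-3               # remove the node object once the machine is retired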
>>All right, there we have it: our first workload cluster spun up and managed by Docker Enterprise Container Cloud. And I loved Shawn's classic warning there: when you're spinning up an actual Docker Enterprise deployment, you'll see little errors and warnings popping up. Just don't touch it. Leave it alone and let Docker Enterprise's self-healing properties take care of those very transient, temporary glitches; they resolve themselves and leave you with a functioning workload cluster within minutes. >>And if you think about it, that video was not very long at all. That's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development team A over here? It would literally take you a few minutes to accomplish that. And that was with AWS, which is obviously a transient resource in the cloud, but you can do exactly the same thing with resources on-prem, physical resources, and we'll be going through that later in the session. >>Yeah, absolutely. One thing that's present in that demo, but that I'd like to highlight a little more because it just kind of glides by, is this notion of a cluster release. When Sean was creating that cluster, and also when he was upgrading it, he had to choose a release. What does that mean? In Docker Enterprise Container Cloud we have release numbers that capture the entire stack of containerization tools that will be deployed to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes and Docker Enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale, test it for CVEs, test it for all of the various things that would result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments myself, if you're the one doing it all yourself it can get rather messy. This makes it easy. >>Bruce, you were saying a second ago that it'd take you at least fifteen minutes to install your Kubernetes cluster, and sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decisions about which components work well together, which are best tested to be successful as a stack. This release mechanism in Docker Enterprise Container Cloud lets us package up that expert knowledge and make it available in a really straightforward fashion through these preconfigured release numbers. And, as Bruce pointed out earlier, they're delivered to us as updates pretty much transparently: when Sean wanted to update that cluster, a little update cluster button appeared once an update was available. All you've got to do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >>Yeah, it actually even displays a header at the top of the screen that says you've got an update available, do you want me to apply it?
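A generic way to see what a chosen release actually landed on a child cluster, using nothing but the downloaded kubeconfig; none of these commands are specific to Container Cloud.

kubectl version --short                     # client and API server versions
kubectl get nodes -o wide                   # kubelet, OS image, and container runtime per node
kubectl -n kube-system get pods -o wide     # the calico, coredns, and other system pods and the images they run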
>>Absolutely. A couple of other cool things are easy to miss in that demo. I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they're fairly high level; in previous versions of Docker Enterprise, people always wanted to be able to zoom in a little bit on those cluster metrics, and having the detailed dashboards that Grafana provides, out of the box, is great value. >>Yeah, that was really a result of the joining of the Mirantis and Docker teams: it let us take the best of what Mirantis had in the OpenStack environment for monitoring, logging, and alerting and do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >>One other thing I want to point out about that demo, because there were some questions about it on our last go-around: that demo was all about creating a managed workload cluster. The Docker Enterprise Container Cloud manager was using the AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, install Docker Enterprise, all of that, on top of those fresh new VMs it created and manages. There's nothing unique to AWS about that; it does the same on OpenStack and on bare metal. But there's another flavor here, a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years: if you've got existing UCP deployments, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use it to manage those pre-existing workload clusters. You don't always have to bootstrap them straight from Docker Enterprise Container Cloud; plugging in external clusters is supported as well. >>Yep. The kubeconfig elements of the UCP environment, the bundling capability, give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. So it's very convenient for our existing customers to take advantage of this new release. >>Absolutely. Cool. More thoughts on this, or shall we jump on to the next video? >>I think we should press on. >>Time marches on here, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster, which is what we use to create all our downstream workload clusters, and that's what we did in this video. That's maybe the simplest architecture, because it's doing everything in one region on AWS. A pretty common use case, though, is wanting to spin up workload clusters across many regions, and to do that we add a third layer in between the management and workload cluster layers: our regional cluster managers. This is a regional management cluster that exists per region, and those regional managers are then the ones responsible for spinning up workload clusters across all those different regions. Let's see it in action in our next video. >>Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, how to prepare for the deployment, a deployment overview, and then, just to prove it, the deployment of a regional child cluster. Looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning.
The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster; the child cluster is the cluster or clusters being deployed and managed. So why do you need a regional cluster? To support different platform architectures, for example AWS, OpenStack, or even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; and also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with its components, including items like the LCM cluster manager and machine manager, how Helm releases are managed, as well as the actual provider logic. Okay, we'll begin by logging on as the default administrative user. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller; you can see it only has three nodes, three managers and no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers and, once again, no workers. As a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster; it's just a virtual machine on AWS. A few things we have to do to make sure the environment is ready. First we sudo into root, then go into our releases folder, where we have the KaaS bootstrap tooling; this was the original bootstrap used to build the original management cluster. We double-check that our kubeconfig is there, the one created after the original cluster was created, and confirm it's the correct one and does point to the management cluster. We're also checking that we can reach the images, that everything is working, and that we can load our images and access them as well. Next we edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI; that's found under the templates/aws directory. We don't need to edit anything else here, but we could change items like the size or type of the machines we want to use. The key item to change is the AMI reference, so the Ubuntu image is the one for the region, in this case the AWS region we're utilizing; if this were an OpenStack deployment, we would have to make sure we're pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID, and what's important is that KaaS AWS enabled equals true. Now we set the region for the new regional cluster, in this case Frankfurt, and export the kubeconfig we want to use for the management cluster, the one we looked at earlier. Then we export what we want to call the cluster; the region is Frankfurt, so we call it Frankfurt. Try to use something descriptive that's easy to identify.
And then after this we just run the bootstrap script, which completes the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster, as there are fewer components to be deployed, but to make it watchable we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node; almost ready, and we've started preparing the instances in AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise, which is probably the longest phase. You'll see in a second that all the nodes go from deploy to prepare, and you'll see their status change as it updates: the first node ready, the second still applying, then the second ready, and after a little while the Helm controllers become ready. Then the management cluster is moved from the bootstrap instance into the new cluster running in AWS. Now we're deploying StackLight, the switchover is done, and we're done. Now we'll build a child cluster in the new region, very quickly: define the cluster, pick our new credential, which has shown up, we'll just call it Frankfurt for simplicity, add a key, and the cluster is defined. Next, the machines: the cluster starts with three managers, set the correct AMI for the region, and do the same to add workers. There we go, it's building; total build time should be about fifteen minutes. You can see it's in progress, and we'll speed this up a little bit. Check the events: we've created all the dependencies and machine instances, the machines will be up shortly, and we should have a working cluster in the Frankfurt region. Almost there, one node is ready, two in progress, and we're done: the cluster's up and running. >>Excellent. There we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster, from which we bootstrap everything else; our regional clusters, which manage individual AWS regions; and the child clusters sitting underneath them. >>Yeah, and you can actually see in the hierarchy the advantages that presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them readily, co-resident with your development teams. And one of the other things I think is really unique about it is that we provide that same operational support system capability throughout: you've got StackLight monitoring the management layer, the regional layer, and right down to the actual child clusters. >>All through that single pane of glass that shows you all your different clusters, whether they're workload clusters like the child clusters or regional clusters for managing the different regions. Cool.
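On the "get it locally" theme from earlier: once you've downloaded the kubeconfigs for the management, regional, and child clusters, plain kubectl machinery is enough to hop between them. The file and context names below are hypothetical.

export KUBECONFIG=$HOME/kubeconfigs/mgmt.yaml:$HOME/kubeconfigs/region-frankfurt.yaml:$HOME/kubeconfigs/child-dev.yaml   # colon-separated kubeconfigs are merged
kubectl config get-contexts                 # one context per cluster
kubectl config use-context child-dev        # switch to the child cluster; context name is hypothetical
kubectl get nodes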
>>Alright, well, time marches on, folks. We've only got a few minutes left, and I've got one more video, our last for the session: we're going to walk through standing up a child cluster on bare metal. So far everything we've seen has been AWS-focused, just because it's easy to demo that way, but we don't want to leave you with the impression that that's all we do; we cover AWS, bare metal, and OpenStack deployments with Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch. >>Right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent. It provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI, and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme of why Kubernetes and bare metal: again, no hypervisor overhead, no virtualization overhead, direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization better in the scheduling; and we increase the performance and simplicity of the entire environment, since we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI address, and the MAC address, and then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. Logged in as the operator, we'll create a project for our machines to belong to; that helps with scoping later on, for security. Then I begin the process of adding machines to that project. The first thing we do is give the machine a name, anything you want. Provide the IPMI username, type the password, then the MAC address for the boot interface, and then the IPMI IP address. These machines will be, in turn, storage, worker, and manager nodes. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like; in the future, better discovery will be added to the product. Getting back, there we have it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. We're going to deploy a bare metal child cluster, and the process is pretty much the same as for any other child cluster. We create the cluster and give it a name, but this time we select bare metal and the region. We select the version we want to apply and add the SSH keys we need. We give the load balancer host IP that we'd like to use out of the address range, and update the address range that we want to use for the cluster. Check that the CIDR blocks for the Kubernetes pods and tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers, pick the manager label type, and create three machines as managers for the Kubernetes cluster.
Then we add workers through the same process, just making sure the worker label and host type are right, and we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating the operating system, deploying Docker Enterprise, and making sure the cluster is up, running, and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics like storage, and of course the details of the cluster. We can then watch the machines go through the various stages, from prepared to deployed, and watch the cluster build. And that brings us to the end of this particular demo; as you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>And here we have it: a child cluster on bare metal, for folks that want to deploy this stuff on-prem. >>It's been an interesting journey from the mothership: we started out building a management cluster, then populated it with a child cluster, then created a regional cluster to spread the management of our clusters geographically, and finally provided a platform for supporting AI and big data needs. Thank goodness we're now able to put things like Hadoop on bare metal, in containers, which is pretty exciting. >>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up clusters, Docker Enterprise clusters that can be created and used quickly, taking provisioning times from however many months it used to take to get new clusters spun up for your teams down to minutes; we saw those clusters get built in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session on Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis' products. If you'd like to learn more, or if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we offer workshops, in a number of different formats, on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the Launchpad event. >>Thank you all, enjoy.

Published Date : Sep 17 2020


Matti Paksula, supervisor.com | Mirantis Launchpad 2020


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Mirantis Launchpad 2020, brought to you by Mirantis. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of Mirantis Launchpad 2020. I always love when we get to talk to the practitioners that are using some of the technologies here. One of the interesting things we've been digging into is Lens, the IDE for this space, as it's being referred to. So, happy to welcome to the program Matti Paksula. He is the founder and chief technology officer at supervisor.com. Matti, thanks so much for joining us. >> Thank you, thank you for having me. >> So, if you could, help us understand your company, supervisor.com. What's the background as the founder? What was the impetus to creating that business? >> Sure. So, supervisor.com is super simple, because we believe, and we know, that the only way to test websites, to see whether they can handle load, for example eCommerce sites on Black Friday, or when you're just about to make a product launch, is by sending real web browsers to the site that actually click and scroll and do all the same things real users would do. And our secret thing is that we can do it before Black Friday. So if somebody wants to simulate whether they can handle, say, 2000 users or 5000 users, they can use supervisor.com to make that happen today. >> So I'm just curious: the concern always is about DDoS attacks and the like. Do you help companies along that line too? Or is it more the testing for proper traffic, and we leave the security aspect to somebody else? >> Yeah, well, like with any load testing tool, you have to verify yourself somehow. With us it's super easy, because we integrate with Google Analytics, and if you authorize us to read your Google Analytics data, then we know that you are allowed to test your site. >> Wonderful. Well, as I said in the lead, you're using Lens; my understanding is you've been using it since the early days. Of course, it was a closed-source technology; Mirantis has acquired it and the team, and it's now also open source. So if you could bring us back: how did you get involved with Lens? What was the problem statement that it helped you resolve? >> Yeah, sure. So, the story, super briefly, is that Lens was developed by this startup called Kontena, a Finnish startup. They made a couple of attempts in container orchestration before Kubernetes, and then Kubernetes came. And they just felt that Kubernetes is super hard to visualize, or to understand what's going on, because you have these containers flying around and nodes going in and out. So they built Lens, and since I'd been working with those guys from 2015 or so, I was one of the first outside users, probably the first user outside of the company. >> So that's pretty neat, that you had that project they were doing. As an early user, give us a little bit of that journey. What does it enable for your company? How has it expanded from the early use cases to where it is today? >> Yeah. So, if you're using Kubernetes traditionally, or the way most people who haven't yet heard about Lens use it, it's from the command line. That's where you use kubectl, or "cube control": you run kubectl get pods, and you get the pod listing.
But the problem is that all that data is stale on the screen. So if you try to, for example, delete a pod, and you issue kubectl delete pod such-and-such, by the time you hit enter, the pod might already be gone. Lens makes everything real time. If you try to delete something with Lens, you move your mouse on top of the pod, and if it's getting deleted you know it, because it just disappears from your screen; it's not there anymore. And I think that's a huge productivity boost; it's how you can get more and more stuff done every day. When you're a developer or a sysadmin or whatever, you need to see what's happening in your cluster and how the nodes and pods are doing. So, back to your question of how Lens has evolved: nowadays it's super stable and it handles big workloads very well. Very early on they had some performance issues with large clusters. For example, at supervisor, when we run a load test with 10,000 concurrent web browsers, what we have in Kubernetes is 10,000 pods, and back then, when you connected something like Lens to it, it just started spinning up the fans on my laptop and eating all the RAM. So I helped them a lot with my special use case of running super big ephemeral workloads. >> Yeah, it's an interesting discussion. In the whole container space there's all that discussion of scale (chuckling). Of course, everybody thinks back to Google and how they use it, so we know it can go really big, but environments also need to be able to work really small, or, for use cases like yours, to burst usage when you need it and come back down, that elasticity we hope for in cloud. So I'm curious, what's your expectation with it going open source, coming into Mirantis? As a longtime user, what do you expect to see? >> Well, I think Mirantis offers the right kind of home for the product, because they really get what's happening in the space. And I think their commercial offering on top of the open source will be around authentication; that's what I understood from the press release. I think it makes sense, because developers don't want to pay for these kinds of tools, and there are other tools that are commercial. Even if it's just 100 bucks per year, I think that's still not going to work out with most of the developers, and you need this kind of long-tail developer adoption for these kinds of products to succeed. That centralized authentication, who can see what and that kind of stuff, doesn't affect most startups or indie devs, but for any company doing real business, those are the features that are needed. And when you use the product for business, then I think it makes sense to pay, too. >> Yeah, absolutely. There's always that challenge: developers of course love open source tools if they can use them. And the packaging, the monetization, isn't a question for you (chuckling); it's for the Mirantis team.
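Matti's point about kubectl listings going stale is easy to see with the Kubernetes watch API, which is essentially the mechanism a live view builds on. A minimal sketch with the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig:

```python
# Stream pod events instead of taking a one-off snapshot with `kubectl get pods`.
# Requires: pip install kubernetes, and a kubeconfig pointing at a reachable cluster.
from kubernetes import client, config, watch

def stream_pod_events(namespace: str = "default", timeout_seconds: int = 60) -> None:
    config.load_kube_config()                     # use the current kubeconfig context
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Each event reports ADDED / MODIFIED / DELETED, so a deleted pod disappears
    # from the view immediately rather than lingering in a stale listing.
    for event in w.stream(v1.list_namespaced_pod, namespace=namespace,
                          timeout_seconds=timeout_seconds):
        pod = event["object"]
        print(f"{event['type']:8} {pod.metadata.name:40} phase={pod.status.phase}")

if __name__ == "__main__":
    stream_pod_events()
```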
What would you say to your peers out there, people that are in this space? What are the areas where they'd say, oh, if I have this type of environment, or this kind of team, this is where Lens will really be awesome for me? What are some of the things you would recommend to your peers, from all the usage that you've done? >> Yeah. Let's say three things. The first thing is what I already mentioned, the real-timeness, that everything updates live. The second thing is the integrated metrics: you can, for example, follow how much memory or CPU something is consuming, which is super helpful when you want to understand what's really going on and how much resource something is taking. And the third thing is that Lens is great for debugging, because once you have deployed something and something is off, and it's hard to reproduce locally, especially with this kind of microservice architecture, you can just go inside any pod or node instantly from the UI. You don't have to use kubectl exec and all of that, because you are already in there. And then the fourth thing is that if you manage multiple Kubernetes clusters, it's super easy to accidentally connect to the wrong cluster. But if you have some visual tool where you can see, I'm in my production cluster, or I'm in my staging cluster, and you make the selection visually there, then all the kubectl commands and everything work against that cluster. So I think that's very helpful, so you don't accidentally delete something from production, for example. >> Wonderful. Last question I have for you, either Lens specifically or the ecosystem around it: what would be on your wish list, for Lens itself or for managing the environments surrounding it? What would you be asking of Mirantis and the broader ecosystem? >> Well, let me think. First of all, I have maybe 50 or 60 issues still open on GitHub that I have opened there, so that's my wish list. But longer term, I think it would just be great if you could actually start deployments from Lens. There are a bunch of deployment tools, like Kustomize and Helm, but if you just want to get something running quickly, I think integrating that into Lens would be super good. Just click: I want to deploy this app. That's something I'm looking forward to. >> Yeah, absolutely. Everybody wants that simplicity. All right, well, hey, thank you so much. Great to hear the feedback. We always talk about the people that develop the code, as well as the people that do the beta testing and give the feedback, so critically important to the maturation and development of everything in this space. Thanks so much for joining us. >> Thank you. >> Stay tuned for more coverage from Mirantis Launchpad 2020. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)
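Picking up on Matti's fourth point, accidentally working against the wrong cluster: the kubeconfig context is what a visual selector is surfacing. A small, hedged sketch that simply makes the active context explicit before doing anything destructive (it assumes a standard local kubeconfig; the "prod" naming check is just an illustrative convention):

```python
# Print every kubeconfig context and highlight the active one before acting on a cluster.
# Requires: pip install kubernetes
from kubernetes import config

def show_contexts() -> str:
    contexts, active = config.list_kube_config_contexts()
    for ctx in contexts:
        marker = "*" if ctx["name"] == active["name"] else " "
        cluster = ctx["context"].get("cluster", "?")
        print(f"{marker} {ctx['name']}  (cluster: {cluster})")
    return active["name"]

if __name__ == "__main__":
    current = show_contexts()
    # A cheap guard against the "oops, that was production" scenario Matti describes.
    if "prod" in current.lower():
        print("Active context looks like production -- double-check before deleting anything.")
```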

Published Date : Sep 16 2020


Why Multi-Cloud?


 

>>Hello, everyone. My name is Rick Pew. I'm a senior product manager at Mirantis, and I have been working on Docker Enterprise Container Cloud for the last eight months. Today we're going to be talking about multi-cloud Kubernetes. So the first thing to look at is: is multi-cloud real? The term gets thrown around a lot, and by the way, I should mention that in this presentation we use the term multi-cloud to mean both multi-cloud, which in the technical sense really means multiple public clouds, and hybrid cloud, which means public clouds and on-prem. We'll use multi-cloud to refer to all the different types of multiple clouds, whether that's all public cloud, a mixture of on-prem and public cloud, or, for that matter, multiple on-prem clouds, as Docker Enterprise Container Cloud supports all of those scenarios. So is it real? Let's look at some research that came out of Flexera in their 2020 State of the Cloud report. You'll notice that 33% state that they've got multiple public and one private cloud, and 53% say they've got multiple public and multiple private clouds. If you add those two up, you get 86% of the people saying that they're in multiple public clouds and at least one private cloud. So I think at this stage we can say that multi-cloud is a reality. According to 451 Research, a number of CEOs stated that the strong driver, their desire, was to optimize cost savings across their private and public clouds. They also wanted to avoid vendor lock-in by operating in multiple clouds, and to dissuade their teams from taking too much advantage of a given provider's proprietary infrastructure. But they also indicated that the complexity of using multiple clouds hindered the rate of adoption. It doesn't mean they're not doing it; it just means they don't go as fast as they would like to, in many cases, because of the complexity. And here at Mirantis we surveyed our customers as well, and they're telling us similar things. Risk management through the diversification of providers is key on their list, along with cost optimization and democratization: allowing their development teams to create Kubernetes clusters without having to file an IT ticket, giving them a self-service, cloud-like environment, even if it's on-prem or multi-cloud, with the ability to create their own clusters, resize their own clusters, and delete their own clusters without needing to have IT or their operations teams involved at all. But there are some challenges with this. The different clouds require different automation to provision the underlying infrastructure, or deploy an operating system, or deploy Kubernetes, for that matter, in a given cloud. You could say they're not that complicated; they all have very powerful consoles and APIs for that. But to do it across three or four or five different clouds, you have to learn three or four or five different APIs and web consoles to make that happen, and in that scenario it's difficult to provide self-service for developers across all the cloud options, which is what you want to really accelerate your application innovation. So, what's in it for me? We've got a number of roles in the enterprise: developers, operators, and business leaders, and they have somewhat different needs.
On the developer side, the need is flexibility to meet their development schedules, number one. They're under constant pressure to produce, and in order to do that they need flexibility, in this case the flexibility to create Kubernetes clusters and use them across multiple clouds. They also have CI/CD tools, and they want those to be normalized and automated across all of the on-prem and public clouds they're using. In many cases they'll have a test and deployment scenario where they want to create a cluster, deploy their software, run their tests, score the tests, and then delete that cluster, because the only point of that cluster, perhaps, was to test a delivery pipeline. So they need that kind of flexibility. From the operator's perspective, they always want to be able to customize and control their infrastructure and deployment. They certainly have the desire to optimize their opex and capex spend. They also want to support their DevOps teams, who many times are their customers, through API access for on-prem and public clouds. Burst scaling is something operators are interested in, and something public clouds can provide: the ability to scale out into public clouds, perhaps from their on-prem infrastructure, in a seamless manner. And many times they need to support geographic distribution of applications, either for compliance or performance reasons, so having data centers all across the world and being able to specifically target a given region is high on their list. Business leaders want the flexibility and confidence to know that their on-prem and public cloud deployments are fully supported. Like the operator, they want to optimize their cloud spend. Business leaders think about disaster recovery, so having applications running and living in different data centers gives them that option. And they really want the flexibility of keeping private data under their control, on-prem: certain applications may access that data on-prem, while other applications may be able to run fully in the cloud. So what should I look for in a container cloud? You really want something that fully automates these cluster deployments: the virtual machine or bare metal infrastructure, the operating system, and Kubernetes. It's not just deploying Kubernetes; it's how do I create my underlying infrastructure of a VM or bare metal, how do I deploy the operating system, and then, on top of all that, I want to be able to deploy Kubernetes. You also want one that gives unified cluster lifecycle management across all the clouds. These clusters are running software that gets updated; Kubernetes has a new release cycle, and when something new comes out, how do you get that across all of your clusters running in multiple clouds? We also need a container cloud that can provide visibility through logging, monitoring, and alerting, again across all the clouds. Many offerings have these for a particular cloud, but getting that across multiple clouds becomes a little more difficult.
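To make the "visibility across all the clouds" point concrete, here is a minimal sketch that loops over every kubeconfig context, one per cluster wherever it runs, and reports node readiness with the official Kubernetes Python client. It assumes each cluster is already reachable through your kubeconfig; it illustrates the idea and is not part of Docker Enterprise Container Cloud itself.

```python
# Survey node readiness across every cluster in the local kubeconfig.
# Requires: pip install kubernetes
from kubernetes import client, config

def node_summary(context_name: str) -> str:
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context_name))
    ready = total = 0
    for node in api.list_node().items:
        total += 1
        for cond in node.status.conditions or []:
            if cond.type == "Ready" and cond.status == "True":
                ready += 1
    return f"{context_name}: {ready}/{total} nodes Ready"

if __name__ == "__main__":
    contexts, _ = config.list_kube_config_contexts()
    for ctx in contexts:
        try:
            print(node_summary(ctx["name"]))
        except Exception as exc:          # e.g. cluster unreachable from this machine
            print(f"{ctx['name']}: unable to query ({exc})")
```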
Docker Enterprise Container Cloud is a very strong solution and really meets many of these dimensions, the dimensions we went through on the last slide. We've got on-prem and public clouds as of GA today: we're supporting OpenStack and bare metal for the on-prem solutions, and AWS in the public cloud. We'll be adding VMware very soon as another on-prem solution, as well as Azure and GCP. So thank you very much; I look forward to answering any questions you might have, and we'll call that a wrap. Thank you. >>Hi, Rick, thanks very much for that talk. I am John James; you've probably seen me in other sessions, I do marketing here at Mirantis. I wanted to take this opportunity, while we have Rick, to ask some more questions about multi-cloud. It's potentially a pretty big topic, isn't it, Rick? >>Yeah, the devil's in the details, and there are lots of details we could go through if you'd like; I'm happy to answer any questions you have. >>Well, we've been talking about hybrid cloud for literally years. This is something that several generations of folks in the IaaS space, doing on-premise IaaS, for example with OpenStack the way Mirantis does, thought had a lot of potential. A lot of enterprises believed that, but there were things stopping people from making it real. In many cases it required a very high degree of willingness to create homogeneous platforms in the cloud and on premise, and that was often very challenging. But it seems like with things like Kubernetes, and with the isolation provided by containers, this is beginning to shift: people are actually looking for some degree of application portability between their on-prem and their cloud environments, and this is opening up investment and interest in pursuing this. Is that the right perception? >>Yeah, so let's break that down a little bit. What's nice about Kubernetes is that the APIs are the same, regardless of whether it's something Google or AWS is offering as a platform-as-a-service, or whether you've taken the upstream open source project and deployed it yourself on-prem or in a public cloud, or whatever the scenario might be; it could even be a competitor of Mirantis's product. The Kubernetes API is the same, which is the thing that really gives you that application portability. The container itself is containerizing your application and minimizing any dependency issues you might have, and the ability to deploy that to any of the Kubernetes clusters is the same regardless of where it's running. The complexity comes in how I actually spin up a cluster in AWS and OpenStack and VMware and GCP and Azure: how do I build that infrastructure and spin it up, and then use the ubiquitous Kubernetes API to actually deploy my application and get it to run? So what we've done is unify, and I use the word normalize, but a lot of times people think normalization means going to a lowest common denominator, which really isn't the case in how we've attacked the enabling of multi-cloud.
What we've done is look at each one of the providers and basically provide an API that allows you to utilize whatever the best of that particular breed of provider has, not going to a least common denominator, but still giving you a single API by which you can create the infrastructure, and the infrastructure could be on-prem bare metal, on-prem OpenStack or VMware infrastructure, or any of the public clouds: a single API that works for all of them. And we've implemented that API as an extension to Kubernetes itself. So for all of the developers, DevOps people, and operators who are already familiar with operating within the API of Kubernetes, it's a very, very natural extension to actually be able to spin up these clusters and deploy them. >>Now that's interesting. Without giving away what may be special sauce: are you actually using operators to do this, in the Kubernetes sense of the word? >>Yes. We've extended it with CRDs and operators and controllers, in the way that it was meant to be extended. Kubernetes has a recipe for how you extend its API, and that's what we used as our model. >>That, at least to me, makes enormous sense. Nick Chase, my colleague, and I were digging into operators a couple of weeks ago, and that's a very elegant technology. Obviously it's evolving very fast, but it's remarkably unintimidating once you start trying to write them. We were able to compose operators around cron and other simple processes, and in just a couple of minutes they worked, which I found pretty astonishing. >>Yeah, Kubernetes does a lot of things, and knowing that their API was going to be ubiquitous and that people would want to extend it, they spent a lot of effort in the early development days on defining that API: what an operator is, what a controller is, how they interact, and how a third party who doesn't know anything about the internals of Kubernetes can add whatever it is they want and follow the model that makes it work exactly like the native Kubernetes APIs do. >>What's also fascinating to me, and I've had a little perspective on this over the past several weeks or a month or so, working with various stakeholders inside the company on sessions related to this event, is that the understanding of how things work is by no means evenly distributed, even in a company as tightly knit as Mirantis. Some people, who shall remain nameless, have represented to me that Docker Enterprise Container Cloud basically works by, if you hand it some VMs, it will make things for you. And this is clearly not what's going on; what's going on is a lot more nuanced. You are using optimal resources from each provider to provide really coherent, architected solutions: the load balancing, the DNS, the storage, all of which would ultimately be, and you've probably tried this, I certainly have, hard to script by yourself in Ansible or CloudFormation or whatever. This is not easy work.
About the middle of last year, for my prior employer, I wrote a deployer in Node.js against the raw AWS APIs for deployment and configuration of virtual networks and servers, and that was not a trivial project. It took a long time to get to a dependable result, and to do it in parallel and do the other things you need to do in order to maintain speed. One of the things, in fact, that I've noticed in working with Docker Enterprise Container Cloud recently is how much parallelism it's capable of, even within single platforms. It's pretty powerful. If you want two clusters to be deployed simultaneously, that's not hard for Docker Enterprise Container Cloud to do, and I found it pretty remarkable, because I have sat in front of a single laptop trying to churn out a cluster under Ansible, for example, and... >>you get into that serial nature, your >>poor little devil, it's going out and SSH-ing into terminals, pretending it's a person and doing all that stuff. This is much more magical. So that's all built into the system too, isn't it? >>Yeah, really interesting point on that. The complexity isn't necessarily in just creating a virtual machine, because all of these companies have spent a lot of effort to make that as easy as possible. But when you get into networking, load balancing, routing, storage, and hooking those up to containers, automating that, if you were to do it in Terraform or Ansible or something like that, is many, many, many lines of code. People have to experiment; you never get it right the first or second or even the third time, and then you have to maintain it. So one of the things we've heard from customers that have looked at Container Cloud is that they just can't wait to throw away the Ansible or Terraform that they've been maintaining for a couple of years. That kind of code is very brittle: if the cloud changes something, say on the network side, that's really buried and not top of mind, your automation fails, or maybe worse, you think it works, and it's not until you actually go to use it that you notice you can't get to any of your containers. So it's really great the way that we've simplified that for users and, again, democratized it, so developers and DevOps people can create these clusters with ease and not worry about all the complexities of networking and storage. >>Another thing that amazed me as I was digging into my first Docker Enterprise Container Cloud management cluster deployment was how, I don't want to use the word nuanced again, but I can't think of a better word, nuanced the security thinking is in how things are set up. How delicate the thinking is about how much credential power you give to the deployer, to the seed server that deploys your management cluster, or rather, how much administrative access you give to the administrator who owns the entire implementation around a given provider versus how much power the seed server gets, because that gets its own user, right? It gets a bootstrap user specifically created so that it's not your administrator, with more limited visibility and permissions.
And this whole hierarchy of permissions is then extended down into the child clusters that this management cluster will ultimately create, so that devs who request clusters get appropriate permissions granted within a corporate schema of permissions. They don't get the keys to the kingdom; they don't have access to anything they're not supposed to have access to, but within their own scope they're safe and can do anything they want. So it's a really neat, elegant way of protecting organizations against, for example, resource overuse. If you give people the power to deploy clusters, you're basically giving them the power to make sure a big bill hits your corporate accounting office at the end of the billing cycle, so there have to be controls, and those controls exist in this. >>Yeah, and there are kind of two flavors of that. One is the day-one side, when you're doing the deployment. You mentioned the seed server: it creates a bastion server, and then it creates the management cluster and so forth, and all those permissions are handled for you. And then once the system is running, you have full access to go into Keycloak, which is a very powerful open source identity management tool, and you have dozens of granular permissions that you can give to an individual user, permission to do certain things and not others within the context of Kubernetes. It's really well thought out, and the defaults are 80% right; very few people are going to have to go in and change those defaults. You mentioned the corporate directory: it hooks right up to LDAP or Active Directory and can pull everybody in, so there's no day-one work of having to go add everybody you can think of, all the different teams and groupings of people. That all comes through the interface to the corporate directory, and it makes managing users and controlling who can do what really easy. Day one, day two, it's really almost like hour one, hour two, because the defaults are so well thought out. You can deploy a very powerful Docker Enterprise Container Cloud within an hour and then just start using it. You can create users if you want, or use the default users that are set up; as time goes on you can fine-tune that. It's a really nice model, again, for the whole frictionless democratization of giving developers the ability to go in and get IT out of their way and do what they want to do. And IT is happy to do that, because they don't like dozens of tickets saying create a cluster for this team, create a cluster for that team, here are the sizes these guys want, resize this one. Let's move all of that into a self-service model and really fulfill the promise of speeding up application development.
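Rick's earlier point that cluster creation is exposed as an extension of the Kubernetes API, via CRDs and controllers, is what makes the self-service model he just described possible: requesting a cluster becomes another Kubernetes object you create, governed by the same RBAC. Below is a hedged sketch of that pattern with the official Kubernetes Python client; the group, version, plural, and spec fields are hypothetical stand-ins, not the actual Docker Enterprise Container Cloud resource definitions.

```python
# Illustration of the "cluster as a custom resource" pattern; the CRD names and
# spec fields below are hypothetical, not the real Container Cloud schema.
from kubernetes import client, config

def request_cluster(name: str, namespace: str = "dev-team-a") -> dict:
    config.load_kube_config()
    custom = client.CustomObjectsApi()
    body = {
        "apiVersion": "example.clusters.io/v1alpha1",   # assumed group/version
        "kind": "Cluster",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {                                       # assumed fields
            "provider": "aws",
            "region": "eu-central-1",
            "managers": 3,
            "workers": 2,
        },
    }
    # Because this goes through the Kubernetes API server, normal RBAC decides
    # whether this user/namespace is allowed to create clusters at all.
    return custom.create_namespaced_custom_object(
        group="example.clusters.io", version="v1alpha1",
        namespace=namespace, plural="clusters", body=body,
    )

if __name__ == "__main__":
    print(request_cluster("team-a-test"))
```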
>>It strikes me as extremely ironic that one of the things that public cloud providers, bless them, have always claimed is that their products provide this democratization, when in my experience, and I think the experience of most AWS developers, for example, not to name names, an initial experience of trying to start a virtual machine and figure out how to log into it can take the better part of an afternoon. It's just not familiar. Once you have it in your fingers, boom, two seconds, right? But wow, that learning curve is steep and precipitous, and you slip back and make stupid mistakes your first couple thousand times through the loop. By letting people skip that, and letting them skip it potentially on multiple providers, in a sense I would think products like this are actually doing the public cloud industry a real service: hide as much of that as you can without taking the power away, because ultimately people want to control their destiny. They want choice for a reason, and they want access to the infinite services and innovation that AWS and Azure and Google are all doing on their platforms. >>Yeah, and they're solving very broad problems in the public clouds. Here we're saying: this is a world of containers, this is a world of orchestration of those containers, and why should I have to worry about the underlying infrastructure, whether it's a virtual machine or bare metal? I shouldn't care if I'm an application developer writing some database application. The last thing I want to worry about is how to go in and create a virtual machine. Oh, this one is running in Google; it's totally different from the one I was creating on AWS. I can't find where I get the IP address in Google; it's not like it was on AWS, and you have to relearn the whole thing. And that's really not what your job is anyway. Your job is to write database code, for example, and what you really want to do is just push a button, deploy an orchestrator, get your app on it, and start debugging it and getting it to work. >>Yep. Yeah, it's powerful. I've been really excited to work with the product the past week or so, and I hope that folks will look at the links at the bottoms of our thank-you slides and avail themselves of the free trial downloads of both Docker Enterprise Container Cloud and Lens. Thank you very much for spending this extra time with me, Rick. I think we've produced some added value here for attendees. >>Well, thank you, John. I appreciate your help. >>Have a great rest of your session. Bye-bye. >>Okay, thanks. Bye.

Published Date : Sep 16 2020


CI/CD: Getting Started, No Matter Where You Are


 

>>Hello, everyone. My name is John Jane Shake. I work for Mirantis, and I am here this afternoon, very gratefully, with Anders Vulcan, who is VP of technology strategy for CloudBees, a Mirantis partner and a well-known company in the space that we're going to be discussing. Anders is also a well-known entity in this space, which is continuous integration and continuous delivery. You've seen already today some sessions that focus on specific implementations of continuous integration and delivery, particularly around security, and we think this is a critically important topic for anyone in the cloud space, particularly in this increasingly complicated Kubernetes space. Mirantis's thinking, if I can recapitulate our own strategy and language, is that with complexity and uncertainty consistently increasing, along with the depth of the technology stacks we have to deal with, navigating this requires, first, the implementation of automation to increase speed, which is what CI and CD do, and that this speed then be leveraged to let us ship and iterate code faster, since that's ultimately the business that all of us are in, one way or another. I would like to open this conversation by asking Anders what he thinks of that core strategy. >>You know, hitting the security thing right off the bat: security doesn't happen by accident. Security is not something that, like a server in a restaurant, you sprinkle on like a little Parmesan cheese right before they serve you the food. It's not something you sprinkle on at the end; it's something that has to be baked in from the beginning, not just in the kitchen but in the supply chain, from the very beginning. It's a feature, and if you don't build it in, you're going to get an outcome that you're not going to be happy with. It's obviously increasingly important and increasingly visible; the kinds of security problems we see these days can be life-altering for the people subject to them, and can be life or death for a company that's exposed to them. So it's very, very important to pay attention to it and to work to achieve it as an explicit outcome of the software delivery process. And I think CI and CD, as process, as tooling, as culture, play a big part in that, because a lot of it has to do with setting things up right and running them the same way over and over: get the machine going, turn the crank. Now, you want to make improvements over time; it's not just set it and forget it, "we got that set up, we don't have to worry about it anymore." But it really is a question of getting the human out of the loop a lot of the time, because if you're dealing with configuring complex systems, you want to make sure you get them set up, configured, and documented, ideally as code, whether that's a domain-specific language or something like that. Then that's something you can test against, verify against, and diff, and that becomes the basis for your pipelines, for your automation, around the software factory floor.
So I think automation is a key aspect of that, because it takes a lot of the drudgery out of it, for one thing, so the humans have more time to spend on the creative things, the things we're good at as humans. It also means that one of the things computers are really good at, doing the same thing over and over and over, is put into the hands of the entity that knows how to do that well, which is the machine. It's a deep topic, obviously, but automation plays into it, small batch sizes play into it, and being able to test very frequently plays into it, whether that's testing in your CI pipeline, where you're mostly doing builds and unit testing, maybe some integration testing, or layering in the more serious kinds of testing: security scanning, penetration testing, vulnerability scanning, those sorts of things, which maybe you do on every single CI build, but most people don't, because those things tend to take a little bit longer. And you want your CI cycle to be as fast as possible, because that's really in service of the developer who has committed code and wants to see the thumbs-up from the system. So most organizations are focusing on making sure there's a follow-on pipeline, a follow-on set of tests that happens after CI passes successfully, and that's where a lot of the security scanning and those sorts of things happen. >>It's an interesting problem. You mentioned what almost sounds like a Lawrence Lessig-ian kind of idea, that code is law; in enterprises today, code, particularly CI code, ends up being policy. But at the same time, it seems to me there's an alternative peril, which is that as you increase speed, particularly when you become more and more dependent on things like containers and layering technology to provide components and capabilities to your build pipeline that you don't have to build yourself, there are new vulnerabilities that can potentially creep in, and can creep in despite automation, or at least despite first-order automation's attempts to prevent them from creeping in. You don't want to freeze people on a six-month-old version of a key container image, but on the other hand, if the latest version has vulnerabilities, that could be a problem. >>Yeah, it's a double-edged sword; it's two sides of the same coin. When I talk to a lot of security people, people who do it for a living, as opposed to me, who mostly just talks about it, that's not completely true, but still, a lot of the time the problem is old vulnerabilities. The thing that keeps a lot of people up at night isn't necessarily the thing at the tip of the releases for a particular well-known open source library; that's not what's going to burn you most of the time. The vast majority of the time, I want to say 80 to 85% of the time, the vulnerabilities that you get hosed by are ones that have been known about for years.
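The shape Anders describes, a fast CI loop on every commit with heavier checks in a follow-on pipeline, can be sketched as a tiny stage runner. This is a generic illustration, not CloudBees or Mirantis tooling, and the commands in each stage are placeholders you would swap for your own build, test, and scanning steps.

```python
# Minimal two-phase pipeline sketch: fast CI stages on every commit,
# heavier follow-on stages (scans, etc.) once CI is green.
# The commands are placeholders for illustration only.
import subprocess
import sys

FAST_CI = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
]

FOLLOW_ON = [
    ("integration tests", ["make", "integration-test"]),
    ("image vulnerability scan", ["make", "scan"]),   # wraps your scanner of choice
]

def run_stages(stages) -> bool:
    for name, cmd in stages:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"stage '{name}' failed; stopping the pipeline")
            return False
    return True

if __name__ == "__main__":
    # Fast feedback first; only spend the expensive time if CI passes.
    if not run_stages(FAST_CI):
        sys.exit(1)
    print("CI green -- developer gets the thumbs-up; starting follow-on checks")
    sys.exit(0 if run_stages(FOLLOW_ON) else 1)
```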
So on that two-sided coin, if I had to pick, I would say: be aggressive in making sure that your third-party dependencies are updated frequently and continuously, because that is the biggest cause of security vulnerabilities when it comes to third-party code. Now, you know the famous saying, move fast and break things; well, there are certain things you don't want to break. You don't want to break a radiation machine that's going to deliver radiotherapy to someone, because that will endanger their health, so those sorts of systems are naturally subject to a bit more caution and scrutiny and rigor and process. The microservice I run that shows my little avatar when I log in probably gets a little less of that, and rightfully so. Somebody once said on a panel I was on at an RSA conference, and it was a wise thing to say: don't spend a million dollars protecting a five-dollar asset. You want to be smart and figure out where your vulnerabilities are going to come from, and in my experience, and what I hear from a lot of the security professionals, is: pay attention to your supply chain. You want to make sure you're up to date with the latest patches of all of your third-party code, open source or closed source; it doesn't really matter. If anything, open source is more open, so you can inspect things a little better than with closed source, but with both kinds of code streams that you consume and use, you want to be more up to date rather than less; that generally will be better. Now, can a new version of a library cause problems, introduce bugs, those sorts of things? Yes. That's why we have tests, automated tests, regression suites. You want to live in a world where you feel the confidence, as a developer, that if I update this library from, say, 1.0.3 to 1.0.10 to pick up a bunch of bug fixes and patches, that's not going to break something the test suites that run against it ought to cover. I'd rather be in the world of "oh yeah, we tried to update to that, but it broke the tests, and then we had to go spend time on that" than say "oh, it broke the tests, so let's not update," and then six months later find out: oh geez, there was a problem in 1.0.3, and it was fixed in 1.0.4; if only we had updated. You look at some of the highest-profile security breaches out there that you can trace to third-party libraries: it's almost always that the library was out of date and hadn't been patched. So that's my opinionated take on that.
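Anders' advice to be aggressive about dependency freshness is easy to wire into a follow-on pipeline stage. Here is a minimal, hedged example that reports, and in this case fails the build on, any installed Python dependency that is behind its latest release, using pip's own outdated report; the zero-tolerance policy is an assumption you would tune per project.

```python
# Fail (or warn) when installed dependencies are behind their latest releases.
# Uses `pip list --outdated --format=json`; the zero-tolerance policy is illustrative.
import json
import subprocess
import sys

def outdated_packages() -> list:
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    stale = outdated_packages()
    for pkg in stale:
        print(f"{pkg['name']}: installed {pkg['version']}, latest {pkg['latest_version']}")
    if stale:
        print(f"{len(stale)} dependencies are behind; update (or consciously pin) them")
        sys.exit(1)
    print("all dependencies are at their latest released versions")
```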
>>Sure. What are the parts of modern CI/CD, as opposed to what one would have encountered five or six years ago, before the microservices and containers revolution really took off? >>You know, I think you're absolutely right that not the whole world is doing CI yet, and certainly the whole world is not doing CD yet. As you say, we kind of live in a little bit of an ivory tower; we live in an echo chamber, a bit of a bubble, as vendors in this space. The truth is, I would say less than 50% of the software organizations out there do real CI and real CD, and the number's probably less than that. I don't have anything to back that up other than that I talk to a lot of folks and work with a lot of organizations, and it's: yeah, that team does CI, that team does weekly builds, those sorts of things; it's really all over the place. A lot of times, in my experience, there's a high correlation between the amount of time a team or a code base has been around and the amount of modern technologies and processes that are brought to it, and that sort of makes sense. If you're starting with a green field, with a blank sheet of paper, you're going to adopt the technologies and the processes and the cultures of today, not of five, ten, fifteen years ago. But most organizations are moving in that direction. What's really changed in the last few years is the level of integration between the various tools, between the various pieces, and the amount of automation you can bring to bear. I remember, five or ten years ago, having all kinds of conversations with customers and prospects and people at conferences, and they'd say, oh yeah, we'd like to automate our software development life cycle, but we can't: we have a manual thing here, we have a manual thing there, we do this kind of testing that we can't automate, and then we have this system, but it doesn't have any GUI, so somebody has to sit and click on the screen. And I used to say, I don't accept no for an answer to "can you automate this?" Anything can be automated, even if you just get the little drinking bird that pokes the mouse; you can automate it. I actually had one customer where we had a discussion and they said, well, we have this old Windows tool, an obscure tool that's no longer updated, but it's used in a critical part of the life cycle and it can't be automated. And I said, well, just install one of those Windows tools that lets you peek and poke at the screen with the mouse; I don't accept your answer. And they said, well, unfortunately, security won't allow us to install those tools. So I had to accept no at that point. But one of the biggest changes that's happened in the last few years is that the systems now all have APIs and they all talk to each other. So if you've got a scanning tool, if you've got a deployment tool, if you have deployment infrastructure, Kubernetes-based or sitting in front of or around Kubernetes, these things all talk to each other and are all automated. So one of the things that's happened is we've taken out a lot of the wait states, a lot of the pauses, right?
So if you do something like a value stream mapping — and I'll date myself here and probably lose some of the audience with this analogy — if you remember Schoolhouse Rock cartoons in the late seventies and early eighties, there was one that was one of my favorites (the guy who did the music for it passed away last year, sadly) called How a Bill Becomes a Law. They personified the bill: the bill becomes a little person, and first it gets passed by the House, then the Senate, and then the president either signs it or vetoes it, and the cartoon followed it through all of that. What I always talk about with respect to value stream mapping and examining your processes is: put a GoPro camera on your source code's head, and then follow that source code all the way through to your customer. Understand all of the stuff that happens to it — including nothing, because a lot of the time, in all that elapsed time, nothing is happening. If we built cars the way we build software, we would install the radio in a car, then park it in a corner of the factory for three weeks, and then we might remember to test the radio before we ship the car out to the customer. Because that's how a lot of us still develop software. One thing that's changed in the last few years is that we don't have these pauses of: well, we did the build, so now we're waiting for somebody to create an environment and rack up some hardware and install an operating system and install this, that, and the other. That went from manual, to we use Chef or Puppet to do it, to we use containers to do it, to we use containers and Kubernetes to do it. So whole swaths of elapsed time in our software development life cycles basically went to nothing, to the point where we can configure them way to the left and follow them all the way through. And the artifact we're delivering isn't necessarily an executable — it could be a container. Now that starts to get interesting in terms of being able to test against that container, scan against that container, develop against that container. It does bring complexity, too: now you've got a layered file system in there — well, what all is in there? — and so there are tools for scanning those kinds of things. But I think one of the biggest things that's happened is that a lot of the natural pause points are no longer natural pause points; they're unnatural pause points, and they're now just delays in your software delivery. So what a lot of organizations are working on is getting to the point where those sorts of things are automated and connected. That's now possible, and it wasn't five or ten years ago.
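To make the GoPro-on-the-source-code idea concrete, here is a small, hypothetical value-stream calculation — the stage names and timestamps are invented — that separates hands-on time from waiting time across a delivery pipeline:

```python
"""Rough value-stream math: how much of the elapsed time is work, and how much is waiting?

The stages and timestamps below are invented for illustration; in a real mapping they
would come from your tools' APIs.
"""
from datetime import datetime, timedelta

# (stage name, start, end)
STAGES = [
    ("code review",        datetime(2020, 9, 1, 9, 0),   datetime(2020, 9, 1, 10, 0)),
    ("build",              datetime(2020, 9, 2, 14, 0),  datetime(2020, 9, 2, 14, 20)),
    ("manual test window", datetime(2020, 9, 8, 9, 0),   datetime(2020, 9, 8, 12, 0)),
    ("deploy",             datetime(2020, 9, 10, 17, 0), datetime(2020, 9, 10, 17, 30)),
]

touch_time = sum((end - start for _, start, end in STAGES), timedelta())
elapsed = STAGES[-1][2] - STAGES[0][1]
wait_time = elapsed - touch_time

print(f"elapsed:  {elapsed}")
print(f"hands-on: {touch_time}")
print(f"waiting:  {wait_time} ({wait_time / elapsed:.0%} of the total)")
```

Even with generous touch-time estimates, mappings like this routinely surface the "mostly waiting" pattern described next in the interview.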
>>So it sounds like a great deal of the speed benefit — which has been quantified many different ways, and which, once you get one of these systems working, is as enormous as we've all experienced — is actually achieved by collapsing out what would have been unused time in the prior process, or by parallelizing work that wasn't parallelizable before. >>I remember spending some time with a customer, and they did a value stream mapping, and they found out at the end that of the 30 days of elapsed time, they were spending three days on task. Everything else was waiting: waiting for a build, waiting for an install, waiting for an environment, waiting for an approval, having meetings, those sorts of things. And I thought to myself, oh my goodness, 90% of the elapsed time is doing nothing. I was talking to someone — Gene Kim, actually — and I said, oh my God, it's terrible, these people are screwed. And he said: 90%? That's actually pretty good. So if you look at the teams today that are doing really pure continuous delivery — write some code, commit it, it gets picked up by the CI system, passes through CI, goes through whatever post-CI processing you need to do, security scanning and so on, gets staged, and gets pushed into production — that can happen in minutes. That's new. That's different. Now, if you do that without having the right automated gates in place around security and those sorts of things, you're living a little dangerously — although I would argue not necessarily any more dangerously than letting that insecure code sit around for a week before you ship it. It's not like the problem is going to fix itself if you just let it sit there. But you definitely operate at a higher velocity, and that's a lot of the benefit you're trying to get out of it: you can get stuff out to the market faster, or, if you take a little more time, you get more out to the market in the same amount of time. You can turn around and fix problems faster. If you have a vulnerability, you can get it fixed and pushed out much more quickly. If you have a competitive threat you need to address, you can move that much faster. If you have a critical bug — I mean, all security issues are bugs, sort of by definition, but if you have a functionality bug — you can get that pushed out faster. So I think all parts of the business benefit from this increase in speed. And I think developers do too, because any human who has to context switch and step away from something for longer than a few minutes is going to have to load it all back up again, and that's a productivity loss. It's a soft cost, but man, is it expensive, and is it painful. So you see a lot of benefit there, I think.
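The "automated gates" mentioned above might look something like the sketch below: a promotion step that refuses to push an artifact forward if scan results contain findings above an agreed severity. The findings-file format and the threshold policy are hypothetical, standing in for whatever a real scanner emits.

```python
"""A minimal promotion gate: block the pipeline when scan findings exceed an agreed severity.

Sketch only — the findings-file format and severity policy are hypothetical.
"""
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
BLOCKING_SEVERITY = "high"  # hypothetical policy: anything high or critical blocks promotion

def gate(findings_path: str) -> int:
    with open(findings_path) as handle:
        findings = json.load(handle)  # expected shape: [{"id": ..., "severity": ...}, ...]
    blocking = [
        finding for finding in findings
        if SEVERITY_RANK.get(finding["severity"], 0) >= SEVERITY_RANK[BLOCKING_SEVERITY]
    ]
    for finding in blocking:
        print(f"blocking finding {finding['id']}: severity={finding['severity']}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

A non-zero exit code is all most CI systems need to stop the promotion, which is what makes minutes-to-production velocity safe rather than reckless.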
>>If you have an organization that is just starting this journey, what would you ask that organization to consider in order to move down this path? >>It's by far the most frequent — and almost always the first — question I get at the end of a talk or presentation: where do we start? How do I know where to start? And there are a couple of answers. One is: don't boil the ocean. Don't try to fix everything all at once, because that's not agile, right? Be agile about your transformation. Pick a set of problems that you have, basically make a burn-down list, and do them in order. Find a pain point and go address it, and try to make it small and actionable — especially early on, when you're trying to effect change and you're trying to convince teams that this is the way to go. You may have some naysayers, or people who are skeptical or who have been through these processes before when they were failures, or at least not the successes they were supposed to be. It's important to have some wins. So what I always say is: if you have a pebble in your shoe, you've got a pain point, and you know how to address it. You're not going to address it by changing out your wardrobe or buying a new pair of shoes; you're going to address it by taking your shoe off, shaking it until the pebble falls out, and putting the shoe back on. Look for those kinds of use cases. If your engineers are complaining that whenever I check in, the build is broken — and we're not doing CI — well, then let's look at doing CI. Let's do CI, right? For most organizations, setting up CI is a very manageable, very doable thing. There's lots of open source tooling out there, lots of commercial tooling out there, to do it for small teams, for large teams, and everything in between. If the problem is: gosh, every time we push a change we break something, or every time something works in staging it doesn't work in production — then you've got to look at how those systems are being configured. If you're configuring them manually, stop: automate their configuration. If you're fixing systems manually, don't — as a friend of mine says, don't fix, repave. There's a story about how Google operates its data centers: they don't go looking for a broken disk drive and swap it out when it breaks. They have a team of people who, once a month or so — I don't know what the interval is — just walk through the data center, pull out all the dead stuff, and throw it out. What they did was assume that at the scale they operate, physical things are always going to break, and you have to build the software to assume that breakage. Any system that assumes a human will step in when a disk drive breaks, and fix it so we can get back to running, just isn't going to work at scale. There's a parallel to that in software: any time you have these kinds of complex systems, you have to assume they're going to break, and you have to put things in place to catch that — automated testing, whether you have 10,000 tests you've already written or no tests at all and you just need to go write your first one. That journey, you've got to start somewhere. But my answer to that question is generally: just start small. Pick a very specific problem, build a plan around it, build a burn-down list of things you want to address, and work your way down it, the same way you would for any agile project. Your transformation of your own processes and your own internal systems should use agile processes as well, because if you go off for six months and build something, by the time you come back it's going to be relevant, probably, only to the problems you were facing six months ago.
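As a toy illustration of "don't fix, repave" — not a depiction of any tooling discussed in the interview — a remediation script can rebuild and replace a container from its source-controlled definition instead of patching the running instance. The image and container names are hypothetical; the docker commands themselves are standard.

```python
"""'Don't fix, repave': rebuild from the declared definition and replace the running instance.

A sketch using standard docker CLI commands via subprocess; the image and container
names are hypothetical.
"""
import subprocess

IMAGE = "registry.example.internal/billing-service:rebuild"  # hypothetical
CONTAINER = "billing-service"                                # hypothetical

def run(*args: str) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def repave() -> None:
    # Rebuild the image from source-controlled definitions rather than patching in place.
    run("docker", "build", "-t", IMAGE, ".")
    # Remove the old instance (ignore failure if it is not running)...
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)
    # ...and start a fresh one from the rebuilt image.
    run("docker", "run", "-d", "--name", CONTAINER, IMAGE)

if __name__ == "__main__":
    repave()
```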
>>Then let's consider the situation of a company that's using CI, and maybe CI and CD together, and they want to reach what you might call the next level. They've seen obvious benefits, and they're interested in increasing their investment and the cycles devoted to this technology — you don't have to sell them anymore — but they're looking for a next direction. What would you say that direction should be? >>I think oftentimes what organizations start to do is look at feedback loops, and that starts to go into the area of metrics and analytics. We're always concerned about, and affected by, things like mean time to recovery, mean time to detection, our cycle time from ideation to code commit, our cycle time from code commit to production — those sorts of things. And you can't change what you don't measure. So a lot of times the next step, after getting the rudiments of CI or CD or some combination of both in place, is to start to measure stuff. But you've got to be smart about it, because what you don't want to do is pull out all the metrics that exist, barf them up on a dashboard and giant television screens, and say: boom, metrics. Mic drop, go home. That's the wrong way to do it. You want to use metrics very specifically to achieve outcomes. If you have an outcome you want to achieve and you can tie it to a metric, start looking at that metric and start working that problem. Once you solve that problem, you can take that metric — if it's the one on the big-screen TV — pop it off, pick the next one, and put that up there. It's a little different when you're in a NOC or something like that, looking at network stuff and so on, but I'm always leery when I walk into a software development organization and there are a bazillion different metrics plastered all over the place, because they're not all relevant — or at least not all relevant at the same time. Some of them you want to look at often; some of them you just want to set an alarm on. I mean, you don't go down to your basement every day to check that the sump pump is working. What you do is put a little water detector in there and have an alarm go off if the water level ever rises above a certain amount. You want to do the same thing with metrics. Once you've got the water out of your basement, you don't have to go down there and look at it all the time; you put the little detector in, and then you move on and worry about something else. And so, as organizations get a bit more sophisticated and start to look at the analytics and the metrics, they start to say: hey, if our cycle time from commit to deploy is this much, and we want it to be this much — what happens during that time, and where can we take slices out of it without affecting the outcomes in terms of quality and so on? Or, from ideation to code commit, what can we do there? You start to do that.
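In the spirit of the sump-pump analogy — and purely as a hypothetical sketch, with invented event data and an invented target — a delivery metric can be computed from pipeline events and surfaced only when it drifts past the agreed goal:

```python
"""Treat a delivery metric like a sump-pump alarm: compute it, stay quiet unless it breaches the target.

All event data and the target below are invented for illustration.
"""
from datetime import datetime
from statistics import median

TARGET_CYCLE_TIME_HOURS = 24  # hypothetical goal for commit-to-deploy

# (commit time, deploy time) pairs, e.g. pulled from your SCM and deployment tooling.
EVENTS = [
    (datetime(2020, 9, 1, 9, 0), datetime(2020, 9, 1, 16, 0)),
    (datetime(2020, 9, 2, 11, 0), datetime(2020, 9, 4, 10, 0)),
    (datetime(2020, 9, 3, 15, 0), datetime(2020, 9, 3, 18, 30)),
]

def commit_to_deploy_hours() -> float:
    return median((deploy - commit).total_seconds() / 3600 for commit, deploy in EVENTS)

if __name__ == "__main__":
    cycle_time = commit_to_deploy_hours()
    if cycle_time > TARGET_CYCLE_TIME_HOURS:
        # This is the only time anyone should have to look at the number.
        print(f"ALERT: median commit-to-deploy is {cycle_time:.1f}h, target is {TARGET_CYCLE_TIME_HOURS}h")
    else:
        print(f"ok: median commit-to-deploy is {cycle_time:.1f}h")
```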
And then, as you get those virtuous cycles of feedback loops happening, you get better and better. But you want to be careful with metrics. Like I said, you don't want to barf a bunch of metrics up just to say: look, we've got metrics. Metrics are there to serve a particular outcome, and once you've achieved that outcome — and you know you can continue to achieve it — you turn it into an alarm or a trigger and put it out of sight. You don't need to have, say, a code coverage metric prominently displayed: you pick a code coverage number you're happy with, you work to achieve it, and once you achieve it, you just worry about not going below that threshold again. So you can take that graph down and put a trigger on it that says: if we ever get below this, raise an alarm, or fail a build, or fail a pipeline — and then start to focus on improving another metric, or another outcome using another metric. >>That makes enormous sense. I'm afraid we're getting to be out of time. I want to thank you very much, Anders, for joining us today. This has certainly been informative for me, and I hope for the audience. Thank you very, very much for sharing your insight.
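As one concrete, hypothetical way to turn the coverage metric into a trigger rather than a dashboard fixture: a pipeline step can read the coverage report and fail the build below an agreed floor. The sketch assumes a Cobertura-style coverage.xml whose root element carries a line-rate attribute (as coverage.py can produce); adjust for whatever your tooling emits.

```python
"""Turn code coverage from a dashboard metric into a pipeline trigger.

Assumes a Cobertura-style coverage.xml with a line-rate attribute on the root element;
the floor value is a hypothetical team agreement.
"""
import sys
import xml.etree.ElementTree as ET

COVERAGE_FLOOR = 0.80  # hypothetical: the number the team agreed it will not drop below

def check(report_path: str = "coverage.xml") -> int:
    root = ET.parse(report_path).getroot()
    line_rate = float(root.attrib["line-rate"])
    if line_rate < COVERAGE_FLOOR:
        print(f"FAIL: line coverage {line_rate:.1%} is below the {COVERAGE_FLOOR:.0%} floor")
        return 1
    print(f"ok: line coverage {line_rate:.1%}")
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1] if len(sys.argv) > 1 else "coverage.xml"))
```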

Published Date : Sep 15 2020


Securing Your Cloud, Everywhere


 

>>Welcome to our session on security, titled Securing Your Cloud, Everywhere. With me is Brian Langston, senior solutions engineer from Mirantis, who leads security initiatives for Mirantis's most security-conscious customers. Our topic today is security, and we're setting the bar high by talking in some depth about the requirements of the most highly regulated industries. So, Brian: for regulated industries, what do you perceive as the benefits of the evolution from classic infrastructure-as-a-service to container orchestration? >>Yeah, the adoption of container orchestration has given rise to five key benefits. The first is accountability. Think about the evolution of DevOps and the security-focused version of that team, DevSecOps. These two competencies have emerged to provide, among other things, accountability for the processes they oversee and the outputs they enable. The second benefit is auditability. Logging has always been around, but the pervasiveness of logging data within container environments allows for the definition of audit trails in new and interesting ways. The third area is transparency. Organizations that have well-developed container orchestration pipelines are much more likely to have a higher degree of transparency in their processes. This helps development teams move faster, helps operations teams identify and resolve issues more easily, and helps simplify the observation and certification of security operations by security organizations. Next is quality. Several decades ago, Toyota revolutionized the manufacturing industry when it implemented the philosophy of continuous improvement. Included within that philosophy was a dependency on, and trust in, the process: as the process was improved, so was the quality of the output. Similarly, the refinement of the process of container orchestration yields a higher quality output. The four things I've mentioned ultimately point to a natural outcome, which is speed. When you don't have to spend so much time wondering who does what or who did what, when you have clear visibility into your processes, and because you can continuously improve the quality of your work, you aren't wasting time in a process that produces defects or spending time in wasteful rework phases. You can move much faster, and we've seen this to be the case with our customers. >>So what is it specifically about container orchestration that gives these benefits? I guess I'm really asking why these benefits are emerging now, around these technologies. What's enabling them? >>Right. I think it boils down to four things related to the orchestration pipelines that are also critical components of successful security programs for our customers and related industries. The first one is policy. One of the core concepts in container orchestration is this idea of declaring what you want to happen, or declaring the way you want things done. One place where declarations are made is policies. So as long as we can define what we want to happen, it's much easier to do complementary activities like enforcement, which is our second enabler. Tools that allow you to define a policy typically have a way to enforce that policy; where that isn't the case, you need to have a way of enforcing and validating the policy's objectives. Mirantis tools allow custom policies to be written, and also enforce those policies. The third enabler is the idea of a baseline. Having a well-documented set of policies and processes allows you to establish a baseline.
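As a purely illustrative sketch of the declare-then-enforce idea — not Mirantis's policy engine or any real policy format — a policy can be written as data and evaluated automatically against each candidate image or deployment. Every field name below is hypothetical.

```python
"""Declare a policy as data, then enforce it automatically.

A toy sketch of the declare-then-enforce idea, not any vendor's policy engine or format;
all field names are hypothetical.
"""

# A hypothetical declared policy.
POLICY = {
    "allowed_registries": ["registry.example.internal"],
    "require_scan_passed": True,
    "forbidden_tags": ["latest"],
}

def evaluate(image: dict, policy: dict = POLICY) -> list:
    """Return a list of violations for a candidate image; an empty list means compliant."""
    violations = []
    registry = image["name"].split("/")[0]
    if registry not in policy["allowed_registries"]:
        violations.append(f"registry '{registry}' is not in the allowed list")
    tag = image["name"].rsplit(":", 1)[-1]
    if tag in policy["forbidden_tags"]:
        violations.append(f"tag '{tag}' is forbidden")
    if policy["require_scan_passed"] and not image.get("scan_passed", False):
        violations.append("image has not passed a vulnerability scan")
    return violations

if __name__ == "__main__":
    candidate = {"name": "registry.example.internal/payments:1.7.3", "scan_passed": True}
    problems = evaluate(candidate)
    print("compliant" if not problems else "violations: " + "; ".join(problems))
```

Because the policy is data, the same declaration can drive enforcement in the pipeline and double as the documented baseline that later assessments measure against.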
It allows you to know what's normal. Having a baseline allows you to measure against it as a way of evaluating whether or not you're achieving your objectives with container orchestration. The fourth enabler is continuous assessment, which is about measuring constantly — back to what I said a few minutes ago about the Toyota Way. Measuring constantly helps you see whether your processes and your target end state are being delivered; as your output deviates from that baseline, adjustments can be made more quickly. So these four concepts, I think, can really make or break your compliance status. >>It's a really interesting way of thinking about compliance. I had previously thought about compliance mostly as a matter of legally declaring something and then trying to do it. But at this point we have methods beyond legal boilerplate for asserting what we want to happen, as you say, and this is actually opening up new ways to detect deviation and act on failures to comply. That's really exciting. So you've touched on the benefits of container orchestration here, and you've provided some thoughts on what the drivers and enablers are. Where does Mirantis fit in all this? How are we helping enable these benefits? >>Right. Well, our goal at Mirantis is ultimately to make the world's most compliant distribution. We understand what our customers need, and we have developed our product around those needs, and I can describe a few key security aspects of it. Mirantis promotes this idea of building and enabling a secure software supply chain. The simplified version of that, as it pertains directly to our product, follows a build-ship-run model. At the build stage is Docker Trusted Registry. This is where images are stored, following numerous security best practices. Image scanning is an optional but highly recommended feature to enable within DTR, and image tags can be regularly pruned so that you have the most current validated images available to your developers. The second, or middle, stage is the ship stage, where Mirantis enforces policies that follow industry best practices, as well as custom image promotion policies that our customers can write and align to their own internal security requirements. The third and final stage is the run stage, and at this stage we're talking about the engine itself. Docker Engine Enterprise is the only container runtime with FIPS 140-2 validated cryptography, and it has many other security features built in; communications across the cluster, across the container platform, are all secure by default. So this build-ship-run model is one way our products help support the idea of a secure supply chain. There are other aspects of the secure supply chain that are more customer-specific that I won't go into, but that's how our product can help. The second big area is STIG certification. A STIG is basically an implementation or configuration guide published by the U.S. government for products used by the U.S. government. It's not exclusive to them, but customers that value security highly, especially in regulated industries, will understand the significance and value that the STIG certification brings. So in achieving the certification, we've demonstrated compliance, or alignment, with a very rigid set of guidelines.
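To illustrate the build and ship stages in the most generic terms — this is a sketch of standard Docker CLI usage, not of DTR's own promotion or scanning interfaces — a pipeline step might build an image and push it with content trust enabled so that only signed images move forward. The registry and image names are hypothetical.

```python
"""Build and ship with signing enabled: a generic sketch using standard docker CLI commands,
not any registry's specific promotion or scanning interface.
Registry and image names are hypothetical.
"""
import os
import subprocess

IMAGE = "dtr.example.internal/payments/api:1.7.3"  # hypothetical registry/repo/tag

def run(args, **extra_env):
    env = {**os.environ, **extra_env}
    print("+", " ".join(args))
    subprocess.run(args, check=True, env=env)

def build_and_ship() -> None:
    # Build from the source-controlled Dockerfile in the current directory.
    run(["docker", "build", "-t", IMAGE, "."])
    # Push with Docker Content Trust enabled so the image is signed on the way in;
    # unsigned images can then be rejected by downstream stages.
    run(["docker", "push", IMAGE], DOCKER_CONTENT_TRUST="1")

if __name__ == "__main__":
    build_and_ship()
```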
The FIPS validation of the cryptography and the STIG certification are third-party attestations that our product is secure, whether you're using it as a government customer, as a customer in a regulated industry, or in some other context. >>I didn't understand what a STIG really was, so that's helpful — it's not something people in the industry by and large talk about, I suspect because these things are hard and time-consuming to get, so they don't tend to bubble up to the top of marketing speak the way glitzy new features do, features that may or may not be secure. So then, moving on: how has container orchestration changed how your customers approach compliance assessment and reporting? >>Yeah, this has been an interesting experience and observation as we've worked with some of our customers in these areas. I'll call out three areas. One is the integration of assessment tooling into the overall development process. The second is assessment frequency. And the third is how results are being reported, which includes what data needs to go into the reporting. There are very likely others that could be addressed, but those are three things I have noticed personally in working with customers. >>What do you mean, exactly, by integration of assessment tooling? >>Our customers all generally have some form of development pipeline and process, with various third-party and open source tools that can be inserted at various phases of the pipeline to do things like static source code analysis, host scanning, image scanning, and other activities. What's not very well established, in some cases, is how everything fits within the overall pipeline framework. So many customers end up having a conversation with us about: what commands should be run, and with what permissions? Where in the environment should things run? How does the code that does this scanning get there? Where does the data go once the scan is done, and how will I consume it? These are real areas where we can help our customers understand what integration of assessment tooling really means. >>It's fascinating to hear this, and maybe we can come back to it at the end. But what I'm picking out of the way you speak about this is a kind of re-emergence of the Japanese innovations in factory-floor productivity — just-in-time delivery, the Toyota miracle, that kind of thing. Yesterday, Anders Wahlgren from CloudBees, the CI/CD expert, told me that one of the things he likes to tell his consultees and customers is to put a GoPro on the head of your code and figure out where it's going and how it's spending its time — which is very reminiscent of the 1950s time-and-motion studies that pioneered accelerating the factory floor in the industrial America of the mid-century. The idea that we should be coming back around to this, and doing it at light speed with code, is quite fascinating. >>Yeah, it's funny how many of those same principles are transferable from 50, 60, 70 years ago to today. Quite fascinating. >>So getting back to what you were just talking about — integrating assessment tooling — it sounds like that's very challenging. And you mentioned assessment frequency and reporting.
What is it about those areas that has required adaptation? >>So — assessment frequency. In legacy environments, if we think about what those looked like not too long ago, compliance assessment used to be a relatively infrequent activity in the form of some kind of audit, whether a friendly peer review, an intercompany audit, or a formal third-party assessment. In many cases these were big, lengthy reviews full of interview questions, requests for information, periods of data collection, and then the actual review itself. One of the big drawbacks of this lengthy, infrequent engagement is that vulnerabilities would sometimes go unnoticed or unmitigated until these reviews happened. But in this era of container orchestration, with the decomposition of everything in the software supply chain and with clearer visibility into the various inputs to the build life cycle, our customers can now focus on what tooling and processes can be assembled together, in the form of a pipeline, to allow constant inspection of a continuous flow of code from start to finish. And they're asking how our product can integrate into their pipelines and their QA frameworks to help simplify this continuous assessment framework. So that addresses the frequency challenge. Now, regarding reporting: our customers have had to re-evaluate how results are reported and what data is needed in the reporting. The root of this change is the fact that security has multiple stakeholder groups, and I'll focus on just two of them. One is development, and their primary focus, if you think about it, is really finding and fixing defects — that's what they're focused on as they push code. The other group is the Security Project Management Office, or PMO. This group is interested in which security controls are at risk due to those defects. So the data you need for these two stakeholder groups is very different, but because it's also related, it requires a different approach to how the data is expressed, formatted, and ultimately integrated — sometimes with different data sources — to serve both use cases. >>So how does Mirantis help improve the rate of compliance assessment, as well as this need for differential data presentation? >>Right. So we've developed and exposed APIs that help report the compliance status of our product as it's implemented in our customers' environments. Through these APIs, we express the data in industry-standard formats using OSCAL. OSCAL is a relatively new project out of NIST; it's really about standardizing a set of formats for expressing control information. In this way our customers can get machine- and human-readable information related to compliance, and that data can then be massaged into other tools or downstream processes that our customers might have. What I mean by downstream processes is: if you're a development team, you have the inspection tools and processes to gather findings — defects related to your code. A downstream process might be the ticketing system that logs a formal defect for that finding. But it all starts with having a common, standard way of expressing the scan output and the findings, such that both development teams and the security PMO groups can benefit from the data.
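As a purely hypothetical sketch of why a standard, machine-readable format matters — the JSON shape below is invented and is not the actual OSCAL schema — the same findings feed can be reshaped into the two views just described: a defect work list for developers and a controls-at-risk summary for the security PMO.

```python
"""One machine-readable findings feed, two stakeholder views.

The JSON shape below is invented for illustration; it is not the actual OSCAL schema.
The point is only that standardized output can be reshaped for different audiences.
"""
import json
from collections import defaultdict

FINDINGS_JSON = """
[
  {"id": "F-101", "title": "Container runs as root", "severity": "high",
   "component": "payments-api", "related_controls": ["AC-6"]},
  {"id": "F-102", "title": "Base image has known CVE", "severity": "critical",
   "component": "payments-api", "related_controls": ["RA-5", "SI-2"]},
  {"id": "F-103", "title": "Verbose error messages", "severity": "low",
   "component": "web-frontend", "related_controls": ["SI-11"]}
]
"""

def developer_view(findings):
    # Developers want a work list: what to fix, where, and how urgent.
    return [f"[{f['severity']}] {f['component']}: {f['title']} ({f['id']})" for f in findings]

def pmo_view(findings):
    # The security PMO wants to know which controls are at risk and which findings touch each.
    at_risk = defaultdict(list)
    for finding in findings:
        for control in finding["related_controls"]:
            at_risk[control].append(finding["id"])
    return dict(at_risk)

if __name__ == "__main__":
    findings = json.loads(FINDINGS_JSON)
    print("\n".join(developer_view(findings)))
    print("controls at risk:", pmo_view(findings))
```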
So essentially we've been following this philosophy of transparency in security. What we mean by that is that security isn't — or should not be — a black box of information accessible and consumable only by security professionals. Assessment is happening proactively in our product, and it's happening automatically. We're bringing security out of obscurity by exposing the aspects of our product that ultimately have a bearing on your compliance status, and then making that information available to you in very user-friendly ways. >>That's fascinating. I have been excited about OSCAL since first hearing about it. It seems extraordinarily important to have what is, in effect, a query capability — something that lets different people, for different reasons, formalize and ask questions of a system that is constantly in flux. Very powerful. So regarding security, what do you see as the basic requirements for container infrastructure and tools for use in production by the industries that you work with? >>Right. Obviously the tools and infrastructure are going to vary widely across customers, but to generalize, I would refer back to the concept I mentioned earlier of a secure software supply chain. There are several guiding principles behind it that are worth mentioning. The first is to have a strategy for ensuring code quality. What this means is being able to do static source code analysis. Static analysis tools are largely language-specific, so there may be a few different tools you'll need to manage. The second point is to have a framework for doing regular testing, or even slightly more formal security assessments. There are plenty of tools that can help get a company started doing this — scanning engines like OpenSCAP, which implements NIST's SCAP standards. OpenSCAP can use CIS Benchmarks as inputs, and these tools do a very good job of summarizing and visualizing output. Along the same lines, there are many, many benchmarks published, and if you look at your own container environment, there are very likely benchmarks for the core building blocks of that environment. There are benchmarks for Ubuntu, for Kubernetes, for Docker, and the list is always growing; in fact, Mirantis is editing the benchmark for containerd, so a formal CIS Benchmark for that will be coming up very shortly. The next item is defining security policies that align with your organization's requirements. There are a lot of things that come out of the box — standard, default settings — in various products, including ours, but we also give you, through our product, the ability to write your own policies that align with your own organization's requirements. Minimizing your attack surface is another key area. That means only deploying what's necessary — pretty common sense, but sometimes overlooked — and enabling only required ports and services and nothing more. It's related to the concept of least privilege, which is the next thing I would suggest focusing on: only allowing permissions to those people or groups that are absolutely necessary. Within the container environment, you'll likely have heard of the deny-all approach, and that approach is recommended here: deny everything first, and then explicitly allow only what you need.
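As one common, concrete expression of deny-by-default — offered as a generic Kubernetes example rather than anything specific to the product being discussed — a namespace can start from a NetworkPolicy that selects every pod and allows no traffic, with narrowly scoped allow rules layered on afterwards. The namespace name is hypothetical, and enforcement requires a CNI plugin that supports NetworkPolicy.

```python
"""Emit a default-deny Kubernetes NetworkPolicy manifest (as JSON, which kubectl also accepts).

A generic example of the deny-everything-first principle; the namespace name is hypothetical,
and enforcement requires a CNI plugin that supports NetworkPolicy.
"""
import json

default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "payments"},  # hypothetical namespace
    "spec": {
        "podSelector": {},                     # empty selector: applies to every pod in the namespace
        "policyTypes": ["Ingress", "Egress"],  # deny all inbound and outbound traffic by default
    },
}

if __name__ == "__main__":
    # Pipe into `kubectl apply -f -`; additional policies then allow only what is needed.
    print(json.dumps(default_deny, indent=2))
```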
That's a very common thing that's sometimes overlooked in some of our customer environments. And finally, the idea of defense in depth, which is about minimizing your blast radius by implementing multiple layers of defense that are also in line with your own risk management strategy. Following these basic principles, and adapting them to your own use cases and requirements, can — in our experience with our customers — go a long way toward a secure software supply chain. >>Thank you very much, Brian. That was pretty eye-opening. I had the privilege of listening to it from the perspective of someone who has been working behind the scenes on the Launchpad 2020 event, so I'd like to use that privilege to recommend to our listeners — certainly if you work in a development role within one of these regulated industries — that you check out, which will be easy to do today since everything is available once it has been presented, Matt Bentley's live presentation on the secure supply chain, where he demonstrates one possible example of a secure supply chain with image signing, scanning, and content trust. You may also want to check out the session I conducted with Anders Wahlgren at CloudBees, who talks about these industrial-efficiency, factory-floor, time-and-motion models for assessing where software is, in order to understand what policies can and should be applied to it. And you will probably want to frequent the tutorial sessions in that track to see how Docker Enterprise Container Cloud implements many of these concentric security policies — in order to provide, as you say, defense in depth. There's a lot going on in there, and it's fascinating to see it all expressed. Brian, thanks again. This has been really educational. >>My pleasure. Thank you. >>Have a good afternoon. >>Thank you too. Bye.

Published Date : Sep 15 2020


WINNING THE ROADMAP RACE


 

>>Well, thank you, everyone, and welcome to Winning the Roadmap Race: how to work with tech vendors to get the features that you need. We're here today with representatives of RBC Capital Markets, who will share some of their best practices for collaborating with technology vendors. I am Ada Mancini, solution architect here at Mirantis, and we're joined by Tina Bustamante, senior product manager, RBC Capital Markets, and Manoj Agarwal, head of capital markets compute and data fabric. RBC has been using Docker since about 2016, and you've been closely involved with that effort. What moved you to begin containerizing applications? >>Okay — hi there. Thank you for having us. Back in 2016, when we started our journey, one of our major focus areas was maturing DevOps capabilities, and what we found was that it was challenging to adopt DevOps across applications with different shapes and sizes and different tech stacks. And, being in the financial industry, we do have a large presence of vendor applications, so making all of that work was challenging. This is where containers were appealing to us. In those early days, we started looking at containers as a possible solution to create standardization across different applications — to have a consistent format. Beyond that, we also saw containers as a technology that could be adopted across the enterprise, not just by a small subset of applications, so that was very interesting to us. In addition, containers came with schedulers like Kubernetes or Swarm, which would do a lot more than the traditional schedulers — for example resource management, failover management, or scaling up and down depending on application or business requirements. So all those things were very appealing. They looked like solutions to a number of challenges we were facing, and that's when we got started with containers. >>So what subsequently motivated you to start utilizing Swarm and then Kubernetes? >>Yeah — beyond resource management, there's failover management. As you can imagine, managing failovers and DR is never easy, and with the container schedulers we saw that it kind of becomes a managed service for us. Another aspect: we're in a heavily regulated industry — capital markets especially — so creating an audit trail of events, who did what and when, is important, and containers seem to provide all of that out of the box. Another thing we saw with containers and the schedulers is that we could simplify our risk management: we could control what application, in which container, gets deployed where, how it runs, and when it runs. So all those aspects of the schedulers seemed, at the time, to simplify a lot of the traditional challenges, and that's what was very appealing to us. >>So what kind of changes were required in the development culture, and in operations, to enable this new platform and this new delivery method? >>Yeah, that's a good question. Any change obviously requires a lot of education, and this was not just a change for our developers or operations — it was a change across the organization, starting with project managers, business analysts, developers, QA, and our support personnel.
In addition, I talked about risk and security management, so it really is a change across the organization — a cultural change. So collaboration, alongside education, was extremely important. Across those two, we started first with internal education, using things like internal lunch-and-learns, and we did some external and hands-on workshops; a lot of those exercises were done in collaboration across all those groups. The next item we focused on was how to give our high-end developers awareness of this technology and make sure they could see, or identify, the use cases that can benefit from it. So we picked high-end developers and applications for kind of try-before-you-buy scenarios. We ran through some applications to make sure they got their hands dirty and felt comfortable with it, so they could then broadcast that message to the broader organization. The next thing we did was get management buy-in. Obviously any change is going to require investment, so making sure there was a value proposition that was clear to our management, as well as our business, was critical very early on in the container adoption phase. And the last thing I would say is clearly defining the strategic benefits: defining a roadmap of how we will proceed, how we go from low-risk to medium-risk to high-risk applications, and what the strategic benefits are — are they purely operational, purely cost benefits, or a modernization of the underlying tech stack? Containers do check all three of those boxes. So that was our fourth item, the last thing I would say changed in the container adoption journey. >>So as people are getting onto the containerization process, and as this is starting to gain traction, what things did your developers embrace as the real, tangible benefits of moving to container platforms? >>It's interesting — the benefits are not just for developers, and the way I'll answer this question is not from development to operations, but from operations to developers. Operationally, the moment developers saw that an application could be deployed with containers relatively quickly — without having them on a call, without them writing long release notes — they started seeing the benefit right away. I don't need to be there late in the evening; I don't need to be on call to create the environment or deploy to QA versus production versus DR. To them it was: do it right once, then repeat that success across different environments. That was a big eye-opener, and they started realizing: look, I can free up my time now. I can focus on my core development, and I don't need to deal with the traditional operational issues. That was quite eye-opening for all of us, not just the developers, and we started seeing those benefits very early on. Another thing the developers talked about was: hey, I can validate this application on my laptop. I don't need to be on servers, I don't need all these servers, I don't need to share my servers.
I don't need to depend on infrastructure teams or other teams to get their checks done before I can start my work — I can validate on my laptop. That was another very powerful feature that empowered them. The last thing I would say is the software-defined aspect of the technology — network or storage, for example. A lot of these are things developers traditionally have to call someone for, wait on, and deal with tickets for. Now they can do a lot of it themselves — they can define it themselves — and that's very empowering. From their perspective, with our move toward the left, the more control developers have, the better the product is, the better the quality of the product, the time to market improves, and the overall experience and business benefits all start to improve. One extra point I'd like to make here: the success of this was so interesting to the development community that even our developers from the business side came along and showed interest in adopting containers — whether the developers from the quant side or the data science developers, they all started realizing the value proposition of containers. So it was quite eye-opening, I would have to say. >>And so while this process was happening, while you were moving to container platforms, you started looking for new ways to deliver some of the benefits of containers and distributed-systems orchestration more widely across the organization. And I think you identified a couple of areas where the Docker Enterprise Kubernetes service wasn't meeting the features you anticipated, or hadn't planned on integrating the features you required. Can you tell us about that situation? >>Certainly. Hi, Ada — thanks for having us again. From the product management perspective, I would say products are always evolving, and capabilities can be at different stages of maturity. So when we reviewed what our application teams and our businesses were looking to do, one area that stood out was definitely the data science space. Our quants and data scientists really wanted to expand our risk analysis models; they were looking for larger scale — a lot more computing power — and we tried to come up with a way to facilitate their needs. One thing that came from a very early concept was the idea of being able to leverage GPUs. We stood up a small R&D team to see if there was something feasible on our end, but based on different factors and considerations, and the technical thinking involved, we realized that the complexity it would bring to our overall technical stack was not something we would be best suited to take on ourselves. So we reached out to Mirantis and brought forth the concept of being able to scale Kubernetes pods on GPUs. We relied on their expertise, on their engineers, to think about expanding their Kubernetes offering to scale and potentially support running pods on GPUs. It definitely was not something that came about from one day to the next; it did involve a number of conversations.
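For readers unfamiliar with what "running pods on GPUs" looks like in practice, here is a generic sketch — not RBC's or Mirantis's actual configuration — of a pod manifest requesting a GPU through Kubernetes's standard extended-resource mechanism. It assumes the cluster exposes GPUs through a device plugin (for example the NVIDIA plugin registering nvidia.com/gpu); the image and workload names are hypothetical.

```python
"""Emit a minimal pod manifest that requests one GPU via Kubernetes extended resources.

A generic sketch, not any particular bank's or vendor's configuration. It assumes the
cluster exposes GPUs through a device plugin (e.g. the NVIDIA plugin registering
'nvidia.com/gpu'); image and names are hypothetical.
"""
import json

gpu_job_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "risk-model-worker"},  # hypothetical workload name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "worker",
            "image": "registry.example.internal/risk-model:2.3",  # hypothetical image
            "resources": {
                "limits": {"nvidia.com/gpu": 1}  # schedule onto a node with a free GPU
            },
        }],
    },
}

if __name__ == "__main__":
    # Pipe into `kubectl apply -f -`; the scheduler places the pod only where a GPU is available.
    print(json.dumps(gpu_job_pod, indent=2))
```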
But I'm happy to say that in recent months it has become part of the Kubernetes product offering. >>Yeah, I believe that effort did take a while and a lot of engineering work, and I think initially you had done some internal R&D to try to build those features, but ultimately you decided to go with a different strategy and rely on the vendor to produce them as part of the vendor's product. Can you elaborate on what you found in that internal R&D? >>Well, we definitely saw the potential — there was definitely potential there — but the longevity of actually maintaining that GPU scaling on Kubernetes on our own was just not within our expertise, or something we wanted to take on rather than collaborating more closely with the vendor. Technology is always evolving, so keeping up with the latest features and capabilities, and the testing and QA involved, was just not something we thought we should be taking on on our own. >>Okay — so rather than spending the time and engineering effort there, you focused on the data science and quantitative analysis parts, I see. And then, ultimately, working with the vendor produced a release where these features are now available. What did that engagement look like, with RBC's involvement? >>I would say the engagement started off with discussing it and bringing it forth, being very open and transparent, so that delivery was always the focus. It started with discussing the business case — why we would require the feature. The representatives and others engaged on the Mirantis side had their own thoughts and opinions; being able to run the workloads on GPUs would be something they would ultimately, as I mentioned, have to support on their end. So we worked with them very closely. There was very much a willingness to collaborate: we held a number of meetings, and we discussed how the GPU support would actually evolve. It wasn't something that came about within one sprint — that was never our expectation. It did take a couple of weeks to be able to see a beta product, opine on it, see a demo, review it, and discuss it further. And, as you know, sometimes there might be a release where a capability is offered but there are delays — it's just part of our industry, in a sense. We're very much risk-averse, as Manoj mentioned; when you are a financial institution, you want to make sure it's a viable product, that it's definitely available off the shelf, and then you can leverage it. But yeah, the key point, I would say, in terms of being able to bring the feature forward, was constant communication with Mirantis. >>That's excellent. I'm glad we were able to help bring that feature forward. I think it's something a lot of people have been asking for, and, like you said, it enables a whole new class of problem solving. Okay — Manoj, Tina, thank you for your time today. It's been wonderful talking to you. That is our session on working with your vendors. I want to thank everyone who's watching for taking the time to join our conference. Thank you.

Published Date : Sep 15 2020


DOCKER CLI


 

>> Hello, my name is John Sheikh from Mirantis. Welcome to our session on new extensions for the Docker CLI. As we all know, containers are everywhere, Kubernetes is coming on strong, and the CNCF cloud landscape slide has become a marvel to behold; its complexity is about to surpass that of the photolithography dies used to fabricate the old Intel 286, and future generations of the diagram will be built out and up into multiple dimensions using extreme ultraviolet lithography. Meanwhile, complexity is exploding, and uncertainty about tools, platform details, processes, and the economic viability of our companies in changing and challenging times is also increasing. Mirantis, as you've already heard today, believes that achieving speed is critical and that speed results from balancing choice with simplicity and security. You've heard about Docker Enterprise Container Cloud, a new framework built on Kubernetes that lets you deploy compliant, secure-by-default Kubernetes clusters on any infrastructure, providing a seamless, self-service-capable cloud experience to developers. Get clusters fast, just as you need them; update them seamlessly; scale them as needed, all while keeping workloads running smoothly. And you've heard how Docker Enterprise Container Cloud also provides all the Day One and Day Two observability tools, the integration APIs, and top-down security, identity, and secrets management to run operations efficiently. You've also heard about Lens, an open source IDE for Kubernetes, aimed at speeding up the most demanding, tightest inner loop of Kubernetes application development. Lens beautifully meets the needs of a new class of developers who need to deal with multiple Kubernetes clusters and multiple apps and projects efficiently, developers who find themselves getting bogged down in CLI-only kubectl workflows and the context switches into and out of them. But what about Docker developers? They're working with the same core technologies all the time. They're accessing many of the same amenities, including Docker Engine - Enterprise, Docker Trusted Registry, and so on. Sure, their outer loop might be different; for example, they might be orchestrating on Swarm. Many companies are; our Future of Swarm session talks about the ongoing appeal of Swarm and Mirantis' commitment to maintaining and extending the capabilities of Swarm going forward. Docker Enterprise Container Cloud can, of course, deploy Docker Enterprise clusters with 100% Swarm orchestration on compute just as easily. It can provide Kubernetes orchestration, or mixed Swarm and Kubernetes clusters. The problem for Docker devs is that nobody has given them an easy way to use Kubernetes without a learning curve and without getting familiar with new tools and workflows, many of which involve UIs and are somewhat tedious for people who live on the command line and like it that way. Until now. In a few moments you'll meet my colleagues Chris Price and Laura Powell, who enact a little skit to introduce and demonstrate our new extended Docker CLI plugin for Kubernetes. That plugin offers seamless new functionality, enabling easy context management between the Docker command line and Docker Enterprise clusters deployed by Docker Enterprise Container Cloud. We hope it will help devs work faster and help them adopt Kubernetes as they and their organizations manage platform coexistence or transition. Here's Chris and Laura, or, as we like to call them, Developer A and Developer B. >> Have you seen the new release of Docker Enterprise Container Cloud?
I'm already finding it easier to manage my collection of UCP clusters. >> I'm glad it's helping you. It's great that we can manage multiple clusters, but the user interface is a little bit cumbersome. >> Why is that? >> Well, if I want to use the Docker CLI with a cluster, I need to download a client bundle from UCP and use it to create a context. I like that I can see what's going on, but it takes a lot of steps. >> Let me guess. Are these the steps? First you have to navigate to the web UI for Docker Enterprise Container Cloud. You need to enter your username and password. And since the cluster you want to access is part of the demo project, you need to change projects. Then you have to choose a cluster, so you choose the first demo cluster here. Now you need to visit the UCP UI for that cluster; you can use the link in the top right corner of the page. Is that about right? >> Uh, yep. >> And this takes you to the UCP UI login page. Now you can enter your username and password again, but since you've already signed in with Keycloak, you can use that instead. So that's good. Finally, you've made it to the landing page. Now you want to download a client bundle, which you can do by visiting your user profile; you'll generate a new bundle called Demo and download it. Now that you have the bundle on your local machine, you can import it to create a Docker context. First, let's take a look at the contexts already on your machine. I can see you have the default context here. Let's import the bundle and call it demo. If we look at our contexts again, you can see that the demo context has been created. Now you can use the context and you'll be able to interact with your UCP cluster. Let's take a look to see if any stacks are running in the cluster. I can see you have a stack called my-stack in the default namespace running on Kubernetes. We can verify that by checking the UCP UI, and there it is: my-stack in the default namespace running on Kubernetes. Let's try removing the stack, just so we can be sure we're dealing with the right cluster, and it disappears, as you can see. It's easy to use the Docker CLI once you've created a context, but it takes quite a bit of effort to create one in the first place. Imagine... >> Yes, imagine if you had 10 or 20 or 50 clusters to work with. It's a management nightmare. >> Haven't you heard of the Docker Enterprise Container Cloud CLI plugin? >> No. >> I think you're going to like it. Let me show you how it works. It's already integrated with the Docker CLI. You start off by setting it up with your Container Cloud instance; all you need to get started is the base URL of your Container Cloud instance and your username and password. I'll set up mine right now. I have to enter my username and password this one time only, and now I'm all set up. >> But what does it actually do? >> Well, we can list all of our clusters. As you can see, I've got the cluster demo-one in the demo project and the cluster demo-two in the demo project. Taking a look at the web UI, these are the same clusters we're seeing there. >> Let me check. Looks good to me. >> Now we can select one of these clusters, but let's take a look at our contexts before and after, so we can understand how the plugin manages a context for us. As you can see, I just have my default context stored right now, but I can easily get a context for one of our clusters. Let's try demo-two. The plugin says it's created a context called Container Cloud for me, and it's pointing at the demo-two cluster.
Let's see what our contexts look like now, and there's the Container Cloud context, ready to go. >> That's great. But are you saying that once you've run the plugin, the Docker CLI just works with that cluster? >> Sure. Let me show you. I've got a Docker stack right here, and it deploys WordPress. I'll deploy it to Kubernetes for you. Head over to the UCP UI for the cluster so you can verify for yourself. Are you ready? >> Yes. >> First I need to make sure I'm using the context, and then I can deploy. And now we just have to wait for the deployment to complete. It's as easy as ever. >> You weren't lying. Can you deploy the same stack to Swarm on my other clusters? >> Of course. And that should also show you how easy it is to switch between clusters. First, let's just confirm that our stack is reported as running. I've got a stack called wordpress-demo in the default namespace running on Kubernetes. To deploy to the other cluster, I first need to select it. That updates the Container Cloud context, so I don't even need to switch contexts, since I'm already using that one. If I check again for running stacks, you can see that our WordPress stack is gone. Bring up the UCP UI on your other cluster so you can verify the deployment. >> I'm ready. >> I'll start the deployment now. It should be appearing any moment. >> I see the services starting up. That's great. It seems a lot easier than managing contexts manually. But how do I know which cluster I'm currently using? >> Well, you can just list your clusters, like so. Do you see how this one has an asterisk next to its name? That means it's the currently selected cluster. >> I'm sold. Where can I get the plugin? >> Just go to github.com/mirantis/container-cloud-cli and follow the instructions.
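For comparison with the manual context juggling the skit pokes fun at, here is a small, hedged sketch (not part of the plugin; the context name "demo" is invented) showing how a developer might list and switch Kubernetes contexts programmatically with the official Kubernetes Python client:

    from kubernetes import client, config

    # Show every context currently defined in ~/.kube/config.
    contexts, active = config.list_kube_config_contexts()
    print("Available contexts:", [c["name"] for c in contexts])
    print("Currently active:", active["name"])

    # Point API calls at one specific cluster (the context name is hypothetical).
    config.load_kube_config(context="demo")
    pods = client.CoreV1Api().list_namespaced_pod(namespace="default")
    print(f"Pods in 'default' on context 'demo': {len(pods.items)}")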

Published Date : Sep 15 2020


SEAGATE AI


 

>> Seagate Technology is focused on data; we have long believed that data is in our DNA. We help maximize humanity's potential by delivering world-class, precision-engineered data solutions developed through sustainable and profitable partnerships. Included in our offerings are hard disk drives. As I'm sure many of you know, a hard drive consists of a slider, also known as a drive head or transducer, attached to a head gimbal assembly; a head stack assembly made up of multiple head gimbal assemblies; and a drive enclosure with one or more platters that the head stack assembles into. And while the concept hasn't changed, hard drive technology has progressed well beyond the initial five-megabyte, 5.25-inch drives that Seagate first produced in, I think, 1983. We have just announced an 18-terabyte 3.5-inch drive with nine platters on a single head stack assembly, with dual head stack assemblies coming this calendar year. The complexity of these drives furthers the need to incorporate edge analytics at operation sites. W. Edwards Deming established the concept of continual improvement in everything that we do, especially in product development and operations. At the end of World War Two, he embarked on a mission, with support from the US government, to help Japan recover from its wartime losses. He taught the concepts of continual improvement and statistical process control to the leaders of prominent organizations within Japan, and because of this he was honored by the Japanese emperor with the Second Order of the Sacred Treasure for his teachings, the only non-Japanese to receive this honor in hundreds of years. Japan's quality control is now world famous, as many of you may know, and based on my own experience in product development, it is clear that he made a major impact on Japan's recovery after the war. At Seagate, the work that we've been doing in adopting new technologies has continual improvement as its mantra. As part of this effort, we embarked on the adoption of new technologies in our global operations, which includes establishing machine learning and artificial intelligence at the edge, and in doing so we continue to grow our technical capabilities within data science and data engineering. >> I'm a principal engineer and member of the Operations and Technology Advanced Analytics Group. We are a service organization for those organizations who need to make sense of the data that they have and, in doing so, perhaps introduce a different way to create and analyze new data. Making sense of the data that organizations have is a key aspect of the work that data scientists and engineers do. I'm the project manager for an initiative adopting artificial intelligence methodologies for Seagate manufacturing, which is the reason why I'm talking to you today. I thought I'd start by first talking about what we do at Seagate and follow that with a brief on artificial intelligence and its role in manufacturing. I'd then like to discuss how AI and machine learning are being used at Seagate in developing edge analytics, where Docker Enterprise and Kubernetes automate deployment, scaling, and management of containerized applications. Finally, I'd like to discuss where we are headed with this initiative and where Mirantis has a major role. In case some of you are not conversant in machine learning and artificial intelligence and the difference between them, here are some definitions.
To cite one source, machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead; it is thus seen as a subset of narrow artificial intelligence, where analytics and decision making take place. The intent of machine learning is to use basic algorithms to perform different functions, such as classifying images by type, classifying emails into spam and not spam, and predicting weather. The idea, and this is where the concept of narrow artificial intelligence comes in, is to make decisions of a preset type; basically, let a machine learn from itself. The types of machine learning include supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the system learns from previous examples that are provided, such as images of dogs that are labeled by type. In unsupervised learning, the algorithms are left to themselves to find answers; for example, a series of images of dogs can be used to group them into categories by association: color, length of coat, length of snout, and so on. In the last slide I mentioned narrow AI a few times, and to explain it, it is common to describe AI in terms of two categories: general, and narrow or weak. Many of us were first exposed to general AI in popular science fiction movies like 2001: A Space Odyssey and Terminator. General AI is AI that can successfully perform any intellectual task that a human can, and if you ask Elon Musk or Stephen Hawking, this is how they view the future with general AI if we're not careful about how it is implemented. Most of us hope it is more friendly and helpful, like WALL-E. The reality is that machines today are only capable of weak or narrow AI, AI that is focused on a narrow, specific task like understanding speech or finding objects in images. Alexa and Google Home are becoming very popular, and they can be found in many homes. Their narrow task is to recognize human speech and answer limited questions or perform simple tasks like raising the temperature in your home or ordering a pizza, as long as you have already defined the order. Narrow AI is also very useful for recognizing objects in images and even counting people as they go in and out of stores, as you can see in this example. So artificial intelligence supplies machine learning, analytics, inference, and other techniques which can be used to solve actual problems. The two examples here, particle detection and image anomaly detection, have the potential to bring edge analytics into the manufacturing process. A common problem in clean rooms is spikes in particle count from particle detectors. With this application, we can provide context to particle events by monitoring the area around the machine and detecting when foreign objects, like gloves, enter areas where they should not. Image anomaly detection historically has been accomplished at Seagate by operators in clean rooms viewing each image, one at a time, for anomalies. Models of various anomalies created through machine learning methodologies can be used to run comparative analyses in a production environment, where outliers can be detected through inference in an automated, real-time analytics scenario. Anomaly detection is also frequently used in machine learning to find patterns or unusual events in our data. How do you know what you don't know?
It's really what you ask, and the first step in anomaly detection is to use an algorithm to find patterns or relationships in your data. In this case, we're looking at hundreds of variables and finding relationships between them. We can then look at a subset of variables and determine how they are behaving in relation to each other. We use this baseline to define normal behavior and generate a model of it. In this case, we're building a model with three variables. We can then run this model against new data. Observations that do not fit the model are defined as anomalies, and anomalies can be good or bad. It takes a subject matter expert to determine how to classify the anomalies; the classification could be "scrap" or "okay to use," for example. The subject matter expert is assisting the machine in learning the rules. We then update the model with the classified anomalies and start running again, and there are a few tools that generate these models (a short, hedged sketch of this baseline-and-outlier loop follows this passage). Now, Seagate factories generate hundreds of thousands of images every day. Many of these require a human to look at them and make a decision. This is dull and mistake-prone work that is ideal for artificial intelligence. The initiative that I am project managing is intended to offer a solution that matches the continually increasing complexity of the products we manufacture and that minimizes the need for manual inspection. The EdgeRX smart manufacturing reference architecture is the initiative both Hamid and I are working on, and I'm sorry to say that Hamid isn't here today. But as you may have guessed, our goal is to introduce early defect detection at every stage of our manufacturing process through machine learning and real-time analytics through inference. In doing so, we will improve overall product quality, enjoy higher yields with fewer defects, and produce higher margins. Because this was entirely new, we established partnerships with HPE, with NVIDIA, and with Docker and Mirantis two years ago to develop the capability that we now have as we deploy EdgeRX to our operation sites on four continents. From a hardware standpoint, HPE and NVIDIA have been able partners in helping us develop an architecture that we have standardized on, and on the software stack side, Docker has been instrumental in helping us manage a very complex project with a steep learning curve for all concerned. To further clarify our efforts to enable more AI and ML in factories, the objective was to determine an economical edge compute platform that would access the latest AI and ML technology using a standardized platform across all factories. This objective included providing an upgrade path that scales while minimizing disruption to existing factory systems and the burden on factory information systems resources. The two parts of the compute solution are shown in the diagram: the gateway device connects to Seagate's existing factory information systems architecture and does inference calculations, and the second part is a training device for creating and updating models. All factories will need the gateway device and the compute cluster on site, and to this day it remains to be seen if the training device is needed in other locations. But we do know that one device is capable of supporting multiple factories simultaneously, and there are also options for training on cloud-based resources. The on-site appliance consists of a Kubernetes cluster with GPU and CPU worker nodes, as well as master nodes and Docker Trusted Registries.
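The baseline-then-flag-outliers loop described at the start of this passage can be sketched in a few lines of scikit-learn. This is a minimal, hedged illustration rather than Seagate's production model; the three variables and their values are invented for the example:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Baseline window: readings of three process variables under normal behavior
    # (values are made up for illustration).
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=[1.0, 50.0, 0.2], scale=[0.05, 2.0, 0.01], size=(5000, 3))

    # Fit a model of "normal" on the baseline window.
    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # New observations from the line; the last one drifts well outside the baseline.
    new_obs = np.array([
        [1.02, 49.5, 0.21],
        [0.98, 51.0, 0.19],
        [1.60, 70.0, 0.05],
    ])
    flags = model.predict(new_obs)      # +1 = fits the model, -1 = anomaly
    anomalies = new_obs[flags == -1]    # handed to a subject matter expert to classify
    print(anomalies)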
The GPU nodes are hardware-based, using HPE EL4000 Edgeline systems; the balance are virtual machines. For machine learning, we've standardized on both the HPE Apollo 6500 and the NVIDIA DGX-1, each with eight NVIDIA V100 GPUs. Incidentally, the same technology enables augmented and virtual reality. Hardware is only one part of the equation. Our software stack consists of Docker Enterprise and Kubernetes. As I mentioned previously, we've deployed these clusters at all of our operations sites, with specific use cases planned for each site. Mirantis has had a major impact on our ability to develop this capability by offering a stable platform in Universal Control Plane, which provides us with the necessary metrics to determine the health of the Kubernetes cluster, and through the use of Docker Trusted Registry to maintain a secure repository for containers. They have been an exceptional partner in our efforts to deploy clusters at multiple sites. At this point in our deployment efforts we are on-prem, but we are exploring cloud service options, including Mirantis' next-generation Docker Enterprise offering that includes StackLight in conjunction with multi-cluster management. To me, the concept of federation, of multi-cluster management, is a requirement in our case because of the global nature of our business, where our operation sites are on four continents; StackLight provides the hook into each cluster that makes multi-cluster management an effective solution. Open source has been a major part of Project Athena, and there was a debate about using Docker CE versus Docker Enterprise. That decision was actually easy, given the advantages that Docker Enterprise offers, especially during an early phase of development. Kubernetes was a natural addition to the software stack and has been widely accepted. We have also been at work adopting open source such as RabbitMQ for messaging, TensorFlow and TensorRT, to name three, plus GitLab for development and a number of others, as you see here as well, and most of our programming has been in Python. The results of our efforts so far have been excellent. We are seeing a six-month return on investment from just one of seven clusters, where the hardware and software cost approached close to $1 million. The performance on this cluster is now over three million images processed per day, and further adoption has been growing. But the biggest challenge we've seen has been handling a steep learning curve: installing and maintaining complex Kubernetes clusters in data centers that are not used to managing the unique aspects of clusters like this. Because of this, we have been considering adopting a control plane in the cloud, with Kubernetes as a service supported by Mirantis. Even without considering Kubernetes as a service, the concept of federation, or multi-cluster management, has to be on our roadmap, especially considering the global nature of our company. Thank you.
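As a hedged sketch of how the pieces named above (RabbitMQ messaging, TensorFlow inference, Python) might fit together on a gateway node, the following consumes an image path from a queue and scores it with a saved model. Every name here (broker host, queue, model file) is hypothetical, and this is not Seagate's actual EdgeRX code:

    import pika
    import tensorflow as tf

    # Hypothetical model file; the real model, broker, and queue names would differ.
    model = tf.keras.models.load_model("defect_classifier.h5")

    def on_image(ch, method, properties, body):
        # The message body is assumed to be the path of a JPEG captured on the line.
        raw = tf.io.read_file(body.decode("utf-8"))
        img = tf.image.resize(tf.io.decode_jpeg(raw, channels=3), (224, 224))
        score = float(model.predict(tf.expand_dims(img / 255.0, axis=0))[0][0])
        print(f"{body.decode()} -> defect probability {score:.3f}")

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker.local"))
    channel = connection.channel()
    channel.queue_declare(queue="line-images")
    channel.basic_consume(queue="line-images", on_message_callback=on_image, auto_ack=True)
    channel.start_consuming()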

Published Date : Sep 15 2020


Adrian and Adam Keynote


 

>> Welcome everyone. Good morning and good evening to all of you around the world. I am so excited to welcome you to Launchpad, our annual conference for customers, for partners, and for our own colleagues here at Mirantis. This is meant to be a forum for learning, for sharing, for discovery; one of openness. We're incredibly excited to have you here with us. I want to take a few minutes this morning to open the conference and share with you, first and foremost, where we're going as a company: what is our vision? Then I also want to share with you an update on what we have been up to for the past year, especially with two important acquisitions, Docker Enterprise and then Kontena and Lens, and what some of the latest developments at Mirantis are. And then I'll close with an exciting announcement that we have today, which we hope is going to be interesting and valuable for all of you. But let me start with our mission. What are we here to do? It's very simple. We want to help you ship code faster. This is something that we're very excited about, something that we have achieved for many of you around the world, and we just want to double down on it. We feel this is a mission that's very much worthwhile, relevant, and important to you. Now, how do we do that? How do we help you ship code faster? There are three things we believe in. We believe that in this world of cloud, choice is incredibly important. We all know that developers want to use the latest tools. We all know that cloud technology is evolving very quickly and new innovations appear very, very quickly, and we want to make them available to you. So choice is very important. At the same time, consuming choice can be difficult. So our mission is to make choice simple for you, to give developers and operators simplicity. And then, finally, underpinning everything that we do is security. These are the three big things that we invest in and that we believe in: choice, simplicity, and security. And the foundation technology that we're betting on to make that happen for you is Kubernetes. Many of you, many of our customers, use Kubernetes from Mirantis today, and they use it at scale. This is something we want to double down on. The fundamental benefit, our key promise we want to deliver for you, is speed, and we feel this is very relevant, important, and valuable in the world that we are in today. You might also be interested in what our priorities have been since we acquired Docker Enterprise. What has happened over the past year at Mirantis? There are three very important things we focused on as a company. The first one is customer success. When we acquired Docker Enterprise, the first thing we did was listen to you, connect with the most important customers, and find out what your sentiment was. What did you like? What were you concerned about? What needed to improve? How can we create more value and a better experience for you? Customer success has been at the top of our list of priorities ever since. And here is what we've heard, here is what you've told us. You've told us that you very much appreciated the technology, that you got a lot of value out of the technology, but that at the same time there are some things that we can do better. Specifically, you wanted better SLAs and a better support experience. You also wanted more clarity on the roadmap.
You also wanted to have a deeper alignment and a deeper relationship between your needs and requirements and our technical development, the key people in our development organization, our most important engineers. So those three things were very, very important to you, and they were very important to us here. We've taken that to heart, and over the past 12 months we believe that, as a team, we have dramatically improved the customer support experience. We introduced new SLAs with ProdCare. We've rolled out a roadmap to many, many of our customers. We've taken your requirements into consideration, and we've built better and deeper relationships with so many of you. And the evidence that we've actually made some progress is a significant increase in the workloads and in usage of our platforms. We were so fortunate that we were able to build better and stronger relationships and take you to the next level of growth, for companies like Visa, like Société Générale, like Nationwide, like Bosch, like AXA XL, like GlaxoSmithKline, like Standard & Poor's, like Apple and AT&T. So many, many of you, many of our customers around the world, have, I believe, over the past 12 months experienced better support, stronger SLAs, a deeper relationship, and a lot more clarity on our roadmap and our vision forward. The second very big priority for us over the last year has been product innovation. This is something that we are very excited about, that we've invested most of our resources in, and we've delivered some strong proof points. Docker Enterprise 3.1 has been the first release that we shipped as Mirantis, as the unified company. It had some big innovative features, like Windows support and AI and machine learning use cases, and a significant number of improvements in stability and scalability, earlier this year. We were also very excited to acquire Lens and the Kontena team; Lens is by far the most popular Kubernetes IDE in the world today, and every day 600 new users are starting to use Lens to manage their Kubernetes clusters, to deploy applications on top of Kubernetes, and to dramatically simplify the Kubernetes experience for operators and developers alike. That is a very big step forward for us as a company. And then finally, this week at this conference, we are announcing our latest product, which we believe is a huge step forward for Docker Enterprise and which we call Docker Enterprise Container Cloud, and you will hear a lot more about that during this conference. The third vector of development, the third priority for us as a company over the past year, was to become more and more developer-centric. As we've seen over the past 10 years, developers really move the world forward. They create innovation, they create new software. And while our platform is often managed, run, and maybe even purchased by IT architects, operators, and IT departments, the actual end users are developers. And we made it our mission as a company to become closer and closer to developers, to better understand their needs, and to make our technology as easy and fast to consume as possible for developers. So as a company, we're becoming more and more developer-centric.
The two core products which fit together extremely well to make that happen are Lens, which is targeted squarely at a new breed of Kubernetes developers sitting at the desktop and managing Kubernetes environments and the applications on top of them on any cloud platform, anywhere, and then Docker Enterprise Container Cloud, which is a new and radically innovative container platform that we're bringing to market this week. So with this as background, what is the fundamental problem which we solve for you, for our customers? What is it that we feel are your pain points that we can help you resolve? We see two very, very big trends in the world today which you are experiencing. On one side, we see the power of cloud emerging, with more features, more innovation, more capabilities coming to market every day. But with those new features and new innovations there is also an exponential growth in cloud complexity, and that cloud complexity is becoming increasingly difficult to navigate for developers and operators alike. And at the same time, we see the pace of change continuing to accelerate, both in the economy and in the technology as well. So when you put these two things together, on one hand you have more and more complexity; on the other hand, you have faster and faster change. This makes for a very, very daunting task for enterprises, developers, and operators to actually keep up and move with speed. And this is exactly the central problem that we want to solve for you. We want to empower you to move with speed in the middle of rising complexity and change, and to do it successfully and with confidence. So with that in mind, we are announcing this week at Launchpad a big and new concept to take the company forward, and take you with us, to create value for you. We call this Your Cloud Everywhere, which empowers you to ship code faster. Docker Enterprise Container Cloud is a linchpin of Your Cloud Everywhere. It's a radical and new container platform which gives you, our customers, a consistent experience on public clouds and private clouds alike, which enables you to ship code faster on any infrastructure, anywhere, with a cohesive cloud fabric that meets your security standards, that offers a choice of private and public clouds, and that offers a simple, extremely easy, and powerful-to-use experience for developers. All of this is underpinned by Kubernetes as the foundation technology we're betting on going forward to help you achieve your goals. At the same time, Lens, the Kubernetes IDE, also fits very, very well into the Your Cloud Everywhere concept, and it's a second very strong linchpin to take us forward, because it creates the developer experience. It supports developers directly on their desktop, enabling them to manage Kubernetes workloads and to test, develop, and run Kubernetes applications on any infrastructure, anywhere. So Docker Enterprise Container Cloud and Lens complement each other perfectly. I'm very, very excited to share this with you today and to open the conference for you. And with this, I want to turn it over to my colleague Adam Parco, who runs product development at Mirantis, to share a lot more detail about Docker Enterprise Container Cloud: why we're excited about it, why we feel it is a radical step forward for you, and why we feel it can add so much value to your developers and operators who want to embrace the latest Kubernetes technology and the latest container technology on any platform, anywhere.
I look forward to connecting with you during the conference, and I wish you all the best. Bye bye. >> Thanks, Adrian. My name is Adam Parco, and I am Vice President of Engineering and Product Development at Mirantis. I'm extremely excited to be here today and to present to you Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is a major leap forward. It turbocharges our platform. It is your cloud everywhere. It has been completely designed and built around helping you to ship code faster. The world is moving incredibly quickly; we have seen unpredictable and rapid changes. It is the goal of Docker Enterprise Container Cloud to help navigate this insanity by focusing on speed and efficiency. Doing this requires three major pillars: choice, simplicity, and security. The less time between a line of code being written and that line of code running in production, the better. When you decrease that cycle time, developers are more productive, efficient, and happy. The code is higher quality and contains fewer defects, and when bugs are found, they are fixed quicker and more easily. In turn, your customers get more value sooner and more often. Increasing speed and improving developer efficiency is paramount. To do this, you need to be able to cycle through coding, running, testing, releasing, and monitoring, all without friction. We enable this by offering containers as a service through a consistent, cloud-like experience. Developers can log into Docker Enterprise Container Cloud and, through self-service, create a cluster. No IT tickets. No industry-specific experience required. Need a place to run a workload? Simply create it; nothing is quicker than that. The clusters are presented consistently no matter where they're created. Integrate your pipelines and start deploying secure images everywhere, instantly. You can't have cloud speed if you start to get bogged down by managing, so we offer fully automated lifecycle management. Let's jump into the details of how we achieve cloud speed. The first is cloud choice. Developers, operators, admins, users: they all want, and in fact mandate, choice. Choice is extremely important to efficiency, speed, and ultimately the value created. You have cloud choice throughout the full stack. Choice allows developers and operators to use the tooling and services they are most familiar and most efficient with, or perhaps simply allows them to integrate with any existing tools and services already in use, letting them integrate and move on. Docker Enterprise Container Cloud isn't constrictive; it's open and flexible. The next important choice we offer is in orchestration. We hear time and time again from our customers that they love Swarm, that it's simply enough for the majority of their applications, that it just works, and that they have the skills and knowledge to use it effectively. They don't need to become or find Kubernetes experts to get immediate value, so we will absolutely continue to offer this choice in orchestration. Our existing customers can rest assured their workloads will continue to run great, as always. On the other hand, we can't ignore the popularity, the growth, the enthusiasm, and the community ecosystem that has exploded around Kubernetes. So we will also be including a fully conforming, tested, and certified Kubernetes. Going down the stack, you can't have choice or speed without your choice of operating system. This ties back to developer efficiency.
We want developers to be able to leverage their operating system of choice; we're initially supporting full-stack lifecycle management for Ubuntu, with other operating systems, like Red Hat, to follow shortly. Lastly, all the way down at the bottom of the stack is your choice in infrastructure. Choice in infrastructure is in our DNA. We have always promoted no lock-in and the flexibility to run where needed. Initially we're supporting OpenStack, AWS, and full lifecycle management of bare metal, and we also have a roadmap for VMware and other public cloud providers. We know there's no single solution for the unique and complex requirements our customers have. This is why we're doubling down on being the most open platform. We want you to truly make this your cloud. If done wrong, all this choice at speed could become extremely complex. This is where cloud simplification comes in. We offer a simple and consistent as-a-service cloud experience, from installation to Day Two ops. Clusters are created using a single pane of glass no matter where they're created, giving a simple and consistent interface. Clusters can be created on bare metal, in private data centers, and, of course, on public cloud. Applications will always have specific operating requirements, for example data protection, security, cost efficiency, edge, or leveraging specific services on public infrastructure. Being able to create a cluster on the infrastructure that makes the most sense, while maintaining a consistent experience, is incredibly powerful for developers and operators. This helps developers move quickly by being able to leverage the infrastructure and services of their choice, and helps operators by letting them use the most efficient compute available. Now that we have users self-creating clusters, we need centralized management to support this increase in scale. Docker Enterprise Container Cloud is the single pane of glass for observability and management of all your clusters. We have Day Two ops covered, to keep things simple and to keep you moving fast. From this single pane of glass you can manage the full-stack lifecycle of your clusters from the infra up, including Docker Enterprise, as well as the fully automated deployment and management of all components deployed through it. What I'm most excited about is Docker Enterprise Container Cloud as a service. What do I mean by as a service? Docker Enterprise Container Cloud is fully self-managed and continuously delivered. It is always up to date, always security patched, always available, with new features and capabilities pushed often and directly to you: a truly as-a-service experience anywhere you want it to run. Security is of the utmost importance to Mirantis and our customers. Security can't be an afterthought, and it can't be added later. With Docker Enterprise Container Cloud, we're maintaining our leadership in security. We're doing this by leveraging the proven security in Docker Enterprise. Docker Enterprise has the best and most complete security certifications and compliance, such as DISA STIG and FIPS 140-2. These security certifications allow us to run in the world's most secure locations. We are proud and honored to have some of the most security-conscious customers in the world, from industries like insurance, finance, and health care, as well as public, federal, and government agencies. With Docker Enterprise Container Cloud we put security as our top concern, but importantly, we do it with speed. You can't move fast with security in the way, so to solve this,
we've added what we're calling invisible security: security enabled by default and configured for you as part of the platform. Docker Enterprise Container Cloud is multi-tenant, with granular RBAC throughout. In conjunction with Docker Enterprise, Docker Trusted Registry, and Docker Content Trust, we have a complete, end-to-end secured software supply chain: only the images that have gone through the appropriate channels that you have authorized are run, on the most secure container engine in the industry. Lastly, I want to quickly touch on scale. Today, cluster sprawl is a very real thing. There are test clusters, staging clusters, and, of course, production clusters. There are also different availability zones, different business units, and so on. There are clusters everywhere, and these clusters are also running all over the place. We have customers running Docker Enterprise on premises; they're embracing public cloud, and not just one cloud; they might also have some bare metal. So cloud sprawl is also a very real thing. All these clusters on all these clouds are a maintenance and observability nightmare. This is a huge friction point to scaling. Docker Enterprise Container Cloud solves these issues and lets you scale quicker and more easily. A little recap of what's new: we've added multi-cluster management; deploy and attach all your clusters wherever they are. Multi-cloud, including public, private, and bare metal; deploy your clusters to any infra. Self-service cluster creation; no more IT tickets to get resources; incredible speed. Automated full-stack lifecycle management, including Docker Enterprise Container Cloud itself, as a service, from the infra up. Centralized observability, with a single pane of glass for your clusters, their health, and your apps. And most importantly, for our existing Docker Enterprise customers: you can, of course, add your existing Docker Enterprise clusters to Docker Enterprise Container Cloud and start leveraging the many benefits it offers immediately. So that's it. Thank you so much for attending today's keynote. This was very much just a high-level introduction to our exciting release. There is so much more to learn about and try out. I hope you are as excited as I am to get started today with Docker Enterprise Container Cloud. Please attend the tutorial tracks. Up next is Miska, with the world's most popular Kubernetes IDE, Lens. Thanks again, and I hope you enjoy the rest of our conference.
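To ground the keynote's "integrate your pipelines and start deploying secure images everywhere" idea, here is a minimal, hedged sketch of a pipeline step pushing a workload to a cluster with the Kubernetes Python client once that cluster's kubeconfig is in place. The image name and namespace are invented, and this is not part of Docker Enterprise Container Cloud itself:

    from kubernetes import client, config

    config.load_kube_config()  # assumes the target cluster's context is already set up

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web-demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="registry.example.com/demo/web:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]),
            ),
        ),
    )

    # Create the workload; a real pipeline would also wait for rollout status.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    print("Deployment 'web-demo' created in namespace 'default'")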

Published Date : Sep 15 2020


ON DEMAND R AND D DATA PLATFORM GSK


 

>> Hey, everyone. Thanks for taking the time to join this session. I hope you and your loved ones are safe during these tough times. Let me start by introducing myself. My name is Michelle, and I work for GlaxoSmithKline, GSK, as an engineering manager. In my current role I lead the protocol platform APIs, which are part of the R&D data platform here in GSK R&D Tech. I live in Dallas, Texas. I have a master's degree in computer science and a bachelor's in electronics and communication engineering. I started my career as a software developer, and over the years I have gained a lot of experience in leading and building at-scale and predictive products and solutions. I also have complete accountability for container platforms here at GSK R&D Tech. I've been working very closely with Docker Enterprise, which is now Mirantis, for more than three years to enable container platforms at GSK, mainly in our R&D Tech organization. So that's me. Let me give you a quick overview of the agenda for today's talk. I'll start with what we do here at GSK and what the R&D data platform is. Then I'll give you an overview of the business drivers that motivated us to take on this container journey, and some insight into learnings and accomplishments over these years of working with Docker Enterprise on the container platforms. Lately you must have seen a lot of articles out there which talk about how GSK is leveraging technologies like artificial intelligence, machine learning, and data and analytics for the drug discovery process. I'm very excited to see the progress we have made in technology, but what makes us truly unique is our commitment to the patient. At GSK, we help millions of people do more, feel better, and live longer. We are a global company focused on three verticals: pharmaceuticals, vaccines, and consumer healthcare. Our main intent is to lower the burden and the impact of diseases on patients. Here at GSK, we allow science to drive the technology. This helps us build innovative products that help our scientists make better and faster decisions throughout the drug discovery pipeline. With that, let me give you some context on what the R&D data platform is and how it is enabled at GSK. It started in mid-2016 as what used to be called the R&D Information Platform, whose main focus was to centralize, curate, and rationalize all the data produced within the R&D business systems in order to drive strategic business value. Standardization of clinical trials, Genome-Wide Association Study analysis (also known as GWAS), and storage and processing of real-world evidence data are some examples of how the R&D platform was used to deliver business value. Four years later, a new set of business drivers is changing our landscape. The R&D Information Platform is evolving to be a hybrid, multi-cloud solution and is now known as the R&D data platform. Referring to GSK's 2019 annual report, these are the four themes that the R&D platform will be mainly focused on. We're expanding our data capabilities to support the new GSK biopharma company, and evolving into a hybrid multi-cloud platform is one of the many steps we're taking to be future-ready. Our key focus will still be making recommendations better and faster by using the advances we're making in areas like artificial intelligence and machine learning. Now, that brings us to what this journey is, why it is important, and why we are taking it. With that, let me take you to the next topic.
The process of drug discovery, frankly, is not an easy one. With the events that have occurred over the last few months and the way all of our lives have been impacted, there is a lot of talk and information going around about why the drug discovery process is so tough. Working for a global healthcare company, I get asked this question very frequently by many people I interact with: why is it that this process is so tough, and why does it take so much time? Drug discovery is a complex process that involves multiple different stages, and at each and every stage there are huge amounts of data that the scientists have to process in order to make decisions. Studies have shown that only 3% of small molecules entering human studies actually become medicines. If you're new to drug discovery, you may ask why the success rates are so low. We humans are a very complex species. Without going into the details of the process, we at GSK have made a lot of investments in technology that enable us to make data-driven decisions throughout the drug discovery pipeline. As we started implementing these tools and technologies to enable the R&D data platform, we started to get a better appreciation of how these tools interact and integrate with each other. Our goal was to make this platform an agile platform that can work at scale, so that we can provide a great user experience and contribute back to the drug discovery pipeline, so that the scientists can make faster decisions. We want our R&D users to consume the data and the services available on the platform seamlessly, in a self-service fashion. We also have to accomplish this while establishing trust, and we also have to enable the academic partnerships, acquisitions, and collaborations that GSK has, which bring a lot of data and value to our scientists. When we talk about so many collaborations and all of these systems, what this brings in is a wide range of systems and platforms that are fundamentally built on different infrastructure. This is where Docker comes into the picture, and where containers gain significance. We have realized the power of containers and how we can simplify this complex ecosystem by using containers and provide faster access to data for our scientists, who then contribute back to the drug discovery pipeline. With that, let me talk to you about the container journey at GSK. We started our container journey in late 2017, working with Docker Enterprise to enable the container platform on our on-prem infrastructure. Back then, for the first year or so, we worked through multiple pilots and did a lot of testing to make sure our platform was stable before we onboarded either the data or the user applications. I was part of this complete journey, and the Docker team worked with us very closely toward the first milestone of establishing a stable container platform at GSK. Getting into 2019, we started deploying our applications in the production environment. I cannot go into the details of what these apps are, but they do include both data pipelines and web services. In the initial days we worked a lot on Swarm, but 2019 is when we started looking into Kubernetes; in the same year we enabled Kubernetes orchestration on the Docker Enterprise platform here at GSK and also made it the de facto orchestrator. Coming into 2020, all our microservice applications and
data pipelines have been migrated to the container platforms, and all of these are orchestrated by Kubernetes; these are applications running in production. As of today, we have made the container-first approach an architectural standard across R&D Tech at GSK. We have also started deploying our AI/ML training models onto containers, and all this work is happening on our Docker Enterprise platform. Also, as part of our R&D data platform's hybrid multi-cloud journey, we have started enabling container and Kubernetes based platforms on public clouds. Now, going into 2021 and the future: enabling our R&D users to easily access data and applications in a platform-agnostic way is very crucial for our success, because previously we had only on-prem, and now we have public clouds getting involved as well. One of the many steps we're taking through this journey is to virtualize the data and ship data in containers or Kubernetes volumes, on demand, to our end users, the scientists. This allows us to deliver data to our scientists wherever they want it, in a very secure way, and we're leveraging Docker to do it. So that's where our future is heading, and with that, let's take a deeper look at a few of our accomplishments over these years. I want to start with a genuinely innovative and very interesting use case that we developed on Docker. This is a rapid prototyping capability that enabled our scientists to handle cross-cluster communication seamlessly. This was one of the biggest challenges we had faced for a long time, and with the help of containers we were able to solve it and provide it as a capability to our scientists; we actually showcased this capability at one of the Docker conferences before. Next, as I've said before, by migrating all of our web services into containers we not only achieved horizontal scalability for those specific services but also saved more than 50% in support costs for the applications we migrated. By making the Docker image an immutable artifact in our build process, we are now able to deploy our apps and models on any container or Kubernetes based platform, either on-prem or in a public cloud. We also made significant improvements toward process automation by leveraging Docker containers. Containers have played a significant role in keeping us platform-agnostic and thus enabling our hybrid multi-cloud journey, which is valuable for our R&D data scientists. As I mentioned before, data virtualization is another viewpoint we have in terms of our next steps, for where we want to take Kubernetes and where we want to leverage open source. What you see here are just a few of the many accomplishments we have achieved by using containers over the past three years or so. With that, before I close, I want to take the time to acknowledge all our internal partners who have contributed a lot to this journey, mainly the R&D business, R&D Tech, and the broader tech organizations at GSK. I also want to thank Docker, present-day Mirantis, for being such a great partner throughout this journey and for giving us the opportunity to share this success story today. Lastly, thanks to everyone for listening to this talk, and please feel free to reach out if you have any questions or suggestions. Let's all stay safe. Thank you.
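One way to picture the "Docker image as an immutable artifact in our build process" point is a build step that tags each image with the exact source revision and pushes it to a trusted registry. The sketch below uses the Docker SDK for Python and is purely illustrative; the registry, repository, and revision value are assumptions, not GSK's actual pipeline:

    import docker

    REGISTRY = "dtr.example.com"           # hypothetical Docker Trusted Registry address
    REPO = f"{REGISTRY}/rnd/data-service"  # hypothetical repository
    GIT_SHA = "1a2b3c4"                    # in a real pipeline this comes from the commit being built

    client = docker.from_env()

    # Build once, and tag the image with the immutable source revision.
    image, _ = client.images.build(path=".", tag=f"{REPO}:{GIT_SHA}")

    # Push the artifact; any cluster (on-prem or public cloud) can now run this exact image.
    for line in client.images.push(REPO, tag=GIT_SHA, stream=True, decode=True):
        status = line.get("status")
        if status:
            print(status)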

Published Date : Sep 14 2020


ON DEMAND MIRANTIS OPENSTACK ON K8S FINAL


 

>> Hi, I'm Adrienne Davis, Customer Success Manager on the CFO side of the house at Mirantis. With me today is Artem Andreev, Product Manager and expert, who's going to enlighten us today. >> Hello everyone. It's great to have all of you listening to our discussion today. My name is Artem Andreev. I'm a Product Manager for the Mirantis OpenStack line of products. That includes the current product line that we have and the next-generation product line that we're about to launch quite soon, and actually this is going to be the topic of our presentation today. The new product that we are very, very excited about, and that is going to be launched in a matter of several weeks, is called Mirantis OpenStack on Kubernetes. For those of you who have been with Mirantis quite a while already, Mirantis OpenStack on Kubernetes is essentially a reincarnation of our Mirantis Cloud Platform version one, as we call it these days. It has been reincarnated into something more advanced, more robust, and altogether modern, that provides the same, if not more, value to our customers, but packaged in a different shape. We're very excited about this new launch, and we would like to share this excitement with you, of course. As you might know, a few months ago Mirantis acquired Docker Enterprise, together with the advanced Kubernetes technology that Docker Enterprise provides. We made this technology part and parcel of our product suite, and this naturally includes Mirantis OpenStack on Kubernetes as well, since it is a part of our product suite. The Kubernetes technology in question we call Docker Enterprise Container Cloud these days; I'm going to refer to this name a lot over the course of the presentation. I would like to split today's discussion into several major parts. For those of you who do not know what OpenStack is in general, a quick recap might be helpful to understand the value that it provides. I will discuss why someone still needs OpenStack in 2020. We will talk about what a modern OpenStack distribution is supposed to do and the expectations that are there. And of course, we will go into a bit of detail on how exactly Mirantis OpenStack on Kubernetes works and how it helps to deploy and manage OpenStack clouds. >> So set the stage for me here. What's the base environment we're trying to get to? >> So what is OpenStack? One can think of OpenStack as a free and open source alternative to VMware, and it's a fair comparison. OpenStack, just like VMware, operates primarily on virtual machines. It gives you, as a user, a clean and crisp interface to launch a VM, to configure the virtual networking to plug this VM into, to configure and provision virtual storage to attach to your VM, and to do a lot of other things that a modern application requires to run. The idea behind OpenStack is that you have a clean and crisp API exposed to you as a user, and all the little details and nuances of physical infrastructure configuration and provisioning that need to happen for the virtual application to work are hidden, spread across the multiple components that comprise OpenStack. So, compared again to VMware, the functionality is pretty much similar, but OpenStack can actually do much more than just VMs, and frankly speaking, it does that at a much lower price if we do the comparison. So what does OpenStack have to offer?
Naturally, there are the virtualization, networking, and storage systems; that's just the basic, entry-level functionality. But what comes with it are identity and access management features, a graphical user interface together with CLI and command-line tools to manage the cloud, orchestration functionality to deploy your application in the form of templates, the ability to manage bare metal machines, and of course some nice and fancy extras like DNS-as-a-Service, metering, secret management, and load balancing. And frankly speaking, OpenStack can actually do even more, depending on the needs that you have. >> We hear so much about containers today. Do applications even need VMs anymore? Can't Kubernetes provide all these services? And even if IaaS is still needed, why would one bother with building their own private platform, if there's a wide choice of public solutions for virtualization, like Amazon Web Services, Microsoft Azure, and Google Cloud Platform? >> Well, that's a very fair question, and you're absolutely correct. The whole trend (audio blurs) as you state: everybody's talking about containers, everybody's doing containers. But to be realistic, yes, the market still needs VMs. There are certain use cases in the modern world, and some of these use cases are quite new, like 5G, where you require high performance in the networking, for example. You might need high-performance computing as well. When that takes quite special hardware and configuration to be provided within your infrastructure, it is much more easily solved with VMs, not containers. Not to mention that there are still legacy applications you need to deal with; they have just switched from server-based provisioning to VM-based provisioning, and they need to run somewhere. They're simply not ready for containers. And if we think, okay, VMs are still needed, but why don't I just go to a public infrastructure-as-a-service provider and run my workloads there? You can do that, but you have to be prepared to pay a lot of money once you start running your workloads at scale; public IaaSes tend to hit your pockets heavily. And of course, if you're working in a highly regulated area, like enterprises covering (audio blurs) et cetera, you have to comply with a lot of security regulations and data placement regulations, and public IaaSes, let's be frank, are not good at providing you with this transparency. You need to have full control over your whole stack, starting from the hardware to the very top. This is why private infrastructure as a service is still a theme these days, and I believe it's going to be a theme for at least five years more, if not more. >> So if private IaaSes are useful and in demand, why doesn't Mirantis just stick to the OpenStack that we already have? Why did we decide to build a new product, rather than keep selling the current one? >> Well, to answer this question, first we need to see what our customers believe a modern infrastructure-as-a-service platform should be able to provide, and we've compiled this into a list of five criteria.
Naturally, a private IaaS needs to be reliable and robust, meaning that whatever happens underneath the API should not impact the business's workloads, this is a must, or should impact them as little as possible. The platform needs to be secure and transparent, going back to the idea of working in highly regulated areas; this again is table stakes to enter the enterprise market. The platform needs to be simple to deploy (audio blurs), because as an operator you should not be thinking about the internals, but focusing on enabling your users with the best possible experience. Updates are very important. The platform needs to keep up with the latest software patches, bug fixes, and of course features, and upgrading to a new version must not take weeks or months, and should have as little impact on the running workloads as possible. And of course, to be able to run modern applications, the platform needs to provide a comparable set of services to a public cloud, so that you can move your application across environments, private or public, without having to change it significantly; the so-called feature parity needs to be there. Now, if we look at the architecture of OpenStack, we know OpenStack is powerful, it can do a lot, we've just discussed that, right? But the architecture of OpenStack is known to be complex. And tell me, how would you enable robustness and reliability in such a complex system? It's not easy, right? This diagram shows probably just a third of a modern, up-to-date OpenStack cloud; it's just a little illustration, not the whole picture. So imagine how hard it is to make a very solid platform out of this architecture. Naturally, this also imposes challenges for providing transparency and security, because the more complex the system is, the harder it is to manage and the harder it is to see what's on the inside. And upgrades, yes. One of the biggest challenges we learned from our previous history is that many of our customers preferred to stay on an older version of OpenStack just because they were afraid of upgrades; they saw upgrades as time-consuming and risky endeavors. Instead of switching to the latest and greatest software, they preferred reliability by sticking to the old stuff. Why? Because an upgrade potentially implied a certain impact on their workloads, and it required thorough planning and execution to be as riskless as possible. We are solving all of these challenges of managing a system as complex as OpenStack with Kubernetes. >> So how does Kubernetes solve these problems? >> Well, we look at OpenStack as a typical microservice-architecture application that is organized into multiple little moving parts, daemons that are connected to each other and talk to each other through standard APIs. Altogether, that is a very good fit to run on top of a Kubernetes cluster, because many modern applications follow exactly the same pattern. >> How exactly did you put OpenStack on Kubernetes? >> Well, that's not easy, I'm going to be frank with you. If you look at the architectural diagram, this is the stack of Mirantis products represented with a focus, of course, on Mirantis OpenStack as the central part. What you see in the middle, shown in pink, is Mirantis OpenStack on Kubernetes itself.
And of course, around that are the supporting components that need to be there to run OpenStack on Kubernetes successfully. On the very bottom there is hardware: networking, storage, and compute hardware that somebody needs to configure, provision, and manage to be able to deploy the operating system on top of it. This is just another layer of complexity that abstracts Mirantis OpenStack on Kubernetes from the underlay. Once we have the operating system there, there needs to be a Kubernetes cluster deployed and managed, and as I mentioned previously, we are using the capabilities that this Kubernetes cluster provides to run the OpenStack control plane itself, because everything in Mirantis OpenStack on Kubernetes is a container. Naturally, it doesn't sound like an easy task to manage this multi-layered pie, and this is where Docker Enterprise Container Cloud comes into play, because it is our single pane of glass into day-one and day-two operations for the hardware itself, for the operating system, and for Docker Enterprise Kubernetes. So it solves the need to have this underlay ready and prepared. Once the underlay is there, you go ahead and deploy Mirantis OpenStack on Kubernetes just as another Kubernetes application, following the same practices and tools you use with any other application. Naturally, once you have OpenStack up and running, you can use it to give your users the ability to create their own private little Kubernetes clusters inside OpenStack projects, and this is one of the major use cases for OpenStack these days: being an underlay for containers. So what does the operator experience look like for a human operator who is responsible for the deployment and management of the cloud with Mirantis OpenStack on Kubernetes? First, you deploy Docker Enterprise Container Cloud, and you use the built-in capabilities it provides to provision your physical infrastructure: you discover the hardware nodes, you deploy the operating system there, you configure the network interfaces and storage devices, and then you deploy a Kubernetes cluster on top of that. This Kubernetes cluster is going to be dedicated to Mirantis OpenStack on Kubernetes itself, so it's a special-purpose rather than a general-purpose cluster, dedicated to OpenStack. That means that inside this cluster there are a bunch of lifecycle management modules running as Kubernetes operators. OpenStack itself has its own LCM module, or operator. There is a dedicated operator for Ceph, because Ceph is the major storage solution that we integrate with these days. Naturally, there is a dedicated lifecycle management module for StackLight; StackLight is our logging, monitoring, and alerting solution for OpenStack on Kubernetes that we bundle together with the whole product suite. You drive these Kubernetes operators either directly through the kubectl command or through the interface provided by Docker Enterprise Container Cloud to deploy the OpenStack, Ceph, and StackLight clusters one by one and connect them together. So instead of dealing with hundreds of YAML files, it's five definitions, five specifications, that you're supposed to provide these days, and that's it. And all the day-two management is performed through these same APIs, just as easily as the deployment.
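To make the "five specifications instead of hundreds of YAML files" point concrete, here is a hedged sketch of applying a single declarative custom resource through the Kubernetes API with the official Python client. The CRD group, version, plural, and spec fields below are illustrative assumptions, not the actual Mirantis schema.

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    # One declarative document describing the desired OpenStack cloud;
    # the lifecycle-management operator reconciles the rest.
    openstack_deployment = {
        "apiVersion": "example.mirantis.com/v1alpha1",  # assumed group/version
        "kind": "OpenStackDeployment",
        "metadata": {"name": "demo-cloud", "namespace": "openstack"},
        "spec": {  # illustrative fields only
            "openstack_version": "ussuri",
            "size": "small",
            "features": {"neutron": {"tunnel_interface": "ens3"}},
        },
    }

    api.create_namespaced_custom_object(
        group="example.mirantis.com",
        version="v1alpha1",
        namespace="openstack",
        plural="openstackdeployments",
        body=openstack_deployment,
    )

The same API call pattern, replacing or patching the custom object, then covers day-two changes as well, which is why deployment and ongoing management look alike.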
>> All of this assumes that OpenStack is in containers. Now, Mirantis was containerizing back long before Kubernetes even came along. Why did we think this would be important? >> That is true. We've been containerizing OpenStack for quite a while already; it's not a new thing at all. However, it is the way that we deploy OpenStack as a Kubernetes application that matters, because Kubernetes solves a whole bunch of challenges that we used to deal with in MCP1, when deploying OpenStack on top of bare operating systems as packages. Naturally, Kubernetes allows us to achieve reliability through the self-healing (audio blurs) and auto-scaling mechanisms. You define a bunch of policies that describe the behavior of the OpenStack control plane, and Kubernetes follows these policies when things happen, without any need for human interaction. Isolation of the dependencies of OpenStack services within Docker images is a good thing, because previously we had to deal with packages and conflicts between the versions of different libraries; now we just ship everything together as a Docker image. Rolling updates are an advanced feature that Kubernetes provides natively, so updating OpenStack has never been as easy as with Kubernetes. Kubernetes also provides some fancy building blocks for networking, like load balancing, and of course tunnels and service meshes. They're also quite helpful when dealing with a complex application like OpenStack, where things need to talk to each other without any problem in the configuration. Helm also plays a great role here; it is effectively our tool for Kubernetes. We're using the Helm bundles that are provided for OpenStack upstream as our low-level layer of logic to deploy the OpenStack services and connect them to each other. And naturally, automatic scale-up of the control plane: adding a node is easy, you just add a new Kubernetes worker with a bunch of labels, and it handles the distribution of the necessary services automatically. Naturally, there are certain drawbacks; these fancy features come at a cost. Human operators need to understand Kubernetes and how it works. But this is also a good thing, because everything is moving towards Kubernetes these days, so you would have to learn it at some point anyway; you can use this as a chance to bring yourself to the next level of knowledge. OpenStack is not a 100% cloud-native application by itself. Unfortunately, there are certain components that are stateful, like databases, Nova compute services, or Open vSwitch daemons, and these have to be dealt with very carefully when doing upgrades, updates, and the whole deployment. So there is extra lifecycle management logic built in that handles these components carefully for you; a bit of complexity we had to add. And naturally, Kubernetes requires resources for itself to run, so you need to have these resources available and dedicated to the Kubernetes control plane to be able to control your application, that is, all the OpenStack components. So a bit of investment is required. >> Can anybody just containerize OpenStack services and get these benefits? >> Well, yes, the idea is not new; there are a bunch of upstream open source, sorry, community projects doing pretty much the same thing. So we are not inventing a rocket here, let's be fair.
However, it is the way that Kubernetes cooks OpenStack that gives you the robustness and reliability that enterprise and other large customers actually need. And we're doing a great deal of work automating all the possible day-two workflows and all the caveats and complexities of OpenStack management inside our products. Okay, at this point I believe we should wrap this discussion up, so let me conclude for you. OpenStack is an open source infrastructure-as-a-service platform that still has its niche in the 2020s, and it's going to have that niche for at least five years. OpenStack is a powerful but very complex tool, and the complexities of OpenStack and OpenStack lifecycle management are successfully solved by Mirantis through the capabilities of a Kubernetes distribution that provides us with all the necessary primitives to run OpenStack just as another containerized application these days.
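As a closing illustration of the Kubernetes primitives discussed above, replica policies, self-healing probes, and rolling updates, here is a minimal sketch using the official Kubernetes Python client. The image and the "keystone-api" deployment are placeholders standing in for an OpenStack control-plane service, not the product's actual manifests.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="keystone-api",
        image="registry.example.com/openstack/keystone:stable",  # assumed image
        ports=[client.V1ContainerPort(container_port=5000)],
        # Self-healing: Kubernetes restarts the container if this probe fails
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/v3", port=5000),
            initial_delay_seconds=30,
            period_seconds=10,
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="keystone-api", namespace="openstack"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # keep three copies running; failed pods get replaced
            selector=client.V1LabelSelector(match_labels={"app": "keystone-api"}),
            # Rolling updates: replace pods one at a time, never dropping capacity
            strategy=client.V1DeploymentStrategy(
                type="RollingUpdate",
                rolling_update=client.V1RollingUpdateDeployment(
                    max_unavailable=0, max_surge=1
                ),
            ),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "keystone-api"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="openstack", body=deployment)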

Published Date : Sep 14 2020


Speed K8S Dev Ops Secure Supply Chain


 

>> This session will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud-like experience with the flexibility, speed, and security of modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre-sales team here at Mirantis. I spent the last six years working with customers on their containerization journey. One thing almost every one of my customers is focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility at all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits the organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deployed to a container platform. This is fairly common in organizations where full-stack responsibilities have been delegated to the application teams. Even in organizations where full-stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development, or non-production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there's a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development; this can either include Docker at the development layer or a more traditional throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security, and control concerns, which lead to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments it continues the cycle of inefficient resource utilization, and the true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets but that maybe aren't ingrained in the company culture.
Luckily for us, there is a middle ground for all of this. Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with out-of-the-box security and controls that our professional services team and your operations team would otherwise spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, whether it is full-stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback on the health of your applications, observability for your clusters, fast context switching between environments, and the ability to choose the best tool for the task at hand, whether that is graphical-user-interface or command-line-interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds: you get DevOps speed with all the security controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues, and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency, and faster adoption. This all adds up to delivering business value to end users and to the overall perceived value. Now let's see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of the developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, Docker Trusted Registry for our secure container registry, and Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with teams of developers and operators to design a system that provides a fast, consistent, and secure experience for developers that works for any application: brownfield or greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services, GitHub repositories, Jenkins access and jobs, Universal Control Plane and Docker Trusted Registry teams and organizations, Kubernetes namespaces with access control, and Docker Trusted Registry namespaces with access control, image scanning, and promotion policies. So now let's take a look at what this looks like from the CI/CD process, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience, so no matter if we have one developer or 100, we're going to be able to walk through a consistent process using Docker containers at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository, in this case using GitHub.
Then, when Jenkins picks up, it will check out that code from our source code repository, build our Docker containers, test the application, build the image, and then take the image and push it to our Docker Trusted Registry. From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. Once we've signed our images, we deploy our application to dev, so we can test the application deployed in a real environment. Jenkins will then test the deployed application, and if all tests show that it is good, it will promote the image in our Docker Trusted Registry to production. So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it is deployed today. Here, we can see that we have a change that we want to make to our application: the marketing team says we need to change "Containerized NGINX" to something more Mirantis branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. Here's our application with our code loaded, and we're going to be able to use Docker Desktop in our local environment, with the Docker plug-in for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here in our code we'll be able to interact with Docker, make our changes, see it live, and quickly see if our changes actually made the impact that we're expecting in our application. Let's find the title for our application and change it to "Mirantis-ized NGINX" instead of "Containerized NGINX"; we'll change that in the title and on the front page of the application and save it. Having changed our application, we can take a look at our code here in VS Code, and as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of automatically building our application. So now we have a Docker image that has everything we need for our application inside of that image. Here we can just right-click on the image tag that we just created and choose Run; this will interactively run the container for us, and once our container is running we can just right-click and open it up in a browser. Here we can see the change to our application as it exists live. Once we've verified that our application is working as expected, we can stop our container, and then from here we can make that change live by pushing it to our source code repository. So we're going to go ahead and write a commit message to say that we updated to our Mirantis branding, we'll commit that change, and then we'll push it to our source code repository; in this case we're using GitHub. So here in VS Code we have that pushed to our source code repository, and then we'll move on to our next environment, which is Jenkins. Jenkins is going to pick up those changes for our application and check them out from our source code repository: GitHub notifies Jenkins that there is a change, and Jenkins checks out the code and builds our Docker image using the Dockerfile.
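For readers who prefer an API view of the local loop just demonstrated, building the image from the Dockerfile, running it, checking the change, then committing, here is a rough sketch using the Docker SDK for Python. The image name and port mapping are assumptions for illustration.

    import docker

    client = docker.from_env()

    # Build the application image from the local Dockerfile
    image, _ = client.images.build(path=".", tag="nginx-demo:dev")

    # Run it locally, mapping container port 80 to localhost:8080
    container = client.containers.run(
        "nginx-demo:dev", ports={"80/tcp": 8080}, detach=True
    )
    print("Preview the change at http://localhost:8080")

    # Once the change looks right, stop and remove the local container,
    # then commit and push the source change as shown in the demo
    container.stop()
    container.remove()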
So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing the image to our Docker Trusted Registry, scanning and signing it in our Docker Trusted Registry, and then deploying to our development environment. So let's take a look at that development environment as it's been deployed. Here we can see that the title has been updated on our application, so we can verify that it looks good in development. If we jump back to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for the development environment. Everything worked as expected, so it promoted that image to the production repository in our Docker Trusted Registry, where we're then also going to sign that image. So we're signing off that, yes, it has made it through our integration tests, and it's deployed to production. Here in Jenkins we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner.
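A simplified sketch of that promotion step follows, again with the Docker SDK for Python: re-tagging the tested image from a development repository into the production repository and pushing it. In the pipeline described above the promotion is actually policy-driven inside Docker Trusted Registry and signing is handled by Docker Content Trust; the repository names and tag here are placeholders.

    import docker

    client = docker.from_env()

    # Pull the image that passed integration tests in the dev repository
    image = client.images.pull("dtr.example.com/dev/nginx-demo", tag="1.0.3")

    # Re-tag it into the production repository and push
    image.tag("dtr.example.com/prod/nginx-demo", tag="1.0.3")
    for line in client.images.push(
        "dtr.example.com/prod/nginx-demo", tag="1.0.3", stream=True, decode=True
    ):
        print(line)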
Once the images have been mirrored into a staging area of our DACA trusted registry, we can then scan them to ensure that the images meet our security requirements and then, based off the scan result, promote the image toe a public repository where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know a secure and controlled within our environment. So from here we confined our updated doctor image in our doctor trust registry, where we can see that the vulnerabilities have been resolved from a developers point of view, that's about a smooth process gets. Now let's take a look at how we could provide that secure content for developers and our own Dr Trusted registry. So in this case, we're taking a look at our Alpine image that we've mirrored into our doctor trusted registry. Here we're looking at the staging area where the images get temporarily pulled because we have to pull them in order to actually be able to scan them. So here we set up nearing and we can quickly turn it on by making active. Then we can see that our image mirroring will pull our content from Dr Hub and then make it available in our doctor trusted registry in an automatic fashion. So from here, we can actually take a look at the promotions to be able to see how exactly we promote our images. In this case, we created a promotion policy within docker trusted registry that makes it so. That content gets promoted to a public repository for internal users to consume based off of the vulnerabilities that are found or not found inside of the docker image. So are actually users. How they would consume this content is by taking a look at the public to them official images that we've made available here again, Looking at our Alpine image, we can take a look at the tags that exist. We could see that we have our content that has been made available, so we've pulled in all sorts of content from Dr Hub. In this case, we have even pulled in the multi architectural images, which we can scan due to the binary level nature of our scanning solution. Now let's take a look at Len's. Lens provides capabilities to be able to give developers a quick, opinionated view that focuses around how they would want to view, manage and inspect applications to point to a Cooper Days cluster. Lindsay integrates natively out of the box with universal control playing clam bundles so you're automatically generated. Tell certificates from UCP. Just work inside our organization. We want to give our developers the ability to see their applications and a very easy to view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here we can see the pot for application and we click on that. We get instant, detailed feedback about the components and information that this pot is utilizing. We can also see here in Linz that it gives us the ability to quickly switch context between different clusters that we have access to. With that, we also have capabilities to be able to quickly deploy other types of components. One of those is helm charts. Helm charts are a great way to package of applications, especially those that may be more complex to make it much simpler to be able to consume inversion our applications. In this case, let's take a look at the application that we just built and deployed. This case are simple in genetics. 
application has been bundled up as a Helm chart and made available through Lens here. We can just click on the description of our application to see more information about the Helm chart, so we can publish whatever information may be relevant about our application, and with one click we can install our Helm chart here. It will show us the actual details of the Helm chart, so before we install it we can look at the individual components. In this case, we can see that it creates an ingress rule, and then it tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. In this case, we're going to do a quick test, because we're trying to deploy the application from Docker Hub, and in our Universal Control Plane we've turned on Docker Content Trust policy enforcement. So this is actually going to fail to deploy, because we're trying to deploy an application from Docker Hub and the image hasn't been properly signed in our environment. The Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository, and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature, and we can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see that simple NGINX application, and we'll get details around the actual deployment and Helm chart. The nice thing is that Lens provides us this capability here with Helm: being able to see all the components that make up our application from this view gives us that single pane of glass into that specific application, so that we know all the components that were created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule we just talked about; it gives us the details of that. But it also gives us the resources, such as the service, the deployment, and the ingress that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations-controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
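As a rough equivalent of the single-pane-of-glass view Lens gives for the deployed application, here is a hedged sketch that lists the deployment, service, and ingress for the app with the official Kubernetes Python client. The namespace and label selector are assumptions for illustration.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()
    net = client.NetworkingV1Api()

    namespace = "demo"
    selector = "app.kubernetes.io/name=nginx-demo"

    # The same resources Lens surfaces for the application, fetched directly
    for d in apps.list_namespaced_deployment(namespace, label_selector=selector).items:
        print("Deployment:", d.metadata.name, "ready replicas:", d.status.ready_replicas)

    for s in core.list_namespaced_service(namespace, label_selector=selector).items:
        print("Service:", s.metadata.name, s.spec.type)

    for i in net.list_namespaced_ingress(namespace, label_selector=selector).items:
        print("Ingress:", i.metadata.name)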

Published Date : Sep 12 2020


Willem du Plessis V1


 

>> From around the globe, it's theCUBE, with digital coverage of Mirantis Launchpad 2020, brought to you by Mirantis. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Mirantis Launchpad 2020. Happy to welcome to the program first-time guest Willem du Plessis. He's the head of customer success and operations with Mirantis. Willem, thanks so much for joining us. >> Thanks, Stu. Thanks for having me. >> Yeah, why don't we start with a little bit about customer success and operations. Tell us what that entails, what's under your purview. >> So it is basically everything post-sales, right? After a customer has purchased their subscription, we basically take it from there going forward: looking after the relationship with the customer, the whole subscription fulfillment element of it, whether that is onboarding or the relationship management from a post-sales perspective, and so on and so forth. So it is basically end to end, from the point of purchase to the renewal phase, along with any supporting operations. >> Well, that's such an important piece of the whole cloud conversation. Of course, we've talked for such a long time about CapEx and OpEx; we talk about subscriptions, and managed services of course have been a real growth segment of the marketplace. Love to hear a little bit about what you're hearing from your customers, and give us the lay of the land as to the various options that Mirantis is offering today, and we'll get into any of the new pieces also. >> Yeah. So the options that we're making available for our customers are primarily called ProdCare, which is a 24/7 mission-critical support subscription, and OpsCare, which is a fully managed service offering. What we hear from our customers is that the notion of having a development environment and a production environment with different SLAs and entitlements and so on is disappearing, because your DevOps chain, or pipeline, is all connected. Just think for yourself: if you have a group of developers, like 50 or 100 or 1,000 developers, that are basically standing still because they cannot push code, because there's a problem or an issue on the development cloud, but the development cloud is not seen or treated as a mission-critical platform, those developers are standing there idle, and that is a very expensive problem for a customer at that point in time. So the whole chain, the whole pipeline that makes up your development cycle, should be seen as one entity. That's what we're seeing in the market at the moment and what we're realizing with large customers that are really embracing the approach to modern applications, and this is why we're making these options available to our Docker Enterprise customers. We've been running them for quite a while on the Mirantis Cloud Platform, which is our infrastructure-as-a-service offering, and we've had some great success with that, and we're now in a position to make them available to these customers. So it's really providing a customer that true enterprise, mission-critical, regardless-of-the-time-of-day-or-day-of-the-week availability of support, whether that is just a question or whether it is really an outage or a failure.
You know, you've got that safety net that is online and available for you to sort out whatever problem you have; that is from the support perspective. And if we go over to the managed service offering we have, OpsCare, that is a really hardened, ITIL-based infrastructure- or platform-as-a-service offering that we provide. We've had some great success with it, like I said, on the Mirantis Cloud Platform piece, and we're now making that available for Docker Enterprise customers as well. So that is taking the whole chain through: we look after the whole platform for the customer and allow the customer to get on with what is important to them, how they develop their applications and optimize that for their business, instead of spending their time on keeping the lights on, so to speak. We take care of all of that. They hand that responsibility over to us and we manage it as our own, and we basically become an extension of their business. We are fully integrated into the environment, the whole logging and monitoring piece; we take over the whole lifecycle management of the environment, we do the whole change management piece and the incident management piece, and this whole process is truly transparent to the customer. At no point are they in the dark about what's going on or where we're going. And the whole piece is wrapped around by a customer success manager, which brings this whole sense of ownership and priority to that customer. You've got a single point of contact that is your business partner, and the only metric that individual is measured on is the success of that customer with our product. So that, in a nutshell and at a very high level, is what these offerings are all about. >> Well, we all know these days how important it is to make your developers productive. It's funny listening to you; I think back to the times you talked about making sure it's a mission-critical environment. You know, years ago it was like, oh well, the developer just gets whatever old hardware we have and they do it on their own. Now, of course, you want dev and production to have a very similar environment. And as you said, those managed services offerings can be so important, because we want to be able to shift left: let my platform, let my vendors take care of some of the things that are going to enable me to build my new applications and respond to the business. I don't want my developers getting bogged down. So what are some of the successes there? How do your customers measure that they're getting great value from going to a managed service? Obviously you talked about that technical manager that helps them there. Anybody that's used enterprise offerings knows there are certain times where it's like, hey, I use it a lot, and other times it's just nice that it's there. So bring us in a little bit: some of the customers, obviously anonymous, how do they say, you know, this is phenomenal value for my business? >> Yes, it's all about the focus, right? So you, the customer, are
100% focused on what is, like I said, important to them. They are not being distracted at any point or spending time on infrastructure-related or platform-related issues; they are purely focused on what is important to their business. And the successes that we see from that come from having this integration with our customers, a seamless approach. We work with them in a true, transparent way. There's an active dialogue about what the developers want to see from the environment, what the customers want to see from the environment, what is working well, and what we need to optimize. That has really been a good approach for us, and we're seeing some great successes with it. But it all comes down to the customer focusing on one thing, and that is what is important to them and their business, instead of, like you said, focusing on the stuff that should be shifted left. >> Yeah. And Will, is there anything that really stands out when you talk about that? The monitoring and the reporting that you give the customers, is it all self-serve? How do they set that up and make sure, I'm getting valuable data, that's what my company needs? >> Yes. So that is where your customer success manager comes in, and really how to customize that approach to what fits the customer. We've got it very much automated in the background, but we do the tweaks to customize it for the customer in a way that makes sense for them. Some customers want to see very granular details; other customers just want to glance over it and look at the high-level metrics they find important. So it is finding that balance and understanding what your customer finds important, and then putting that in a way that makes sense for them. Now, that might sound kind of obvious, but it's more difficult than you think to put data in front of the customer in a way that makes sense to them, in their context, and then be in a position where you can take the information that you receive and give your customer the runway to plan their application. Where are they trending? So being able to look three, four, five months, a couple of quarters ahead, to say this is where you're going to be if you continue down this path, and we might need to look at shifting direction, or shifting workloads around, or adding resources, depending on the situation. But it's all about having that insight going forward, looking forward, instead of playing things by year end and looking back at the year, because then that is done and dusted, really. So it's all about what is coming down the line, being able to plan for it, and having an educated conversation with your customer about where they want to go. >> You mentioned that part of this offering is making this available for the Docker Enterprise base. Maybe you could explain a little bit as to what's going to be compelling for those customers, what Mirantis has built specifically for that base. >> Yeah. So, like I said, this is an offering we have had available on our Mirantis Cloud Platform for quite a while, and we've seen some great success from it. We're now making it available for the Docker Enterprise customers.
So it is really a true platform-as-a-service offering on your infrastructure of choice, whether that is on-prem or on public cloud; we don't really care, we'll work with customers whichever way it is. And yeah, like I said, it gives that true platform-as-a-service experience to our customers and allows them to focus on what's important to them. >> All right, let me let you have the final word. Will, tell us what you want your customers to understand about Mirantis when they leave Launchpad this year. >> Yeah. So the main theme that I want to leave with is that we've made significant progress over the last several months on the Docker Enterprise side, and we're now in a position where we're taking the next step in making these offerings available for our customers. And we're really there for the customers. The handful of customers that have already migrated to these offerings are giving us some really good feedback; it is really helping them to expedite wherever they're going to go, whatever they want to achieve, to expedite those goals. It is really there to ensure that we provide our customers a true, mission-critical feeling, giving them the support they need when they need it, at the priority, severity, or intensity that they need, as well as providing them the ability to focus on what is important to them and let us look after the infrastructure and platform for them. >> Well, Will, okay. Congratulations on all the work that the team's done, and we definitely look forward to hearing more in the future. >> Excellent. Thank you very much for your time. >> Be sure to check out all the tracks for Mirantis Launchpad 2020, of course powered by CUBE365. We've got the infrastructure track and the developer track, lots of good content, both live and on demand. And I'm Stu Miniman. Thank you for watching. Thank you.

Published Date : Sep 9 2020

SUMMARY :

launchpad 2020 brought to you by more antis. He's the head of customer success in operations with Thanks for having me. Tell us what that's entail, What's what's under your purview, Right? So is everything basically, you know, post sales, right? Love to hear a little bit, you know, What are you hearing from your customers? You know, you know, you've got that safety net that is that is online and available for you. Other times it's it's just nice to be there, but, you know, You know, like you said, focusing on the stuff that shoot me, The monitoring that you in the reporting that you give the customers Is it all self serve? the information that you receive and give your customer the the runway Uh, maybe, if you could explain a little bit as to you know, what's gonna be compelling for for Like I said, just give that true platform as a service experience for our customers, Will tell us what you want your customers to understand you know, the the we've made significant progress Congratulations on Although the work that the team's done and definitely look forward to hearing more in the future. Thank you very much for your time. Thank you for watching.

Dave Van Everen, Mirantis | Mirantis Launchpad 2020 Preview


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

>> Hey, welcome back. Jeff here with theCUBE in our Palo Alto studios today, and we're excited. We're slowly coming out of the summer season and getting ready to jump back into the fall. Of course it's still COVID, and everything is still digital, but what we're seeing is that digital events allow a lot of things you couldn't do in the physical space — mainly getting a lot more people to attend, because they don't have to get on airplanes and fly all over the country. So to preview a brand-new inaugural event coming up in about a month, we have a new guest: Dave Van Everen, senior vice president of marketing for Mirantis. Dave, great to see you.

>> Happy to be here today. Thank you.

>> So tell us about this inaugural event. You know, we did an event with Mirantis years ago — I had to look it up, 2014 or '15. OpenStack was hot, and you guys sponsored a community event in the Bay Area, because the OpenStack events used to move all over the country each year, but you put on the top one here in the Bay Area. Now you're launching something brand new, based on some new activity you've been up to over the last several months. So give us the word.

>> Yeah, absolutely. We have definitely been organizing community events in a variety of open source communities over the years, and we saw really good success with theCUBE at those events in the OpenStack Silicon Valley days. With the way things have gone this year, we've really seen that virtual events can be very successful and provide a new, maybe slightly different form of engagement, but still a very high level of engagement for our guests. So we're excited to put this together and invite the entire cloud-native industry to join us and learn about some of the things Mirantis has been working on in recent months, as well as some of the interesting things going on in the cloud-native and Kubernetes community.

>> Great. So the inaugural event is called Mirantis Launchpad 2020; the where and the when is September 16th. So we're about a month away, and it's all online. Is there registration, is there a cost, or is it free for the community?

>> It's absolutely free. Everyone is welcome to attend — just visit mirantis.com and you'll see the info for registering for the event, and we'd love to see you there. It's going to be a fantastic event. We have multiple tracks catering to developers, operators, general industry, and participants in the community, so we'd be happy to see you join us and learn about some of the things we're working on.

>> That's awesome. So let's back up a step for people who haven't been paying as close attention as they might have. You guys purchased assets from Docker at the end of last year — really took over their enterprise solutions — and you've been doing some work with that. Now, what's interesting is that we covered DockerCon a couple of months ago — or three months ago, time moves fast. They had a tremendously successful digital event: 70,000 registrants, people coming from all over the world. I think their physical event used to be four or five thousand people at the peak, maybe six thousand, so a really tremendous success. But a lot of that success was driven by the strength of the community. The Docker community is so passionate, and what struck me about that event is that this is not the first time these people get together; it is not a once-a-year kind of sharing of information and ideas. The passion, the friendships, and the sharing of information are so good — it's a super rich development community. You guys have now taken advantage of that, but you're doing your Mirantis thing: you're bringing your own technology to it and really taking it to more of an enterprise solution. So I wonder if you can walk people through the process — you have the acquisition late last year, you've been hard at work — what are we going to see on September 16th?

>> Sure, absolutely. And just to give credit to Docker for putting on an amazing event with DockerCon this year — you mentioned 70,000 registrants; that's an astounding number, and it really is a testament to the community they've built over the years and continue to serve. So we're really happy for Docker as they move onto the next path in their journey and focus more on the developer-oriented solution and go-to-market. They did a fantastic job with the event, and I think they continue to connect with their community throughout the year — that's part of what drove so many attendees. As far as our history and progress with Docker Enterprise: in mid-November last year we acquired the Docker Enterprise assets from Docker Inc., and right away we noticed tremendous synergy in our product roadmaps and even in the teams. That came together really quickly, and we started executing on a series of releases that are now being introduced into the market. One was introduced in late May, and that was the first major release of Docker Enterprise produced exclusively by Mirantis. And at the Launchpad 2020 event we're going to announce our next major release of the Docker Enterprise technology, which will for the first time include Kubernetes-related and lifecycle-management-related technology from Mirantis. It's a huge milestone for our company and a huge benefit to our customers and the broader user community around Docker Enterprise. We're super excited to provide a lot of compelling and detailed content around the new technology we'll be announcing at the event.

>> So I'm looking at the website with the agenda, and there's a little teaser right in the middle of the spaceship: Docker Enterprise Container Cloud. You've got a great layout — five tracks: a keynote track, a container track, an operations and IT track, a developer track, and a Kubernetes track. I went ahead and clicked on the keynote track and I see the big reveal. The opening keynote at 8 a.m. on September 16th is Adrian, the CEO, who we've had on many, many times, with the big reveal of Docker Enterprise Container Cloud. So without stealing any thunder, can you give us a little inside baseball on what people should expect, or what they can get excited about, for that big announcement?

>> Sure, absolutely — and I definitely don't want to steal any thunder from Adrian, our CEO. But we did include a few Easter eggs, so to speak, on the website. Docker Enterprise Container Cloud is absolutely the biggest story of the bunch — that's visible on the rocket ship, as you noticed, and in the agenda — and it will be revealed during Adrian's keynote. Every word in the product name is important: it's Docker Enterprise, based on the Docker Enterprise platform, and Container Cloud — and the new word in there really is "Cloud." I think people are going to be surprised at the groundbreaking territory we're forging with this release along the lines of a cloud experience, and what we're going to provide not only to IT operations, the operators and DevOps teams for cloud environments, but also to developers, and the experience we can bring to developers as they become more dependent on Kubernetes and get more hands-on with Kubernetes. We think we're going to provide a lot of ways for them to be more empowered with Kubernetes while at the same time lowering the barrier of entry, because many enterprises have told us that Kubernetes can be difficult for the broader developer community inside the organization to interact with. So this is a strategic underpinning of our product strategy, and it's really the first step in an ongoing launch of technologies that are going to make Kubernetes easier for developers.

>> I was going to say, the other Easter egg that's all over the agenda, as I look through it, is Kubernetes: Kubernetes on your own infrastructure, multi-cloud Kubernetes, Mirantis OpenStack on Kubernetes. So Kubernetes plays a huge part, and we talk a lot about Kubernetes at all the events we cover. But as you said, the newer theme we're hearing a little more about is the difficulty of actually managing it — looking beyond the technology itself to the operations and the execution in production. It sounds like you might have a few things up your sleeve to help people be more successful at actually running Kubernetes in production.

>> Yeah, absolutely. Kubernetes is the focus of most of the companies in our space. We think we have some ideas for how we can really begin to enable it to fulfill its promise as the operating system for the cloud. If we think about the ecosystem that has formed around Kubernetes, it's now really being held back only by user adoption. And so that's where our focus and our product strategy really live: how can we accelerate the move to Kubernetes and to cloud-native applications? To provide that acceleration catalyst, you need to address the needs of the operators — making their lives easier while still giving them the tools they need for things like policy enforcement and operational insights — and at the same time foster a grassroots upswell of developer adoption within the company, and really help the IT operations team serve their customers, the developers, more effectively.

>> Well, Dave, it sounds like a great event. We had a great time covering those OpenStack events with you, we've covered the Docker events for years and years — a super engaged community — and thanks for inviting us back to cover this inaugural event as well. It should be terrific. Everyone, just go to mirantis.com — the big pop-up will jump up, you click on the button, and you can see the full agenda — and get ready for about a month from now, when the big reveal happens on September 16th. Dave, thanks for sharing this quick update with us, and I'm sure we'll be talking a lot more between now and the 16th, because I know there's a CUBE track in there, so we look forward to interviewing our guests as part of the program.

>> Absolutely. Welcome, everyone — join us at the event, and stay tuned for the big reveal.

>> Everybody loves a big reveal. All right, well, thanks a lot, Dave. He's Dave, I'm Jeff, you're watching theCUBE. Thanks for watching. We'll see you next time.

Published Date : Aug 26 2020

SUMMARY :

In this CUBE Conversation from theCUBE's Palo Alto studios, Dave Van Everen, senior vice president of marketing at Mirantis, previews the inaugural Mirantis Launchpad 2020 virtual event on September 16th, which is free to attend via mirantis.com and offers tracks for developers, operators, and the broader community. He recaps Mirantis' acquisition of the Docker Enterprise assets from Docker Inc. in November 2019 and the first Mirantis-produced Docker Enterprise release in late May, and teases the keynote reveal of Docker Enterprise Container Cloud, aimed at making Kubernetes easier for both operators and developers.

Mark Peters, ESG | Pure Accelerate 2019


 

>> From Austin, Texas, it's theCUBE, covering Pure Storage Accelerate 2019. Brought to you by Pure Storage.

>> Howdy y'all, and welcome back to theCUBE, the leader in live coverage. We're covering day two of Pure Accelerate '19 — Lisa Martin with Dave Vellante — and welcoming to theCUBE for the first time, from ESG, Mark Peters, principal analyst.

>> Oh —

>> My apologies. So young.

>> I wish that was true.

>> In fact, one of the first analysts — I think that's true — if not the first analyst ever on theCUBE.

>> Well, then I'll say welcome back. Thank you, we're glad to have you here. So you've been with ESG for quite a while; you know the storage industry inside and out. Pure is just about to celebrate their 10th anniversary; yesterday we heard lots of news, which is always nice for us to have fodder to talk about. But I'd love to get your take on this disruptive company and what they've been able to achieve in their first 10 years, going directly at — as Dave's been saying the last two days — driving a truck through EMC's install base back in the day. Your thoughts on how they've been able to achieve what they have.

>> That allows me to talk about something I really want to talk about, and I think it addresses your question: how have they been able to do it? It's by being different. Obviously you do a stack of interviews here, and maybe other people have talked about that, but when I say different, I don't necessarily mean technology. I have a kind of standard riff in this business that we get so embroiled in the technology — do not for one second think it's not important — but we get so embroiled in it that we miss the human element, or the emotional element, and I think that's important. So they were very different: they created these armies of fans who just bought into what they did. Now, of course, that was based initially on bringing flash to the market and making flash affordable. They've extended that here with the //C announcement and other things as well, so I don't want to just focus on that — they continue to do things differently with the technology — but I think what really made them an attractive company, and why they've survived 10 years and are now big and sizable, is that they were a different sort of company to deal with.

>> Are you at all surprised that the fourth Accelerate is in Austin, Texas — Dell's backyard? They're disruptive, they're different, they're bold.

>> You see — but also, did you go to the other three?

>> Uh, the last two. I was trying to remind myself where they were.

>> I know one was kind of on a pier, and one in a ballpark in San Francisco. You remember the one that was on that wharf — a rusting shipyard, so cool it was, but it was a metaphor: a rusting spinning disk, right? It was also such a different sort of place. And then the last one was in some sort of concert hall. So they were all different. And yes, I know this is Dell's backyard — probably literally, because I'm sure Michael owns a lot of the place. It's also kind of a very normal place, and so there's a little bit of me that — I don't want to use the word worry — but as you grow up, and of course we've got the 10-year anniversary, we're in Austin... what's the tagline of Austin?

>> Keep Austin weird.

>> I don't want to suggest Pure is weird, but they were always a little different, as I said. That's why I think they were attractive as much as anything; that's why they had the hordes of admiring fans, all wearing their orange socks and T-shirts and cheering. And as they get older, as they get more mature, as they expand their portfolio — Charlie was on stage talking not so much about scale being the problem, when he was asked, but more about complexity, and as you get more complex you actually get more normal — so I don't know that weird is the word, but, a bit like Austin, Pure needs to keep Pure interesting.

>> I like that. Very interesting.

>> So, you and I, we've been around a while; we're kind of students of the industry. I was commenting earlier that it's just, to me, very impressive that this company has achieved a new definition of escape velocity, exceeding a billion dollars — the first company since NetApp to do it. I've got the list: 3PAR couldn't do it, Compellent, Data Domain, Isilon, EqualLogic, LeftHand — really good companies, all very successful companies, all coming out of the dot-com crash; maybe that's part of it, and Pure kind of came out of the recession. Why do you think Pure has been able to achieve that — four times 3PAR, for example, in terms of revenue, and it's got a ways to go: they'll probably do 1.7 this year, and I think they have aspirations for five; I'm not sure they've publicly stated that, but they probably have, right? Of course — why wouldn't they? Your thoughts on why they were able to achieve that; what were the sort of factors?

>> Genuinely, I had no idea what you were going to ask me, and now, actually listening to the question, you've made me think of something I had not really thought about while you took so long to formulate it. You used the phrase escape velocity. Let's think about planes: I think it's V1, isn't it, to take off? Maybe not the same as escape velocity, which is into the skies, but you get the point — how long to really take off, to be independently airborne? They gave themselves — I don't know how much was by design or by default, or how it really happened — an immensely long runway. The whole conversation about Pure for years and years was, "Oh yeah, they're making loads of revenue, but they lose 80 cents every time they bring in 50." That was the conversation for years; I know they've now turned that corner. And the difference, the more I think about it — yes, you can talk about product, yes, you can talk about the experience; those things are both part of it. But the other companies you named had cool things too. They all had cool products: what was it, the autopilot thing with Compellent? And they had lots of people cheering. 3PAR was yellow and kind of cool, in a different part of the market, and disruptive. But they were both trying to get to the exit fast — whether the exit was being bought or going under, it was going to be one or the other — and for both of them, they got bought. I don't think Pure had that same intention, and it certainly had funding and backers that allowed it to take longer.

>> That's a really good point. I think there's a new Silicon Valley playbook. You saw it with ServiceNow, with Frank Slootman — the Silicon Valley mafia: Slootman, Dietzen, Bhusri at Workday — they all raised a boatload of cash and sacrificed profits for growth. I remember Dave Scott telling me that when he came on, the board was saying, "Hey, we're prepared to raise 30 million," and he said, "I need 80" — and 80 is chump change today compared to what these guys were raising.

>> Well, they pretty quickly raised hundreds of millions, didn't they? They weren't scraping by on 50 or 80 million, which is what you used to see. I just want one more thought on this escape velocity idea, because the other thing about escape velocity is partly how long you take — runway, orbit, whatever — but it's also the payload. The bigger the payload, the longer it takes to get off the ground, or the more thrust you need — and thrust, in this case, is money again. And if you think about it — and we've been doing this a long time — the storage industry over decades has been one of the easiest industries to enter and one of the hardest to actually do well in. Why is that? Because the payload is heavy. It's easy to make a box that works — fast, big, whatever you want — in your garage, two men and one application, working for a day. It's really hard to be interoperable with every app, every other system, every operational need, and so on. The payload to be successful is heavy. I think they understood that too, so they didn't let themselves get distracted by the initial shiny, glittery "we need to get out of this business fast."

>> I love the parallels with payloads and rockets, because of course we had Leland Melvin in our keynote this morning — I'm a former NASA geek. Talk to us about your thoughts on their cloud strategy, the evolution of the partnership with AWS — we talked about that yesterday — sort of the customers being the forcing function bringing this together, but Pure being able to simplify and give customers this Pure management plane at the software layer, wherever their data is. Your thoughts on how they're positioning themselves for a multi-cloud, hybrid world?

>> Okay, two thoughts: one, cloud; and then you also used the word simplicity, so I want to talk about both of those if I can. I'm sorry, this is not a very good answer, but I think it's the truth: you can't exist in this world if you haven't got a cloud story, and it had better be hybrid, or public, or multi, whichever you prefer — I think those have very distinct meanings, by the way, but we'd be here for an hour and a half; it would be a CUBE special to really get into that. So you've got to do this. Almost none of the clients they're dealing with lacks a cloud strategy — and that's not research, I'll talk research in a second, that's a glib statement. It doesn't matter which analyst company's data you put up; we'll all show a high number of people who say they have a cloud-first strategy, whether that's overall or just for new applications or whatever. So they've got to do it. What's crucial to whether or not they succeed is not the AWS branding, because everyone's got AWS branding — even people who don't work with them or will not work with them in the next year or two. But simplicity is really important. So, as David knows, we do a lot of research — we talked about it yesterday — and one of our cornerstone pieces of research is the spending-intentions study we do every year. One of the questions this year — it's been in for a couple of years now — asks a simple question (excuse the overuse of the word): how much more or less complex is IT, in your experience, than it was two years ago? IT broadly. And I love this question, and you know the answer: 66% of people say it is more complex now than it was two years ago. People don't want complexity, and we all know there are not enough skills around — there's research to back that up as well. So simplicity is really important, because — whoever was sitting in this seat before me — I think we'd all say this company was founded on simplicity. That was the point: they were to be the Apple of storage. I think that's why people love them; they were just very easy to use. And so, coming finally back to your question: if they can do this and keep it simple, then they have a better chance of success than others. But how do you define success for them? Is it keeping their customers, or getting new ones? That's a challenge.

>> They do have a very high retention rate — I want to say like 140%...

>> How do they get that?

>> So this is interesting: it's actually a 150% renewal rate, by the Mike Scarpelli CFO math of renewal rates — on a dollar value, a net-dollar-value renewal rate on subscriptions. Mike Scarpelli was the CFO of ServiceNow, invented this model, and ServiceNow had, like, 120-something percent. So it's a revenue-based renewal. Makes sense? Sorry, for one second —

>> You're retaining more revenue than you —

>> Go ahead. 150% is insane; 105% is great — 150%... I interrupted your question.

>> Well, I'm just saying it's good. Good nuance.

>> Yes, thanks for clarifying it. You know, companies can say — whether it's one of Pure's customers, or Pure themselves, or competitors — "we are cloud-first, we have a cloud-first strategy," and a company like Pure can say "we deliver simplicity," but those are marketing terms until they're actually put in the field and delivered. So, from your perspective, how does Pure take what IT professionals are saying — that things are so much more complex these days — and commit to "simple, seamless, sustainable," like Charlie Giancarlo said yesterday, and actually make that a reality?

>> Well, obviously that's their challenge, and that's what they have to work at to some degree. And this comes back to what I was saying: to some degree it becomes self-fulfilling, because that's why your customers come back with more money — they bought into this. So as long as they're kept happy, they're probably not going to go and look at 20 other vendors. I'm not saying they didn't have that simplicity to start off with, but it's very interesting: if you go to a Pure event, their customers — and this might be sacrilege, sitting in this environment — don't talk about the product. They talk about the company, right? The experience — there's that word again — of being a Pure customer. So they're into it; they've bought into whatever this is. And as long as the product — please do not strike me down — is good enough... I'm not saying that's all it is; I think it's a lot better than that. But as long as it's good enough and you're really well looked after — a few minutes ago I was saying that's why I think this market is about so much more than just how fast can you make the box, how big can you make the box, how smart can you make the box. All of those are interesting, but ultimately — I'm only looking at Dave because he's so old — ultimately, technology is a leapfrog game.

>> Yeah, and branding is not. So that's a good point. But we've not seen the competitors be able to leapfrog Pure, or be able to neutralize them the way, for example, EMC was able to somewhat neutralize 3PAR by saying, "Oh yeah, we have virtualization too" — or thin provisioning, rather. And even though they had a thin-provisioning bolt-on, it was good enough; they ticked the checkbox. You haven't seen the competitors be able to do that here. I'm not saying they won't, but have they?

>> I think — I was going to say "basically, this is my MBA," but I don't have one, so I can't say that; I've read the books — if you look at Harvard Business School cases, the mistake made by the competition was to assume that Pure would go away: that they would each try it, or that it would fail, and "we'll make fun of the fact they don't make any money for the first few years," and "the people going to them will be sadly mistaken when they can't handle these features" — whether that's cloud, or analytics, or FlashBlade, or whatever else gets added on. They thought they would just go away. There are great parallels in history where you let competition in and keep thinking, at each point, that they're going to go away. Spot the accent: the British motorcycle industry. When the Japanese came in, they literally said, "Well, let them have the 50cc market, because we don't really care about that; we'll make the big bikes." Okay, well, let them have 150 and 200cc, because really that doesn't matter. And ten years later there was no industry.

>> Well, and I think what happened with EMC in particular — because, let's face it, Pure hired a bunch of EMC reps, they took your product and, as I've said before, they drove a truck through the Symmetrix and VNX install base. EMC responded by buying XtremIO, and they said, "You know what, we're sick of losing to Pure; we're going to go really aggressive into our own accounts and we're going to keep them with flash." And then what happened is their accounts said, "Hey, we're good — we don't actually need more storage," because EMC was trying to keep both lines alive, and now they were conflicted. Pure, you know, had a single mission.

>> You've brought up a great point there. The thing about that is, if you look at how EMC — using my words carefully — used to act (and I think you've said this too, so I'm not criticizing; they were an exceptionally organized marketing organization): "we go that way," and if you're not going that way, you've got a big problem, both as a customer and as an employee. But the problem with that is that "that way" would sometimes become this way, and then another way, on to another product, depending on what was doing well. So, for example, they had tens of thousands of feet on the street all marching to the XtremIO beat for a few quarters, and then they would go off onto the next product. Pure just carried on marching to its own beat down that runway — your escape velocity question.

>> A point you brought up a minute ago that I think is really interesting, before we wrap here, is that — you're right — their customers talk about the experience. We were talking with a customer yesterday; Dave was asking, "Well, what technologies are you using?" and he started talking about workloads. When we're at other events you hear the names of boxes brought up; here, to your point, it is all about the experience. So, interesting how they can continue to just be different. But to wrap things up — since they're in my ear telling me we're almost at time — I just want to take a minute to ask you about upcoming research. What are some of the things you're working on that are really intriguing you in ESG land?

>> Right now, from my perspective — as a company we continue to do 27,000 different things, because there's so much going on in the market — whether that's security, which is a massive area of focus right now, or improvements in networking. So it's not just the regular run-of-the-mill "bigger, faster, cheaper," which is always there, or AI, of course. In all of these — and again, you may both know — we're always looking at buying intentions rather than counting boxes, so it's really about where people are moving over the next few years. That said, to me, what's really interesting is two other things. Number one — and I don't think we can really measure this easily — is to what extent we can get people talking about, and acknowledging, that emotions, attitudes, and experiences are an important part of this business. I'm old enough that I'm not scared of saying it, and I think Pure as a company is not scared of saying it. A lot of companies don't want to admit it, yet we all know that they have different corporate cultures and mantras and views, and their customers reflect that. And the other thing, just generally, is the future of IT as a whole. I know that's broad — I'm doing it because none of us really knows what that is — but clearly we've got to stop talking about "the cloud" at some point. It's just part of IT; it's not a thing as such, it's just another resource that you bring to bear. I don't know that we're yet at that point, but that's got to happen.

>> Interesting. Thanks for looking into the crystal ball for us. Mark, I wish we had more time, because I know we could keep talking, but it's been a pleasure to have you.

>> We didn't even get to the whole multi-cloud, hybrid-cloud discussion — that's the hour and a half.

>> We'll come back and have that discussion — what it all means. Yeah, back anytime.

>> Excellent. Thank you for joining Dave and me. For Dave Vellante, I'm Lisa Martin. You're watching theCUBE from Pure Accelerate '19.
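The 150% figure discussed above is a net-dollar (revenue-based) retention rate, not a logo-count renewal rate, and the distinction is easy to show with a toy calculation. The sketch below is not ESG or Pure Storage data; the cohort numbers are invented purely to illustrate the arithmetic.

```python
# Toy net revenue retention (NRR) calculation - the "CFO math" of
# dollar-based renewal rates. All figures are invented for illustration.

def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR = (starting ARR + expansion - contraction - churn) / starting ARR."""
    return (start_arr + expansion - contraction - churn) / start_arr

cohort_start_arr = 10_000_000   # ARR of last year's customer cohort
expansion        = 6_000_000    # upsell and cross-sell into that same cohort
contraction      = 500_000      # downgrades
churn            = 500_000      # customers who left entirely

nrr = net_revenue_retention(cohort_start_arr, expansion, contraction, churn)
print(f"Net revenue retention: {nrr:.0%}")   # -> 150%
```

A logo renewal rate can never exceed 100%, which is why a dollar-based figure above 100% signals that existing customers are spending meaningfully more, not merely sticking around.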

Published Date : Sep 18 2019

SUMMARY :

Mark Peters, principal analyst at ESG, joins Dave Vellante and Lisa Martin on day two of Pure Accelerate 2019 in Austin. He argues that Pure Storage's first decade was built as much on being a different kind of company — simplicity, customer experience, and an unusually long funding runway — as on its flash technology, helping it pass a billion dollars in revenue where earlier successes such as 3PAR, Compellent, Data Domain, Isilon, EqualLogic, and LeftHand sold before getting there. The conversation covers Pure's cloud strategy and net-dollar retention, ESG research showing 66% of respondents find IT more complex than two years ago, why competitors failed to neutralize Pure, and the research themes ESG is pursuing next.

Bob Ghaffari, Intel Corporation | VMworld 2019


 

>> live from San Francisco, celebrating 10 years of high tech coverage. It's the Cube covering Veum World 2019. Brought to you by VM Wear and its ecosystem partners. >> Welcome back. We're here. Of'em World 2019. You're watching the Cubans? Our 10th year of coverage at the event. I'm stupid. And my co host this afternoon is Justin Warren. And happy to welcome back to the program. Bob Ghaffari, who's the general manager of the Enterprise and Claude networking division at Intel. Bob, welcome back. Great. Great to be here. Thank you. S Oh, uh, you know, it's a dressing. And I think that last year I felt like every single show that I went to there was an Intel executive up on the stage. You know, there's a way we talked about. You know, the tic tac of the industry is something that drove things. So last year? Ah, lot going on. Um, haven't seen intel quite as much, but we know that means that, you know, you're you and your team aren't really busy. You know a lot of things going on here. VM worldwide. Give us the update since last we spoke. Well, you know, um >> So I think we have to just go back a little bit in terms of how until has been involved in terms of really driving. Just hold this whole network transformation. I want to say it started about a decade ago when we were really focused on trying to go Dr. You know, a lot of the capabilities on to more of a standard architecture, right? In the past, you know, people were encumbered by challenging architectures, you know, using, you know, proprietary kind of network processors. We were able to bring this together until architecture we open source dp decay, which is really this fast packet processing, you know, library that we basically enabled the industry on. And with that, there's basically been this. I want to say this revolution in terms of how networking has come together. And so what we've seen since last year is you know how NSX via Miranda sex itself has really grown up and be able to sort of get to these newer, interesting usage models. And so, for us, you know what really gets us excited is being really involved with enabling hybrid cloud multi cloud from a network perspective. And that's just what really gets me out of bed every day. Yeah, An s >> t n is, I think, gone from that early days where it was all a bit scary and new, and people weren't quite sure that they wanted to have that. Whereas now Stu is the thing, it's people are quite happy and comfortable to use it. It's it's now a very accepted way of doing networking. What have you noticed about that change where people have gone? Well, actually, it's accepted. Now, what is that enabling customers to do with S T. N. >> You know, um I mean, I think what you know S Dan really does. It gives you a lot of the enterprise customers and cloud customers, and a lot of other is really the flexibility to be able to do what you really need to do much better. And so if you can imagine the first stage, we had to go get a lot of the functions virtualized, right? So we did that over the last 10 years, getting the functions virtualized, getting him optimized and making sure that the performance is there as a virtual function. The next step here is really trying to make sure that you know you weaken enable customers to be able to do what they need to end their micro service's and feels. Or do this in a micro segmented kind of view. 
When and so um and also being in a scenario, we don't have to trombone the traffic, you know, off to be there, be it's inspected or, you know, our load balance and bringing that capability in a way, in a distributed fashion to where the workloads Neto happen. >> Yeah, who you mentioned micro segmentation there, And that's something which has been spoken about again for quite a while. What's the state of play with micro segmentation? Because it some customs have been trying to use it and found it a little bit tricky. And so they were seeing lots of vendors who come in and say We'll help you manage that. What's the state of play with Michael segmentation From your perspective, >> you know, I would say the way I would categorize it as micro segmentation has definitely become a very important usage model. In turn, how did really contain, you know, uh, policies within certain segments, right? So, one you know, you're able to sort of get to a better way of managing your environments. And you're also getting to a better way of containing any kind of threats. And so the fact that you can somehow, you know, segment off, um, you know, areas and FAA. And if you basically get some kind of, like attack or some kind of, you know, exploit, it's not gonna, you know, will go out of that segmented area to to some extent, that simplifies how you look at your environment, but you want to be able to do it in the fashion that you know, helps. Ultimately, the enterprises managed what they got on their environments. >> So, Bob, one of things that really struck me last year was the messaging that VM were had around networking specifically around multi cloud. It really hearken back to what I had heard from my syrup reacquisition on. Of course. Now, Veum, we're extending that with of'em or cloud in all of you know, aws the partnerships they are false, extended with azure, with Google in non premises with Delhi emcee and others. And a big piece of that message is we're gonna be able to have the same stack on on both sides. You could kind of explain. Where does Intel fit in there? How does Intel's networking multi cloud story dovetail with what we're hearing from VM? Where Right, So I >> think >> the first thing is that until has been very involved in terms of being into, um, any on Prem or public clouds, we get really involved there. What were you really trying to do on my team does is really focusing on the networking aspects. And so, for us is to not only make sure that if you're running something on prime, you get the best experience on from but also the consistency of having a lot of the key instruction sets and any cloud and be able to sort of, ah, you know, managed that ballistically, especially when you're looking at a hybrid cloud environment where you're basically trying to communicate between a certain cloud. It could be on Prem to another cloud that might be somewhere else. Having the consistent way of managing through encrypted tunnels and making sure you're getting the kind of performance that you need to be able to go address that I think these are the kind of things that we really focus on, and I think that for us, it's not only really bring this out and, um improving our instructions that architecture's so most recently What we did is, you know, we launched our second generations Aeon Scaleable processors that really came out in April, and so for us that really takes it to the next level. 
We get some really interesting new instruction, sets things like a V X 5 12 We get also other kind of, you know, you know more of, like inference, analytic inference capabilities with things like Deal Boost that really brings things together so you can be more effective and efficient in terms of how you look at your workloads and what you need to do with them, making sure they're secure but also giving you the insights that you need to be able to make that kind of decisions you want from a enterprise perspective >> steward. It always amuses me how much Intel is involved in all of his cloud stuff when it it would support. We don't care about hardware anymore. It's all terribly obstructed. And come >> on, Justin, there is no cloud. It's just someone tells his computer and there's a reasonable chance there's an Intel component or two Wednesday, right? >> Isn't Intel intelligence and the fact that Intel comes out and is continuing to talk to customers and coming to these kinds of events and showing that it's still relevant, and the technology that you're creating? Exactly how that ties into what's happening in cloud and in networking, I think is an amazing credit to what? To Intel's ability to adapt. >> You know, it's definitely been very exciting, and so not only have we really been focused on, how do we really expand our processor franchise really getting the key capabilities we need. So any time, anywhere you're doing any kind of computer, we want to make sure we're doing the best for our customers as possible. But in addition to that, what we've really done is we've been helped us around doubt our platform capabilities from a solution perspective to really bring out not only what has historically been a very strong franchise, pressed with her what we call our foundational nicks or network interface cards, but we've been eldest would expand that to be able to bring better capabilities no matter what you're trying to do. So let's say, for example, you know, um, you are a customer that wants to be able to do something unique, and you want to be able to sort of accelerate, you know, your own specific networking kind of functions or virtual switches. Well, we have the ability to do that. And so, with her intel, f p g. A and 3000 card as an example, you get that capability to be able to expand what you would traditionally do from a platform level perspective. >> I want to talk about the edge, but before we go there, there's a topic that's hot conversation here. But when I've been talking to Intel for a lot of years out container ization in general and kubernetes more specifically, you know, where does that fit into your group? I mentioned it just cause you know that the last time Intel Developer forum happened, a friend of mine gave a presentation working for intel, and, you know, just talking about how much was going on in that space on. Do you know, I made a comment back there this few years ago. You know, we just spent over a decade fixing all the networking and storage issues with virtualization. Aren't we going to have to do that again? And containers Asian? Of course, we know way are having toe solve some of those things again. So, you >> know, and for us, you know, as you guys probably know, until it's been really involved in one of the biggest things that you know sometimes it's kept as a secret is that we're probably one of the bigger, um, employers of software engineers. And so until was really, really involved. 
We have a lot of people that started off with, you know, open source of clinics and being involved there. And, of course, containers is sort of evolution to that. And for us really trying to be involved in making sure that we can sort of bring the capabilities that's needed from our instructions, said architecture is to be able to do containers kubernetes, and, you know, to do this efficient, efficiently and effectively is definitely key to what we want to get done. >> All right, so that was a setup. I I wanted for EJ computing because a lot of these we have different architectures we're gonna be doing when we're getting to the edge starting here. A little bit of that show that this show. But it's in overall piece of that multi cloud architecture that we're starting to build out. You know, where's your play? >> Well, so for us, I mean the way that we look at it as we think it starts all, obviously with the network. So when you are really trying to do things often times Dedge is the closest to word that data is being, you know, realized. And so for us making sure that, you know, we have the right kind of platform level capabilities that can take this data. And then you have to do something with this data. So there's a computer aspect to it, and then you have to be able to really ship it somewhere else, right? And so it's basically going to be to another cloud and might be to another micro server somewhere else. And so for us, what really sets the foundation is having a scale will set a platform sort of this thick, too thin kind of concept. That sort of says, depending on what you're trying to do, what you need to have something that could go the answer mold into that. And so for us, having a scaleable platform that can go from our Biggers eons down to an Adam processor is really important. And then also what we've been doing is working with the ecosystem to make sure that the network functions and software defined when and you know that we think sets a foundation to how you want to go and live in this multi cloud world. But starting off of the edge, you want to make sure that that is really effective, efficient. We can basically provide this in a very efficient capability because there's some areas where you know this. It's gonna be very price sensitive. So we think we have this awesome capability here with our Adam processors. In fact, yesterday was really interesting. We had Tom Burns and Tom Gillis basically get on the stage and talk about how Dell and VM we're collaborating on this. Um, and this basically revolves around platforms based on the Adam Process sitter, and that could scale up to our ze aan de processors and above that, so it depends on what you're trying to do, and we've been working with our partners to make sure that these functions that start off with networker optimized and you can do as much compute auras little computer as you want on that edge >> off the customers who were starting to use age because it's it's kind of you, but it's also kind of not. It's been around for a while. We just used to call it other things, like robots for the customers who were using engine the moment. What's what's the most surprising thing that you've seen them do with your technology? >> You know what is interesting is, you know, we sometimes get surprised by this ourselves and so one of the things that you know, some customers say, Well, you know, we really need low cost because all we really care about is just low level. 
You know, we we want to build the deploy this into a cafe, and we don't think you're gonna be all that the price spot because they automatically think that all intel does is Biggs eons, and we do a great job with that. But what is really interesting is that with their aunt in processors, we get to these very interesting, you know, solutions that are cost effective and yet gives you the scalability of what you might want to do. And so, for example, you know, we've seen customers that say, Yeah, you know, we want to start off with this, but you know, I'm networking, is it? But you know what? We have this plan, and this plan is like this. Maybe it's a 90 day plan or it could be up to a two year plan in terms of how they want to bring more capabilities at that branch and want to want to be able to do more. They want to be able to compute more. They want to make decisions more. They want to be able to give their customers at that place a much better experience that we think we have a really good position here with their platforms and giving you this mix and match capability, but easily built to scale up and do what our customers want. Great >> Bob, You know, when I think about this space in general, we haven't talked about five g yet, and you know, five g WiFi six, you know, expected to have a significant impact on networking. We're talking a little bit about you know edge. It's gonna play in that environment. Uh, what do you hear from Augusta Summers? How much is that involved with the activities you're working through? You know, >> it's definitely, really interesting. So, uh, five g is definitely getting a lot of hype. Were very, very involved. We've been working on this for a while until it's, uh, on the forefront of enabling five G, especially as it relates to network infrastructure, one of the key focus areas for us. And so the way that we sort of look at this on the edges that a lot of enterprises, some of them are gonna be leading, especially for cases where Leighton see is really important. You want to be able to make decisions, you know, really rather quickly. You want to be able to process it right there. Five g is gonna be one of these interesting technologies that starts, and we're already starting to see it enabled these new or used cases, and so we're definitely really excited about that. We're already starting to see this in stadium experience being enabled by five G what we're doing on the edge. There's experiences like that that we really get excited when we're part of, and we're really able to provide this model of enabling, you know, these new usage models. So for us, you know the connectivity aspects five g is important. Of course, you know, we're going to see a lot of work clothes used for G as basically predominant option. And, of course, the standard wired connective ity of I p m pl less and other things. >> I want to give you the final word. Obviously, Intel long partnership. As we know you know, current CEO Pack else under, you know, spent a good part of his, you know, early part of career at Intel. Give us the takeaway intel VM wear from VM 2019. You know, I mean, we've had a >> long partnership here between intel on VM, where we definitely value the partnership for us. It started off with virtual light servers a while back. Now we've been working on networking and so for us, the partnership has been incredible. You know, we continue to be able to work together. Of course. You know, we continue to see challenges as we go into hybrid cloud Malta Cloud. 
We are very excited about how we can take this to the next level, and, you know, we're very happy to be great partners with them. >> All right. Well, Bob Ghaffari, thank you for giving us the Intel networking update. We go up the stack, down the stack, multi-cloud, out to the edge, IoT, and all the applications. For Justin Warren, I'm Stu Miniman. We'll be back for our continuing coverage of VMworld 2019. Thanks for watching theCUBE.

Published Date : Aug 27 2019
