Image Title

Search Results for Moran:

Keith Moran, Nutanix | VMworld 2018


 

>> Live from Las Vegas, it's theCUBE covering VMworld 2018. Brought to you by VMware and its ecosystem partners. >> Welcome back to theCUBE's coverage of VMworld 2018. Two sets, wall-to-wall coverage. We had Michael Dell on this morning. We had Pat Gelsinger on this afternoon. And happy to welcome to the program, first time guest, Keith Moran, who's the vice president with Nutanix. Keith, I've talked to you lots about theCUBE, you've watched theCUBE, first time on theCUBE. Thanks so much for joining us. Yeah, thanks for having me. It's a great show. >> Alright, so let's set the stage here. We're here in Vegas. It's my ninth year doing VMworld. How many of these have you done? >> So this is my fourth. >> Yeah? How's the energy of the show? The expo hall's hopping. You guys have a nice booth. What are you hearing from the customers here? >> I think that we're seeing just a lot of discussion around where the market's going with hybrid cloud. I think that it's a massive opportunity. I think people are trying to connect the dots on where it's going in the next five years. The vibe's extremely strong right now. >> I've met you at some of the Nutanix shows in the past and seen you at some of these, but tell us a little bit about your role, how long you've been there, where you came from before. >> I run the Central US for Nutanix, and I spent a long time in the converged, whether it was that app at EMC, through a few start-ups, and then I've been at Nutanix for four years. It's been a great ride, seeing how the market's adopting to hyperconverged. The core problem and vision that Dheeraj saw nine years ago is playing out. He's five chess moves ahead of everyone. I think there's, again, a massive opportunity as we move forward. >> Keith, I love your to share. I love people in the field. You're talking to customers every day. You hear their mindset. I think back over the last 15 years in my career, and when Blade Server first came out, or when we started building converged solutions. It was like, "Oh, wait." Getting the organization together, sorting out the budgets. There were so many hurdles because this was the way we did things, and this is the way we're organized, and this is the way the budgets go. I think we've worked through a number of those, but I'd love to hear from you where we are with most customers, how many of them are on board, and doing more things, modernizing, and making changes, and being more flexible. >> Yeah, so I think you're spot on in the sense that the silos was the enemy in the sense that people were doing business as usual and that there was process, and they didn't want to take risks. But I think that the wave of disruption has been so strong and that we're in this period of mass extinction where customers, They don't have a choice anymore. That they have to protect against the competitive threat or exploit opportunity, and I think that the speed and the agility with hyperconverged is, And what the market disruption is forcing them to make those changes and forcing them to innovate. At the end of the day, that's their core revenue stream is how they experiment, how they innovate. Again, you're seeing the disruptions coming so fast that people are changing to survive. >> Yeah, we have some interesting paradoxes in the industry. We're talking about things like hyperconverged, yet really what we're trying to do is build distributed architectures. >> Correct. >> We're talking about, "Oh, well I want simplicity, and I want to get rid of the silos, but now I've got multicloud environment where I've got lots of different SaaS pieces, I've got multiple public clouds, I often have multiple vendors in my public cloud, and I've like recreated silos and certifications and expertise." How do customers deal with that? How do you help, and your team help to educate and get them up so that hopefully the new modern era is a little bit better than what they were dealing with? >> Yeah, and I think that's part of where the opportunity is. I think that the private cloud people don't do public well, and I don't think that the public cloud vendors do private well. So that's why the opportunity's so big. And I think for us, we're going to continue to harden the IaaS stack of what we built, and then our vision is how do we build a control plane for the next generation. If you look at our acquisition strategy, and where we're putting in it, how do you have a single operating system that spans the user experience from the public to private, making an exact replica. Again, I think customers are struggling with this problem and that as apps scale up, and scale down, and the demand for them, that they want this ability to course correct and be able to move VMs and containers in a very seamless fashion from one app to the next and adjust for the business market conditions. >> Yeah, I had a comment actually by one of my guests this week. We now have pervasive multicloud. We spent a few years sorting out who are the public clouds going to be. And there's still moves and changes, but we know there's a handful of the real big guys, then there's the next tier of all of the server providers, and the software players, like Nutanix. Look, you're not trying to become a competitor at Amazon or Google. You're partners. I see Nutanix at those shows. So maybe explain what's the long-term strategy. How does Nutanix, as you've been talking about enterprise cloud for a number of years, but what's that long-term vision as to how Nutanix plays in this ecosystem? >> Yeah. So for us I think part of it is our own cloud, which is Xi, and it's living in this multicloud world where our customer can do DRs of service with that single operating system, moving it from a Nutanix on-prem solution, moving it to a Nutanix cloud, moving it to Azure, moving it up to TCP, or moving it to AWS. And they have to do with it with thought because clearly there are so many interdependencies with these apps. There's governance, there's laws of the land, there's physics. There's so many things that are going to make this a complex equation for customers. But again, they're demanding, and that's forcing the issue where customers have to make these decisions. >> Keith, I want to hear, when you talk to your customers, where are they with their cloud strategy? I heard a one conference, 85% of customers have a cloud strategy, and I kind of put tongue in cheek. I said, "Well 15% of the people got to figure something out, and the other 85, when you talk to them next quarter, the strategy probably has changed quite a bit." Because things are changing fast, and you need to be agile and be able to change and adjust with what's going on. So where do your customers, I'm sure it's a big spectrum but? >> It is. The interesting thing for me for cloud is on average, we're seeing that the utilization rate, specifically in AWS, is somewhere in the 25% rate for reserved entrance, which was very surprising to me because the whole point of cloud is to test it, to deploy it, and to scale up, and if you're running in an environment where the utilization rate that the economics aren't working. So I think that people are starting to look at, alright, what are the economics behind the app? Does it make sense in the cloud? Does it make sense on-prem? Again, what are the interdependencies of it? The classic problems they're having are still around. They're spending 80% of their time just managing firmware and drivers and spending thousands of hours per quarter just troubleshooting and not impacting the business. So I think, fundamentally, that's what the customers are trying to solve is how do we get out of this business of spending all our time keeping the lights on and how do we drive innovation. And that ratio has been historically for 20 years. And I think, again, Nutanix helps drive that in the sense that we're helping customers shift that ratio and that pain. I always say, "Put your smartest people on your hardest problems," and when you've got these high-end SAN administrators spending a lot of time, they should be working on automation, orchestration, repeatable process that gives scale and again, impacts the business. >> Yeah. A line that I used at your most recent Nutanix show is talking to customers. Step one was modernize the platform, and step two, they could modernize the application. >> Absolutely. >> Speak a little bit to that because in this environment, we know the journey we went through to virtualize a lot of applications. I talked to a Nutanix customer this morning and talked about deploying Oracle, and I said, "Tell me how that was," because how many years did we spend fighting as customers? "You want to virtualize Oracle?" And Oracle would be like, "No, no, no. You have to use OVM. You have to use Oracle this. You have to use Oracle that." We've gone through that. And is it certified on Nutanix? It's good to go. It's ready to go. He's like, "It was pretty easy." And I'm like, it's so refreshing to see that. But when you talk about new modern applications and customers have this whole journey to embrace things like Agile, LMC ICD, and the like. Where does Nutanix play in this, and how are you helping? >> Yeah, so I think on the first. When you look at the classic database, so things like Sequel were automating so that you can extract it in a very simple manner. You look at the mode 2 apps like Kubernetes, we're taking a 37 page deployment guide and automating it down into three clicks because customers want the speed, they want the deployment cycles, they want the automation associated with that. And it's having a big impact in the sense that these customers are trying to figure out, "Where am I going here in the next three years?" For us, we're seeing massive workloads, whether it's Oracle, Sequel, people deploying on it. And again, there's so much pressure for people to change and constantly disrupt themselves, and that's what we're seeing. And layer that all on top of a lot of legacy apps. So we've got oil and gas customers, and big retailers, and when they show us the dependency maps of their applications, it's incredible. How complex these are, and they want simplicity and speed, and how do they get out of that business of the tangled mess. >> Yeah. Keith, I wonder if you have an example, and you might not be able to use an exact customer, but you mentioned some industries, so here's something I hear at a show like this. Alright, I understand my virtualized environment. I've deployed HCI. I really need to start extending and using public cloud. What are some first steps that you've seen customers as to how they're making that successful? What are some of those important patterns, what works, and where's good places for them to start? >> I look at it almost, when I see some of the automation deployment cycles they have of how they get a VM through the full lifecycle, and behind the scenes they have such massive complexities that it's hindering their ability to create automations. So the first layer is how do you simplify the infrastructure underneath, and it goes back to that dependency map. So again, oil and gas, that's big retailers. When they show us what their infrastructure is, they want to simplify that layer first, and then from there they can build incredible automation that gives them a multiple in the return that is much greater than what they're seeing in today's infrastructure. >> Keith, what's exciting you in the marketplace today? You get to meet with a lot of customers. Just kind of an open-ended. >> So for me, it's I've worked in a lot of big legacy companies, and I've never seen customers that have the passion towards Nutanix. And I think that it's the problems that we're solving for them, the impacts we're having on the business is driving that loyal following. But again, how fast people are either trying to exploit a competitive advantage or protect against a threat, that it's interesting to be right in this, in the epicenter of this big shift that's happening, right? Tectonic plates are shifting in that you've got a massive cloud provider like AWS. You've got a big player like VMware. What's the next generation going to look like? For me it's fascinating to see how these businesses are competing. I look at a customer. I've got a Fortune 500, The CTO's comment to me was, "I'm one app away from disruption." So they're a massive commercial real estate organization, and he's terrified of what could happen next, and he's got to stay way ahead of the curve, and I think that the innovation rate that we're bringing, the support, the infrastructure. I think it's a great place because of how we're serving what we call the underserved customer and having a big impact. >> Yeah. It's interesting. We always poke at the how much are customers just dreading that potential disruption and how much are they excited about what they can do different. You talk about working with traditional vendors in IT for the last decade or so, it's like IT and the business were kind of fighting over it. There's a line one of our hosts here, Alan Cohen, used to use. Actually, the first time I heard it was at the Nutanix show in Miami when we had it on. And he said there's this triangle, and where you want to get people is away from the no and the slow, and get them to go. Do you feel more people are fearful, or more people are excited. Is it a mix of-- >> It is. >> Those for your customers? >> And again, I think that the marketforce is really helping because people there they have to shift to stay competitive, and they're pushing every day to the level of change and how people are embracing change is much faster than it was. Because again, these disruption cycles are much faster and they're coming at customers in a totally different way that they weren't prepared for. >> Alright, Keith, final word from you is how many of theCUBE interviews have you watched in the last bunch of years? >> The content, I mean, it's off the charts. Hundreds and hundreds of hours, I would say. >> Well, hey. Really appreciate you joining us. Keith Moran, not only a long-time watcher, but now a CUBE alumni with the thousands that we've done. So pleasure to talk with ya on-camera, as well as always off-camera. >> Yeah, great stuff, Stu. >> We'll be back with lots more coverage here from VMworld 2018. I'm Stu Miniman, and thanks for watching theCUBE. (upbeat music)

Published Date : Aug 28 2018

SUMMARY :

Brought to you by VMware and its ecosystem partners. Keith, I've talked to you lots about theCUBE, Alright, so let's set the stage here. How's the energy of the show? I think that we're seeing just a lot of discussion in the past and seen you at some of these, seeing how the market's adopting to hyperconverged. but I'd love to hear from you where we are and the agility with hyperconverged is, Yeah, we have some interesting paradoxes in the industry. and I want to get rid of the silos, and adjust for the business market conditions. and the software players, like Nutanix. And they have to do with it with thought and the other 85, when you talk to them next quarter, So I think that people are starting to look at, is talking to customers. and how are you helping? and speed, and how do they get out of that business and you might not be able to use an exact customer, and behind the scenes they have such massive complexities You get to meet with a lot of customers. and he's got to stay way ahead of the curve, and get them to go. and they're pushing every day to Hundreds and hundreds of hours, I would say. So pleasure to talk with ya on-camera, I'm Stu Miniman, and thanks for watching theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Keith MoranPERSON

0.99+

Alan CohenPERSON

0.99+

AmazonORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Michael DellPERSON

0.99+

KeithPERSON

0.99+

20 yearsQUANTITY

0.99+

80%QUANTITY

0.99+

DheerajPERSON

0.99+

MiamiLOCATION

0.99+

VegasLOCATION

0.99+

25%QUANTITY

0.99+

37 pageQUANTITY

0.99+

15%QUANTITY

0.99+

NutanixORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

AWSORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

OracleORGANIZATION

0.99+

ninth yearQUANTITY

0.99+

four yearsQUANTITY

0.99+

VMwareORGANIZATION

0.99+

Las VegasLOCATION

0.99+

85%QUANTITY

0.99+

HundredsQUANTITY

0.99+

next quarterDATE

0.99+

first layerQUANTITY

0.99+

fourthQUANTITY

0.99+

Two setsQUANTITY

0.99+

five chessQUANTITY

0.99+

85QUANTITY

0.99+

firstQUANTITY

0.99+

first timeQUANTITY

0.98+

thousandsQUANTITY

0.98+

oneQUANTITY

0.98+

EMCORGANIZATION

0.98+

2 appsQUANTITY

0.98+

first stepsQUANTITY

0.97+

singleQUANTITY

0.97+

CUBEORGANIZATION

0.97+

one appQUANTITY

0.97+

nine years agoDATE

0.96+

this weekDATE

0.96+

AzureTITLE

0.96+

todayDATE

0.95+

KubernetesTITLE

0.95+

three clicksQUANTITY

0.95+

theCUBEORGANIZATION

0.94+

one conferenceQUANTITY

0.94+

this afternoonDATE

0.93+

VMworldEVENT

0.93+

VMworld 2018EVENT

0.92+

StuPERSON

0.91+

last decadeDATE

0.9+

CTOORGANIZATION

0.89+

this morningDATE

0.89+

thousands of hours per quarterQUANTITY

0.88+

Step oneQUANTITY

0.87+

step twoQUANTITY

0.85+

next five yearsDATE

0.85+

Christine Corbett Moran, Caltech | Open Source Summit 2017


 

>> [Voiceover] Live, from Los Angeles, it's theCUBE. Covering Open Source Summit, North America 2017. Brought to you by the Linux Foundation, and Red Hat.>> Hello everyone, welcome back to our special Cube live coverage of Linux Foundation's Open Source Summit North America here in LA, I'm John Furrier your co-host with Stu Mitiman. Our next guest is Christine Corbett Moran, Ph.D. at astronomy, astrophysics post-doctoral fellow at Caltech.>> That's right, it's a mouthful.>> Welcome to theCUBE, a mouthful but you're also keynoting, gave one of the talks opening day today after Jim Zemlin, on tech and culture and politics.>> That's right, yeah.>> Which I thought was fantastic. A lot of great notes there. Connect the dots for us metaphorically speaking, between Caltech and tech and culture. Why did you take that theme?>> Sure. So I've been involved in programming since I was an undergraduate in college. I studied computer science and always attending more and more conferences. hacker cons, security conferences, that sort of stuff. Very early on what attracted me to technology was not just the nitty gritty nuts and bolts of being able to solve a hard technical problem That was a lot of fun, but also the impact that it could have. So even as I went on a very academic track, I continued to make open source contributions. Really seeking that kind of cultural impact. And it wasn't something that I was real vocal about. Talking about. More talking about the technology side of things than the politics side of things. But in the past few years, I think with the rise of fake news, with the rise of various sorts of societal problems that we're seeing as a consequence of technology, I decided I was going to try to speak more to that end of things. So that we can focus on that as a technology community on what are we going to do with this enormous power that we have.>> And looking at that, a couple of direct questions for you, it was awesome talk. You get a lot in there. You were riffing some good stuff there with Jim as well. But you had made a comment that you originally wanted to be lawyer, you went to MIT, and you sort of got pulled in to the dark side>> That's right, yeah.>> In programming. As a former computer scientist myself, what got the bug take us through that moment. Was it you just started coding and said damn I love coding? What was the moment?>> Sure, so I was always talented in math and science. That was part of the reason why I was admitted to MIT and chose to go there. My late father was a lawyer. I didn't really have an example of a technologist in my life. So, to me, career wise I was going to be a lawyer, but I was interested in technology. What kind of lawyer is that? Patent attorney. So that was my career path. MIT, some sort of engineering, then a patent attorney. I got to MIT and realized I didn't have to be a attorney. I could just do the fun stuff. For some people that's the fun part. For me it ended up being when I took my first computer science class. Something that was fun, that I was good at, and that I really got addicted to kind of the feedback loop of you always have a problem you're trying to solve. It doesn't work, it doesn't work. Then you get it to work and then it's great for a minute and then there's a new problem to solve.>> That's a great story. I think it was very inspirational. A lot of folks of watching will be inspired by that. The other thing that inspired me in the key note was your comment about code and culture.>> [Christine] Yeah.>> I love this notion that code is now at a point where open source is a global phenomenon. You mentioned Earth and space.>> [Christine] Yeah.>> You know and all this sort of space is now Linux based now. But coding can shape culture. Explain what you mean by that, because I think it's one of those things that people might not see happening right now, but it is happening. You starting to see the more inclusionary roles and the communities are changing. Code is not just a tech thing. Explain what you mean by code-shaping culture.>> Well we can already that in terms of changing corporate culture. So, for example, 10 or 15, 20 years ago it might be inconceivable to make contributions that might benefit your corporate competitor. And we all have corporate competitors whether that's a nation, the US having competitors. Whether that's your local sports rivalry. We all have competitors, but open source has really shown that you're relying on things that you as a group, no matter what entity you are, you can't do as much as you can if you share your contributions and benefit from people around the globe. So that's one big way I've seen corporate culture in just every day culture change that people have recognized. Whether it's science, or corporate success, you can't do it alone. There's no lone genius. You really have to do it as a community.>> As a collective too you mentioned some of the ruling class and you kind of referring to not ruling class and open source, but also politics. In that gerrymandering was a word you used. We don't hear that often at conferences, but the idea of having more people exposed creates more data. Talk about what you mean by that because this is interesting. This truly is a democratization opportunity.>> [Christine] Absolutely.>> If not handled properly could go away.>> Yeah, I think am a little, I don't know if there's any Game of Thrones fans out there, but you know at some point this season and previous seasons you know Daenerys Targaryen is there and they're like well if you do this you're going to be the same evil person just new face. I think there's a risk of that in the open source community that if it ends up just being a few people it's the same oligarchy. The same sort of corruption just a different face to it. I don't think open source will go that way just based on the people that I've met in the community. It is something that we actively have to guard against and make sure that that we have as many people contributing to open source so that it's not just a few people who are capable of changing the world and have the power to decide whether it's going to be A or B, but as many people as possible.>> Christine, the kind of monetization of open source is always an interesting topic at these kind of shows. You had an interesting piece talking about young people contributing. You know contributing to open source. It's not just oh yeah do it for free and expect them to do it. Same thing in academia a lot of times. Like oh hey, you're going to do that research and participate and write papers and you know money is got to come somewhere to help fund this. How does kind of the money fit into this whole discussion of open source?>> So I think that's been one of the big successes of open source and we heard that from Jim as well today. It isn't you know some sort of unattainable in terms of achieving value for society. When you do something of value, money is a reward for that. The only question is how to distribute that award effectively to the community. What I see sometimes in the community is there's this myth of everyone in open source getting involved for just the fun of it and there's a huge amount of that. I have done a bunch of contributions for free on the side, but I've always in the end gotten some sort monetary reward for that down the line. And someone talked today about that makes you more employable, et cetera. That has left me with the time and freedom to continue that development. I think it's a risk that as a young person who is going into debt for college to not realize that that monetary reward will come or have it be so out of sync with their current life situation that they're unable to get the time to develop the skills. So, I don't think that money is a primary motivating factor for most people in the community, but certainly as Linus said today as well. When you don't have to worry about money that's when you do the really cool nitty-gritty things that might be a risk that then grow to be that next big project.>> It's an interesting comment you made about the US how they couldn't do potentially Linux if it wasn't in the US. It opens up your eyes and you say hmm we got to do better.>> Yeah.>> And so that brings up the whole notion of the radical comment of open source has always been kind of radical and then you know when I was growing up it was a tier two alternative to the big guys. Now it's tier one. I think the stakes are higher and the thing I'd like you to get your comment or reaction to is how does the community take it to the next level when it's bigger than the United States. You have China saying no more ICOs, no more virtual currencies. That's a potential issue there's a data point of many other things that can be on the global scale. Security, the Equifax hack, identity theft, truth in communities is now an issue, and there's more projects more than ever. So I made a comment on Twitter. Whose shoulders do we stand on in the expression of standing on the shoulders before you.>> [Christine] Yeah, you're standing on a sea.>> So it's a discovery challenge of what do we do and how do we get to the truth. What's your thoughts on that?>> That is a large question. I don't know if I can answer it in the short amount of time. So to break it down a little bit. One of the issues is that we're in this global society and we have different portions trying to regulate what's next in technology. For example, China with the ICOs, et cetera. One of the phrases I used in my talk was that the math was on the people's side and I think it is the case still with a lot of the technologies that are distributed. It's very hard for one particular government, or nation state, to say hey we're going to put this back in the box. It's Pandora's box. It's out in the open. So that's a challenge as well for China and other people, the US. If you have some harmful scenario, how to actually regulate that. I don't know how that's going to work out moving forward. I think it is the case in our community how to go to the next level, which is another point that you brought up. One thing that Linus also brought up today, is one of the reasons why it's great to collaborate with corporations is that often they put kind of the finishing touches on a product to really make it to the level that people can engage with it easily. That kind of on ramping to new technology is very easy and that's because of corporations is very incentivized monetarily to do that, whereas the open source community isn't necessarily incentivized to do that. Moreover, a lot of that work that final 1% of a project for the polish is so much more difficult. It's not the fun technical element. So a lot of the open source contributors, myself included, aren't necessarily very excited about that. However, what we saw in Signal, which is a product that it is a non-profit it is something that isn't necessarily for corporate gain, but that final polish and making it very usable did mean that a lot more people are using the product. So in terms of we as a community I think we have to figure out how keeping our radical governance structure, how to get more and more projects to have that final polish. And that'll really take the whole community.>> Let them benefit from it in a way that they're comfortable with now it's not a proprietary lock and it's more of only 10% of most of the applications are uniquely differentiated with open source. Question kind of philosophic thought experiment, or just philosophical question, I'll say astronomy and astrophysics is an interesting background. You've got a world of connected devices, the IoT, Internet of Things, includes people. So, you know I'm sitting there looking at the stars, oh that's the Apache Project, lots of stars in that one. You have these constellations of communities, if you will out there to kind of use the metaphor. And then you got astrophysics, the Milky Way, a lot of gravity around me. You almost take a metaphor talks to how communities work. So let's get your thoughts. How does astrophysics and astronomy relate to some of the dynamics in how self-governing things work?>> I'd love to see that visualization by the way, of the Apache Project and the Milky Way,>> [John] Which one's the Big Dipper?>> That sounds gorgeous, you guys should definitely pursue that.>> John you're going to find something at Caltech, you know our next fellowship.>> Argued who always did the Big Dipper or not, but you know.>> I think some of the challenges are similar in the sciences in that people initially get into it because it's something they're curious about. It's something they love and that's an innate human instinct. People have always gazed up at the stars. People have always wondered how things work. How your computer works? You know let me figure that out. That said, ultimately, they need to eat and feed their families and that sort of stuff. And we often see in the astrophysics community incredibly talented people at some stage in their career leaving for some sort of corporate job. And retaining talent is difficult because a lot of people are forced to move around the globe, to different centers in academia, and that lifestyle can be difficult. The pay often isn't as rewarding as it could be. So to make some sort of parallel between that community and the open source community, retaining talent in open source, if you want people to not necessarily work in open source under Microsoft, under a certain corporation only, but to kind of work more generally. That is something that ultimately, we have to distribute the rewards from that to the community.>> It's kind of interesting. The way I always thought the role of the corporation and open source was always trying to change the game. You know, you mentioned gerrymandering. The old model was we got to influence a slow that down so that we can control it.>> So John we've had people around the globe and even that have made it to space on theCUBE before. I don't know that we've ever had anybody that's been to the South Pole before on theCUBE. So Christine, maybe tell us a little about how's technology you know working in the South Pole and what can you tell our audience about it?>> Sure. So I spent 10 and half months at the South Pole. Not just Antarctica, but literally the middle of the continent, the geographic South Pole. There the US has a research base that houses up to about 200 people during the austral summer months when it's warm that is maybe minus 20 degrees or so. During the cold winter months, it gets completely dark and planes have a very difficult time coming in and out so they close off the station to a skeleton crew to keep the science experiments down there running. There are several astrophysical experiments, several telescopes, as well as many research projects, and that skeleton crew was what I was a part of. 46 people and I was tasked with running the telescope down there and looking at some of the echoes of the Big Bang. And I was basically a telescope doctor. So I was on call much like a sys-admin might be. I was responsible for the kind of IT support for the telescope, but also just physical, something physically broke, kind of replacing that. And that meant I could be woken up in the middle of night because of some kind of package update issue or anything like that and I'd have to hike out in minus a 100 degrees to fix this, sometimes. Oftentimes, there was IT support on the station so we did have internet running to the telescope which was about a kilometer away. It took me anywhere from 20 to 30 minutes to walk out there. So if it didn't require on-site support sometimes I could do the work in my pajamas to kind of fix that. So it was a kind of traditional computer support role in a very untraditional environment.>> That's an IoT device isn't it.>> Yeah.>> Stu and I are always interested in the younger generation as we both have kids who are growing up in this new digital culture. What's your feeling in terms of the younger generation that are coming up because people going to school now, digital natives, courseware, online isn't always the answer, people learn differently. Your thoughts on onboarding the younger generation and for the inclusion piece which is super important whether it's women in tech and/or just people just getting more people into computer science. What are some of things that you see happening that excite you and what are some of the things that get you concerned?>> Yeah, so I had the chance I mentioned a little in my talk to teach 12 high school students how to computer program this summer. Some of them have been through computer programming classes at their colleges, or at their high schools, some not. What I saw when I was in high school was a huge variety of competence in the high school teachers that I had. Some were amazing and inspiring. Others because in the US you need a degree in education, but not necessarily a degree in the field that you're teaching. I think that there's a huge lack of people capable of teaching the next generation who are working at the high school level. It's not that there's a huge lack of people who are capable, kind of anyone at this conference could sit down and help a high schooler get motivated and self-study. So I think teacher training is something that I'm concerned about. In terms of things I'm very excited about, we're not quite there yet with the online courses, but the ability to acquire that knowledge online is very, very exciting. In addition, I think we're waking up as a society to the fact that four year college isn't necessarily the best preparation for every single field. For some fields it's very useful. For other fields, particularly engineering, maybe even computer science engineering, apprenticeships or practical experience could be as valuable if not more valuable for less expense. So I'm excited about new initiatives, these coding bootcamps. I think there's a difficulty in regulation in that you don't know for a new coding bootcamp. Is it just trying to get people's money? Is it really going to help their careers? So we're in a very frothy time there, but I think ultimately how it will shake out is it's going to help people enter technology jobs quicker.>> You know there's a percentage of jobs that aren't even invented yet. So there's AI. You see self-driving cars. These things are easy indicators that hey society's changing.>> Yeah. And it's also good to be helpful for a professionals like us, older professionals who want to keep up in this ever growing field and I don't necessarily want to go back for a second Ph.D, but I'll absolutely take an online course in something I didn't see in my undergrad.>> I mean you can get immersed in anything these days online. It's great, there's a lot of community behind it. Christine thanks so much for sharing. Congratulations on a great keynote. Thanks for spending some time with us.>> [Christine] Yeah, thanks for having me.>> It's theCUBE live coverage here in LA for Open Source Summit in North America. I'm John Furrier, Stu Miniman, and we'll be right back with more live coverage after this short break.

Published Date : Sep 11 2017

SUMMARY :

Brought to you by the Linux Foundation, and Red Hat. Source Summit North America here in LA, I'm John Furrier your co-host with Stu Mitiman. Welcome to theCUBE, a mouthful but you're also keynoting, gave one of the talks opening Why did you take that theme? So that we can focus on that as a technology community on what are we going to do with But you had made a comment that you originally wanted to be lawyer, you went to MIT, and Was it you just started coding and said damn I love coding? the feedback loop of you always have a problem you're trying to solve. I think it was very inspirational. I love this notion that code is now at a point where open source is a global phenomenon. You starting to see the more inclusionary roles and the communities are changing. that you as a group, no matter what entity you are, you can't do as much as you can if In that gerrymandering was a word you used. is there and they're like well if you do this you're going to be the same evil person just How does kind of the money fit into this whole discussion of open source? I have done a bunch of contributions for free on the side, but I've always in the end gotten It's an interesting comment you made about the US how they couldn't do potentially Linux I think the stakes are higher and the thing I'd like you to get your comment or reaction So it's a discovery challenge of what do we do and how do we get to the truth. So a lot of the open source contributors, myself included, aren't necessarily very excited lock and it's more of only 10% of most of the applications are uniquely differentiated the globe, to different centers in academia, and that lifestyle can be difficult. You know, you mentioned gerrymandering. So Christine, maybe tell us a little about how's technology you know working in the South So if it didn't require on-site support sometimes I could do the work in my pajamas to kind that get you concerned? Others because in the US you need a degree in education, but not necessarily a degree You know there's a percentage of jobs that aren't even invented yet. And it's also good to be helpful for a professionals like us, older professionals who want to keep I mean you can get immersed in anything these days online. I'm John Furrier, Stu Miniman, and we'll be right back with more live coverage after this

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JimPERSON

0.99+

ChristinePERSON

0.99+

Stu MitimanPERSON

0.99+

South PoleLOCATION

0.99+

Stu MinimanPERSON

0.99+

Jim ZemlinPERSON

0.99+

Linux FoundationORGANIZATION

0.99+

South PoleLOCATION

0.99+

AntarcticaLOCATION

0.99+

Red HatORGANIZATION

0.99+

John FurrierPERSON

0.99+

LALOCATION

0.99+

LinusPERSON

0.99+

USLOCATION

0.99+

Los AngelesLOCATION

0.99+

Christine Corbett MoranPERSON

0.99+

JohnPERSON

0.99+

CaltechORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Game of ThronesTITLE

0.99+

PandoraORGANIZATION

0.99+

MITORGANIZATION

0.99+

OneQUANTITY

0.99+

todayDATE

0.99+

EarthLOCATION

0.99+

EquifaxORGANIZATION

0.99+

Big BangEVENT

0.99+

oneQUANTITY

0.99+

20QUANTITY

0.99+

firstQUANTITY

0.99+

StuPERSON

0.99+

four yearQUANTITY

0.99+

North AmericaLOCATION

0.99+

minus 20 degreesQUANTITY

0.99+

30 minutesQUANTITY

0.98+

10%QUANTITY

0.97+

1%QUANTITY

0.97+

Daenerys TargaryenPERSON

0.97+

United StatesLOCATION

0.97+

secondQUANTITY

0.97+

bothQUANTITY

0.96+

about a kilometerQUANTITY

0.95+

Open Source Summit 2017EVENT

0.95+

about 200 peopleQUANTITY

0.95+

TwitterORGANIZATION

0.95+

tier oneQUANTITY

0.94+

LinuxTITLE

0.93+

10DATE

0.92+

10 and half monthsQUANTITY

0.92+

15DATE

0.91+

Open Source SummitEVENT

0.91+

46 peopleQUANTITY

0.9+

12 high school studentsQUANTITY

0.9+

One thingQUANTITY

0.89+

Milky WayLOCATION

0.89+

ChinaORGANIZATION

0.88+

Open Source Summit North AmericaEVENT

0.87+

this summerDATE

0.85+

2017EVENT

0.84+

tier twoQUANTITY

0.79+

20 years agoDATE

0.79+

a minuteQUANTITY

0.79+

single fieldQUANTITY

0.78+

one big wayQUANTITY

0.77+

Andrew Elvish & Christian Morin | CUBE Conversation


 

>>Welcome to this Q conversation. I'm Dave Nicholson. And today we are joined by Andrew ish and Chris Y Moran, both from Gentech. Andrew is the vice president of marketing. Chris John is the, uh, vice president of product engineering, gentlemen, welcome to the cube. >>Welcome David. Thanks for having us. Hey, >>David, thanks for having us on your show. >>Absolutely. Give us just, let's start out by, uh, giving us some background on, on Gentech. How would you describe to a relative coming over and asking you what you do for a living? What Genotech does? >>Well, I'll take a shot at that. I'm the marketing guy, David, but, uh, I think the best way to think of Genotech first and foremost is a software company. We, uh, we do a really good job of bringing together all of that physical security sensor network onto a platform. So people can make sense out of the data that comes from video surveillance, cameras, access control, reads, license plate recognition, cameras, and from a whole host of different sensors that can live out there in the world. Temperature, sensors, microwaves, all sorts of stuff. So we're a company that's really good at making sense of complex data from sensors. That's kind of, I think that's kind of what we >>Do and, and, and we focus specifically on like larger, complex, critical infrastructure type projects, whether they be airports, uh, large enterprise campuses and whatnot. So we're not necessarily your well known consumer type brand. >>So you mentioned physical, you mentioned physical security. Um, what about the intersection between physical security and, and cyber security who are, who are the folks that you work with directly as customers and where do they, where do they sit in that spectrum of cyber versus physical? >>So we predominantly work with physical security professionals and, uh, they typically are responsible for the security of a facility, a campus, a certain area. And we'll talk about security cameras. We'll talk about access control devices with card readers and, and, and locks, uh, intrusion detection, systems, fences, and whatnot. So anything that you would see that physically protects a facility. And, uh, what's actually quite interesting is that, you know, cybersecurity, we, we hear about cybersecurity and depressed all the time, right. And who's been hacked this week is typically like, uh, a headline that we're all like looking at, uh, we're looking for in the news. Um, so we actually do quite a lot of, I would say education work with the physical security professional as it pertains to the importance of cyber security in the physical security system, which in and of itself is an information system. Right. Um, so you don't wanna put a system in place to protect your facility that is full of cybersecurity holes because at that point, you know, your physical security systems becomes, uh, your weakest link in your security chain. Uh, the way I like to say it is, you know, there's no such thing as physical security versus cyber security, it's just security. Uh, really just the concept or a context of what threat vectors does this specific control or mechanism actually protects against >>Those seem to be words to live by, but are, are they aspirational? I mean, do you, do you see gaps today, uh, between the worlds of cyber and physical security? >>I mean, for sure, right? Like we, physical security evolved from a different part of the enterprise, uh, structure then did it or cyber security. So they, they come at things from a different angle. Um, so, you know, for a long time, the two worlds didn't really meet. Uh, but now what we're seeing, I would say in the last 10 years, Christian, about that, there's a huge convergence of cyber security with physical security. It, so information technology with operation technology really coming together quite tightly in the industry. And I think leading companies and sophisticated CISOs are really giving a big pitcher thought to what's going on across the organization, not just in cybersecurity. >>Yeah. I think we've come a long way from CCTV, which stands for closed circuit television, uh, which was typically like literally separated from the rest of the organization, often managed by the facilities, uh, part of any organization. Uh, and now we're seeing more and more organizations where this is converging together, but there's still ways to go, uh, to get this proper convergence in place. But, you know, we're getting there. >>How, how does Gentech approach its addressable market? Is this, is this a direct model? Uh, do you work with partners? What, what does that look like in your world? >>Well, we're a, we're a partner led company Gentech, you know, model on many friends is all about our partners. So we go to market through our integration channel. So we work with really great integrators all around the world. Um, and they bring together our software platform, which is usually forms the nucleus of sort of any O T security network. Uh, they bring that together with all sorts of other things, such as the sensor network, the cabling, all of that. It's a very complex multiplayer world. And also in that, you know, partnership ecosystem and Christian, this is more your world. We have to build deep integrations with all of these companies that build sensors, whether that's access, Bosch, Canon, uh, Hanoi, you know, we're, we're really working with them them. And of course with our storage and server partners >>Like Dell >>Mm-hmm <affirmative>. Yeah. So we have, we have like hundreds of, I would say ecosystem partners, right? Camera manufacturers, uh, access control reader, controller manufacturers, intrusion detection, manufacturers, late LIDAR radar, you know, the list goes on and on and on. And, and basically we bring this all together. The system integrator really is going to pick best of breed based on a specific end customer's I would say requirements and then roll out the system. According >>That's very interesting, you know, at, at Silicon angle on the cube, um, we've initiated coverage of this subject of the question, does hardware still matter? And, and you know, of course we're, we're approaching that primarily from kind of the traditional it, uh, perspective, but you said at the outset, you you're a software company mm-hmm <affirmative>, but clearly correct me if I'm wrong, your software depends upon all of these hardware components and as they improve, I imagine you can do things that maybe you couldn't do before those improvements. The first thing that comes to mind is just camera resolution. Um, you know, sort of default today is 4k, uh, go back five years, 10 years. I imagine that some of the sophisticated things that you can do today weren't possible because the hardware was lagging. Is that, is that a, is that a fair assessment? >>Oh, that's a fair assessment. Just going back 20 years ago. Uh, just VGA resolution on a security camera was like out of this world resolution, uh, even more so if it was like full motion, 30 images per second. So you typically have like, probably even like three 20 by 2 44 images per second, like really lousy resolution, just from a resolution perspective, the, the imagery sensors have, have really increased in terms of what they can provide, but even more so is the horsepower of these devices. Mm-hmm, <affirmative> now it's not uncommon to have, uh, pretty, pretty powerful Silicon in those devices now that can actually run machine learning models and you can actually do computer vision and analytics straight into the device. Uh, as you know, in some of the initial years, you would actually run this on kind of racks of servers in this data center. >>Now you can actually distribute those workloads across on the edge. And what we're seeing is, you know, the power that the edge provides is us as a software company, we have the opportunity to actually bring our workloads where it makes most sense. And in some cases we'll actually also have a ground station kind of in between the sensors and potentially the cloud, uh, because the use case just, uh, calls for it. Uh, just looking from a, from a, from a video security perspective, you know, when you have hundreds or thousands of cameras on an airport, it's just not economical or not even feasible in some cases to bring all that footage to the cloud even more so when 99% of that footage is never watched by anybody. So what's the point. Uh, so you just wanna provide the clips that, that actually do matter to the cloud and for longer term retention, you also want to be able to have sometimes more resilient systems, right? So what happens if the cloud disconnects, you can stop the operations of that airport or stop that operations of that, of that prison, right? It needs to continue to operate and therefore you need higher levels of resiliency. So you do need that hardware. So it's really a question of what it calls for and having the right size type of hardware so that you don't overly complexify the installation, uh, and, and actually get the job done. Are >>You comparing airports to prisons >>Christian? Well, nowadays they're pretty much prepared <laugh>, >>But I mean, this is exactly it, David, but I mean, this payload, especially from the video surveillance, like the, the workload that's going through to the, these ground stations really demands flexible deployment, right? So like we think about it as edge to cloud and, uh, you know, that's, what's really getting us excited because it, it gives so much more flexibility to the, you know, the C I S O and security professionals in places like prisons, airports, also large scale retail and banking, and, uh, other places, >>Universities, the list goes on and on and on, and >>On the flexibility of deployment just becomes so much easier because these are lightweight, you usually word deploying on a Linux box and it can connect seamlessly with like large scale head end storage or directly to, uh, cloud providers. It's, it's really a sophisticated new way of looking at how you architect out these networks. >>You've just given, you've just given a textbook example of why, uh, folks in the it world have been talking about hybrid cloud for, for, for such a long time, and some have scoffed at the idea, but you just, you just present a perfect use case for that combination of leveraging cloud with, uh, on-premises hardware and tracking with hardware advances, um, uh, on, on the subject of camera resolution. I don't know if you've seen this meme, but there's a great one with the, the first deep field image from the, from the, I was gonna say humble, the James web space telescope, uh, in contrast with a security camera F photo, which is really blurry of someone in your driveway <laugh>, uh, which is, which is, uh, sort of funny. The reality though, is I've seen some of these latest generation security cameras, uh, you know, beyond 4k resolution. And it's amazing just, you know, the kind of detail that you can get into, but talk about what what's, what's exciting in your world. What's, what's Gentech doing, you know, over the next, uh, several quarters that's, uh, particularly interesting what's on the leading edge of your, of your world. >>Well, I think right now what's on the leading edges is being driven by our end users. So the, so the, the companies, the governments, the organizations that are implementing our software into these complex IOT networks, they wanna do more with that data, right? It's not just about, you know, monitoring surveillance. It's not just about opening and closing doors or reading license plates, but more and more we're seeing organizations taking this bigger picture view of the data that is generated in their organizations and how they can take value out of existing investments that they've made in sensor networks, uh, and to take greater insight into operations, whether that can be asset utilization, customer service efficiency, it becomes about way more than just, you know, either physical security or cyber security. It becomes really an enterprise shaping O T network. And to us, that is like a massive, massive opportunity, uh, in the, in the industry today. >>Yeah. >>Now you're you're you're oh, go ahead. I'm sorry, Christian, go ahead. Yeah, >>No, it's, it's, it's good. But, you know, going back to a comment that I mentioned earlier about how it was initially siloed and now, you know, we're kind of discovering this diamond in the rough, in terms of all these sensors that are out there, which a lot of organizations didn't even know existed or didn't even know they had. And how can you bring that on kind of across the organizations for non-security related applications? So that's kind of one very interesting kind of, uh, direction that we're, that we've been undergoing for the last few years, and then, you know, security, uh, and physical security for that matter often is kind of the bastard step child. Doesn't get all the budget and, you know, there's lots of opportunities for, to help them increase and improve their operations, uh, as, as Andrew pointed out and really help bringing them into the 21st century. >>Yeah. >>And you're, you're headquartered in Montreal, correct? >>Yes. >>Yeah. So, so the reason, the reason why that's interesting is because, um, and, you know, correct me if I'm, if I'm off base here, but, but you're sort of the bridge between north America and Europe. Uh, and, and, uh, and so you sit at that nexus where, uh, you probably have more of an awareness of, uh, trends in security, which overlap with issues of privacy. Yeah. Where Europe has led in a lot of cases. Um, some of those European like rules are coming to north America. Um, is there anything in your world that is particularly relevant or that concerns you about north America catching up, um, or, or do those worlds of privacy and security not overlap as much as I might think they do? >>Ah, thank you. Any >>Thoughts? >>Absolutely not. No, no. <laugh> joking aside. This is, this is, this is, >>Leave me hanging >><laugh>, uh, this is actually core to our DNA. And, and, and we, we often say out loud how, like Europe has really paved the way for a different way, uh, of, of looking at privacy from a security setting, right. And they're not mutually exclusive. Right. You can have high security all while protecting people's privacy. And it's all of a question of ensuring that, you know, how you kind of, I would say, uh, ethically, uh, use said technology and we can actually put some safeguards in it. So to minimize the likelihood of there being abuse, right? There's, there's something that we do, which we call the privacy protector, which, you know, for all intents and purposes, it's not that complex of an idea. It's, it's really the concept of you have security cameras in a public space or a more sensitive location. And you have your security guards that can actually watch that footage when nothing really happens. >>You, you want to protect people's privacy in these situations. Uh, however, you still want to be able to provide a view to the security guard so they can still make out that, you know, there there's actually people walking around or there's a fight that broke out. And in the likelihood that something did happen, then you can actually view the overall footage. So, and with, with the details that the cameras that you had, you know, the super high mega pixel cameras that you have will provide. So we blur the images of the individuals. We still keep the background. And once you have the proper authorization, and this is based on the governance of the organization, so it can be a four I principle where it could be the chief security officer with the chief privacy officer need to authorize this footage to be kind of UN blurred. And at that point you can UN blur the footage and provide it to law enforcement for the investigation, for example. >>Excellent. I've got Andrew, if you wanted, then I, then I'm. Well, so I, I've a, I have a final question for you. And this comes out of a game that, uh, some friends and I, some friends of mine and I devised over the years, primarily this is played with strangers that you meet on airplanes as you're traveling. But the question you ask is in your career, what you're doing now and over the course of your careers, um, what's the most shocking thing <laugh> that people would learn from what, you know, what do you, what do you find? What's the craziest thing. When you go in to look at these environments that you see that people should maybe address, um, well, go ahead and start with you, Andrew. >>I, >>The most shocking thing you see every day in your world, >>It's very interesting. The most shocking thing I think we've seen in the industry is how willing, uh, some professionals are in our industry to install any kind of device on their networks without actually taking the time to do due diligence on what kind of security risks these devices can have on a network. Because I think a lot of people don't think about a security camera as first and foremost, a computer, and it's a computer with an IP address on a network, and it has a visual sensor, but we always get pulled in by that visual sensor. Right. And it's like, oh, it's a camera. No, it's a computer. And, you know, over the last, I would say eight years in the industry, we've spent a lot of time trying to sensitize the industry to the fact that, you know, you can't just put devices on your, your network without understanding the supply chain, without understanding the motives behind who's put these together and their track record of cybersecurity. So probably the weirdest thing that I've seen in my, um, you know, career in this industry is just the willingness of people not to take time to do due diligence before they hook something up on onto their corporate network where, you know, data can start leaking out, being exfiltrated by those devices and malevolent actors behind them. So gotta ask questions about what you put on your network. >>Christian, did he steal your, did he steal your thunder? Do you have any other, any other thoughts? >>Well, so first of all, there's things I just cannot say on TV. Okay. But you can't OK. >>You can't. Yeah, yeah, yeah. Saying that you're shocked that not everyone speaks French doesn't count. Okay. Let's just get, let's get past that, but, but go, but yeah, go ahead. Any thoughts? >>So, uh, you know, I, I would say something that I I've seen a lot and, and specifically with customers sometimes that were starting to shop for a new system is you'd be surprised by first of all, there's a camera, the likelihood of actually somebody watching it live while you're actually in the field of view of that camera is close to Neil first and foremost, second, there's also a good likelihood that that camera doesn't even record. It actually is not even functional. And, and I would say a lot of organizations often realize that, you know, that camera was not functioning when they actually knew do need to get the footage. And we've seen this with some large incidents, uh, very, uh, bad incidents that happened, uh, whether in the UK or in Boston or whatnot, uh, when they're, when law enforcement is trying to get footage and they realize that a lot of cameras actually weren't recording and, and, and goes back to Andrew's point in terms of the selection process of these devices. >>Yeah. Image resolution is important, like, because you need an, an image that it actually usable so that you can actually do something with it forensically, but you know, these cameras need to be recorded by a reliable system and, and should something happen with the device. And there's always going to be something, you know, power, uh, uh, a bird ate the lens. I don't know what it might be, or squirrel ate the wire. Um, and the camera doesn't work anymore. So you have to replace it. So having a system that provides, you know, you with like health insights in terms of, of, of if it's working or not is, is actually quite important. It needs to be managed like any it environment, right? Yeah. You have all these devices and if one of them goes down, you need to manage it. And most organizations it's fire and forget, I sign a purchase order. I bought my security system, I installed it. It's done. We move on to the next one and seven years later, something bad happens. And like, uhoh, >>It's not a CCTV system. It's a network. Yeah. Life cycle management counts. >>Well, uh, I have to say on that, uh, I'm gonna be doing some research on Canadian birds and squirrels. I, I had no idea, >>Very hungry. >>Andrew, Chris, John, thank you so much. Great conversation, uh, from all of us here at the cube. Thanks for tuning in. Stay tuned. The cube from Silicon angle media, we are your leader in tech coverage.

Published Date : Jul 29 2022

SUMMARY :

Andrew is the vice president of marketing. Thanks for having us. How would you describe to a relative coming over and asking you what you I'm the marketing guy, David, but, uh, I think the best way to think of So we're not necessarily your well known consumer type brand. So you mentioned physical, you mentioned physical security. Uh, the way I like to say it is, you know, so, you know, for a long time, the two worlds didn't really meet. But, you know, we're getting there. And also in that, you know, partnership ecosystem and you know, the list goes on and on and on. I imagine that some of the sophisticated things that you can do today weren't possible Uh, as you know, in some of the initial years, from a video security perspective, you know, when you have hundreds or thousands of cameras on an It's, it's really a sophisticated new way of looking at how you architect uh, you know, beyond 4k resolution. It's not just about, you know, Yeah, Doesn't get all the budget and, you know, there's lots of opportunities for, to help them increase Uh, and, and, uh, and so you sit at that nexus where, Ah, thank you. this is, this is, It's, it's really the concept of you have security cameras in a public space or a And in the likelihood that something did happen, then you can actually view the overall footage. what, you know, what do you, what do you find? to sensitize the industry to the fact that, you know, you can't just put devices But you can't OK. Saying that you're shocked that not everyone speaks French doesn't count. So, uh, you know, I, I would say something that I I've seen a lot and, and specifically with customers So having a system that provides, you know, you with like health insights It's not a CCTV system. Well, uh, I have to say on that, uh, I'm gonna be doing some research Andrew, Chris, John, thank you so much.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave NicholsonPERSON

0.99+

DavidPERSON

0.99+

AndrewPERSON

0.99+

CanonORGANIZATION

0.99+

BoschORGANIZATION

0.99+

GenotechORGANIZATION

0.99+

GentechORGANIZATION

0.99+

HanoiORGANIZATION

0.99+

Chris JohnPERSON

0.99+

MontrealLOCATION

0.99+

Chris Y MoranPERSON

0.99+

hundredsQUANTITY

0.99+

99%QUANTITY

0.99+

BostonLOCATION

0.99+

Christian MorinPERSON

0.99+

UKLOCATION

0.99+

21st centuryDATE

0.99+

eight yearsQUANTITY

0.99+

Andrew ElvishPERSON

0.99+

ChrisPERSON

0.99+

2QUANTITY

0.99+

Andrew ishPERSON

0.99+

EuropeLOCATION

0.99+

north AmericaLOCATION

0.99+

NeilPERSON

0.99+

north AmericaLOCATION

0.99+

DellORGANIZATION

0.99+

20QUANTITY

0.99+

10 yearsQUANTITY

0.99+

this weekDATE

0.98+

bothQUANTITY

0.98+

todayDATE

0.98+

seven years laterDATE

0.98+

JohnPERSON

0.98+

two worldsQUANTITY

0.97+

secondQUANTITY

0.96+

five yearsQUANTITY

0.95+

20 years agoDATE

0.94+

firstQUANTITY

0.93+

nexusORGANIZATION

0.92+

4kQUANTITY

0.92+

ChristianORGANIZATION

0.91+

44 images per secondQUANTITY

0.91+

LinuxTITLE

0.9+

first thingQUANTITY

0.89+

FrenchOTHER

0.89+

oneQUANTITY

0.87+

JamesORGANIZATION

0.85+

last 10 yearsDATE

0.83+

threeQUANTITY

0.83+

30 images per secondQUANTITY

0.81+

thousands of camerasQUANTITY

0.81+

first deepQUANTITY

0.8+

EuropeanOTHER

0.79+

CanadianOTHER

0.78+

fourQUANTITY

0.77+

EuropeORGANIZATION

0.68+

SiliconORGANIZATION

0.64+

lastDATE

0.61+

OORGANIZATION

0.56+

yearsDATE

0.5+

oneDATE

0.38+

Incompressible Encodings


 

>> Hello, my name is Daniel Wichs, I'm a senior scientist at NTT research and a professor at Northeastern University. Today I want to tell you about incompressible encodings. This is a recent work from Crypto 2020 and it's a joint work with Tal Moran. So let me start with a question. How much space would it take to store all of Wikipedia? So it turns out that you can download Wikipedia for offline use and some reasonable version of it is about 50 gigabytes in size. So as you'd expect, it's a lot of data, it's quite large. But there's another way to store Wikipedia which is just to store the link www.wikipedia.org that only takes 17 bytes. And for all intents and purposes as long as you have a connection to the internet storing this link is as good as storing the Wikipedia data. You can access a Wikipedia with this link whenever you want. And the point I want to make is that when it comes to public data like Wikipedia, even though the data is huge, it's trivial to compress it down because it is public just by storing a small link to it. And the question for this talk is, can we come up with an incompressible representation of public data like Wikipedia? In other words can we take Wikipedia and represent it in some way such that this representation requires the full 50 gigabytes of storage store, even for someone who has the link to the underlying Wikipedia data and can get the underlying data for free. So let me actually tell you what this means in more detail. So this is the notion of incompressible encodings that we'll focus on in this work. So incompressible encoding consists of an encoding algorithm and a decoding algorithm, these are public algorithms. There's no secret key. Anybody can run these algorithms. The encoding algorithm takes some data m, let's say the Wikipedia data and encodes it in some probabilistic randomized way to derive a codeword c. And the codeword c, you can think of it as just an alternate representation of the Wikipedia data. Anybody can come and decode the codeword to recover the underlying data m. And the correctness property we want here is that no matter what data you start with, if you encode the data m and then decode it, you get back the original data m. This should hold with probably one over the randomness of the encoding procedure. Now for security, we want to consider an adversary that knows the underlying data m, let's say has a link to Wikipedia and can access the Wikipedia data for free does not pay for storing it. The goal of the adversary is to compress this codeword that we created this new randomized representation of the Wikipedia data. So the adversary consists of two procedures a compression procedure and a decompression procedure. The compression procedure takes its input the codeword c and output some smaller compressed value w and the decompression procedure takes w and its goal is to recover the codeword c. And a security property says that no efficient adversary should be able to succeed in this game with better than negligible property. So there are two parameters of interest in this problem. One is the codeword size, which we'll denote by alpha, and ideally we want the codeword size alpha to be as close as possible to the original data size. In other words we don't want the encoding to add too much overhead to the data. The second parameter is the incompressibility parameter beta and that tells us how much space, how much storage and adversary needs to use in order to store the codeword. And ideally, we want this beta to be as close as possible to the codeword size alpha, which should also be as close as possible to the original data size. So I want to mention that there is a trivial construction of incompressible encodings that achieves very poor parameters. So the trivial construction is just take the data m and add some randomness, concatenate some randomness to it and store the original data m plus the concatenated randomness as the codeword. And now even an adversary that knows the underlying data m cannot compress the randomness. So the incompressibility, so we ensure that this construction is incompressible with incompressibility parameter beta that just corresponds to the size of this randomness we added. So essentially the adversary cannot compress the red part of the codeword. So this gets us a scheme where alpha the size of the codeword, is the original data size m plus the incompressible parameter beta. And it turns out that you cannot do better than this information theoretically. So this is not what we want for this we want to focus on what I will call good incompressible encodings. So here, the codeword size should be as close as possible to the data size, should be just one plus little o one of the data size. And the incompressibility should be as essential as large as the entire codeword the adversary cannot compress the codeword almost at all, the incompressible parameter beta is one minus little o one of the data size or the codeword size. And in essence, what this means is that we're somehow want to take the randomness of the encoding procedure and spread it around in some clever way throughout the codeword in such a way that's impossible for the adversary to separate out the randomness and the data, and only store the randomness and rely on the fact that it can get the data for free. We want to make sure it's impossible that adversary accesses essentially this entire code word which contains both the randomness and data and some carefully intertwined way and cannot compress it down using the fact that it knows the data parts. So this notion of incompressible encodings was defined actually in a prior work of Damgard-Ganesh and Orlandi from crypto 2019. They defined a variant of this notion, they had a different name for it. As a tool or a building block for a more complex cryptographic primitive that they called Proofs of Replicated Storage. And I'm not going to talk about what these are. But in this context of constructing these Proofs of Replicated Storage, they also constructed incompressible encodings albeit with some major caveats. So in particular, their construction relied on the random Oracle models, the heuristic construction and it was not known whether you could do this in the standard model, the encoding and decoding time of the construction was quadratic in the data size. And in particular, here we want to apply this, we want to use these types of incompressible encodings on fairly large data like Wikipedia data, 50 gigabytes in size. So quadratic runtime on such huge data is really impractical. And lastly the proof of security for their construction was flawed or someone incompleted, didn't consider general adversaries. And the slope was actually also noticed by concurrent work of Garg-Lu and Waters. And they managed to give a fixed proof for this construction but this required actually quite a lot of effort. It was a highly non-trivial and subtle proof to proof the original construction of Damgard-Ganesh and Orlandi secure. So in our work, we give a new construction of these types of incompressible encodings, our construction already achieved some form of security in the Common Reference String Model come Random String Model without the use of Random Oracles. We have a linear encoding time, linear in the data size. So we get rid of the quadratic and we have a fairly simple proof of security. In fact, I'm hoping to show you a slightly simplified form of it and the stock. We also give some lower bounds and negative results showing that our construction is optimal in some aspects and lastly we give a new application of this notion of incompressible encodings to something called big-key cryptography. And so I want to tell you about this application, hopefully it'll give you some intuition about why incompressible encodings are interesting and useful, and also some intuition about what their real goal is or what it is that they're trying to achieve. So, the application of big-key cryptography is concerned with the problem of system compromise. So, a computer system can become compromised either because the user downloads a malware or remote attacker manages to hack into it. And when this happens, the remote attacker gains control over the system and any cryptographic keys that are stored on the system can easily be exfiltrated or just downloaded out of the system by the attacker and therefore, any security that these cryptographic keys were meant to provide is going to be completely lost. And the idea of big-key cryptography is to mitigate against such attacks by making the secret keys intentionally huge on the order of many gigabytes to even terabytes. And the idea is that by having a very large secret key it would make it harder to exfiltrate such a secret key. Either because the adversary's bandwidth to the compromised system is just not large enough to exfiltrate such a large key or because it might not be cost-effective to have to download so much data of compromised system and store so much data to be able to use the key in the future, especially if the attacker wants to do this on some mass scale or because the system might have some other mechanisms let's say firewall that would detect such large amounts of leakage out of the compromised system and block it in some way. So there's been a lot of work on this idea building big-key crypto systems. So crypto systems where the secret key can be set arbitrarily huge and these crypto systems should testify two goals. So one is security, security should hold even if a large amount of data about the secret key is out, as long as it's not the entire secret key. So when you have an attacker download let's say 90% of the data of the secret key, the security of the system should be preserved. And the second property is that even though the secret key of the system can be huge, many gigabytes or terabytes, we still want the crypto system to remain efficient even though the secret is huge. And particularly this means that the crypto system can even read the entire secret key during each cryptographic operation because that would already be too inefficient. So it can only read some small number of bits of the secret key during each operation, then it performs. And so there's been a lot of work constructing these types of crypto systems but one common problem for all these works is that they require the user to waste a lot of their storage the storage on their computer in storing this huge secret key which is useless for any other purpose, other than providing security. And users might not want to do this. So that's the problem that we address here. And the new idea in our work is let's make the secret key useful instead of just having a secret key with some useless, random data that the cryptographic scheme picks, let's have a secret key that stores let's say the Wikipedia data at which a user might want to store in their system anyway or the user's movie collection or music collection et cetera and the data that the user would want to store on their system. Anyway, we want to convert it. We want to use that as the secret key. Now we think about this for a few seconds. Well, is it a good idea to use Wikipedia as a secret key? No, that sounds like a terrible idea. Wikipedia is not secret, it's public, it's online, Anyone can access it whenever they want. So it's not what we're suggesting. We're suggesting to use an incompressible encoding of Wikipedia as a secret key. Now, even though Wikipedia is public the incompressible encoding is randomized. And therefore the accuracy does not know the value of this incompressible encoding. Moreover, because it's incompressible in order for the adversary to steal, to exfiltrate the entire secret key, it would have to download a very large amount of data out of the compromised system. So there's some hope that this could provide security and we show how to build public encryption schemes and the setting that make use of a secret key which is an incompressible coding of some useful data like Wikipedia. So the secret key is an incompressible encoding of useful data and security ensures that the adversary will need to exfiltrate almost entire key to break the security of this critical system. So in the last few minutes, let me give you a very brief overview of our construction of incompressible encodings. And for this part, we're going to pretend we have something a real beautiful cryptographic object called Lossy Trapdoor Permutations. It turns out we don't quite have an object that's this beautiful and in the full construction, we relax this notion somewhat in order to be able to get our full construction. So Lossy Trapdoor Permutation is a function f we just key by some public key pk and it maps end bits to end bits. And we can sample the public key in one of two indistinguishable modes. In injective mode, this function of fPK is a permutation, and there's in fact, a trapdoor that allows us to invert it efficiently. And in the Lossy mode, if we sample the public in Lossy mode, then if we take some value, random value x and give you fpk of x, then this loses a lot of information about x. And in particular, the image size of the function is very small, much smaller than two to the n and so fpk of x does not contain all the information about x. Okay, so using this type of Lossy Trapdoor Permutation, here's the encoding of a message m using long random CRS come random string. So the encoding just consists of sampling the public key of this Lossy Trapdoor Permutation in injected mode, along with the trapdoor. And the encoding is just going to take the message m, x over it with a common reference string, come random string and invert the trapdoor permutation on this value. And then Coding will just be the public key and the inverse x. So this is something anybody can decode by just taking fpk of x, x over it with the CRS. And that will recover the original message. Now, to add the security, we're going to in the proof, we're going to switch to choosing the value x uniformly at random. So the x component of the codeword is going to be chosen uniformly random and we're going to set the CRS to be fpk of x, x over the message. And if you look at it for a second this distribution is exactly equivalent. It's just a different way of sampling the exact same distribution. And in particular, the relation between the CRS and X is preserved. Now in the second step, we're going to switch the public key to Lossy mode. And now when we do this, then the Codeword part, sorry then the CRS fpk of x, x over m only leaks some small amount of information about the random value x. In other words, even if that resists these, the CRS then the value x and the codeword has a lot of entropy. And because it has a lot of entropy it's incompressible. So what we did here is that we actually start to show that the code word and the CRS are indistinguishable from a different way of sampling them where we placed information about the message and the CRS and the codeword actually is truly random, has a lot of real entropy. And therefore even given the CRS the Codeword is incompressible that's the main idea behind the proof. I just want to make two remarks, our full constructions rely on a relaxed notion of Lossy Trapdoor Permutations which we're able to construct from either the decisional residuoisity or the learning with errors assumption. So in particular, we don't actually know how to construct trapdoor permutations from LWE from any postquantum assumption but the relaxed notion that we need for our actual construction, we can achieve from post quantum assumptions that get post quantum security. I want to mention two caveats of the construction. So one is that in order to make this work, the CRS needs to be long essentially as long as the message size. And also this construction achieves a weak form of selective security where the adversary decides to choose the message before seeing the CRS. And we show that both of these caveats are inherent. We show this by black-box separation and one can overcome them only in the random oracle model. Unless I want to just end with an interesting open question. I think one of the most interesting open questions in this area all of the constructions of incompressible encodings from our work and prior work required the use of some public key crypto assumptions some sort of trapdoor permutations or trapdoor functions. And one of the interesting open question is can you construct and incompressible encodings without relying on public key crypto, using one way functions or just the random oracle model. We conjecture this is not possible, but we don't know. So I want to end with that open questions and thank you very much for listening.

Published Date : Sep 21 2020

SUMMARY :

in order for the adversary to steal,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Daniel WichsPERSON

0.99+

second stepQUANTITY

0.99+

NTTORGANIZATION

0.99+

two caveatsQUANTITY

0.99+

17 bytesQUANTITY

0.99+

50 gigabytesQUANTITY

0.99+

two remarksQUANTITY

0.99+

bothQUANTITY

0.99+

two proceduresQUANTITY

0.99+

WikipediaORGANIZATION

0.99+

www.wikipedia.orgOTHER

0.99+

two goalsQUANTITY

0.99+

second parameterQUANTITY

0.99+

second propertyQUANTITY

0.99+

each operationQUANTITY

0.99+

two parametersQUANTITY

0.98+

oneQUANTITY

0.98+

OrlandiPERSON

0.98+

Tal MoranPERSON

0.97+

TodayDATE

0.97+

one common problemQUANTITY

0.97+

OneQUANTITY

0.97+

Garg-LuORGANIZATION

0.96+

Damgard-GaneshPERSON

0.96+

Northeastern UniversityORGANIZATION

0.96+

twoQUANTITY

0.95+

two indistinguishable modesQUANTITY

0.94+

Crypto 2020ORGANIZATION

0.94+

about 50 gigabytesQUANTITY

0.94+

each cryptographicQUANTITY

0.94+

CRSORGANIZATION

0.94+

WikipediaTITLE

0.93+

90% of the dataQUANTITY

0.89+

LWEORGANIZATION

0.89+

OracleORGANIZATION

0.84+

terabytesQUANTITY

0.83+

WatersORGANIZATION

0.79+

one wayQUANTITY

0.77+

secondsQUANTITY

0.74+

Lossy TrapdoorOTHER

0.71+

Proofs of Replicated StorageOTHER

0.64+

2019DATE

0.62+

secondQUANTITY

0.56+

muchQUANTITY

0.55+

lot ofQUANTITY

0.54+

caveatsQUANTITY

0.51+

gigabytesQUANTITY

0.48+

cryptoTITLE

0.33+

Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>all right, We're five minutes after the hour. That's all aboard. Who's coming aboard? Welcome everyone to the tutorial track for our launchpad of them. So for the next couple of hours, we've got a SYRIZA videos and experts on hand to answer questions about our new product, Doctor Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirant us on. And >>I'm Bruce Basil Matthews. I'm the Western regional Solutions architect for Moran Tissue esa and welcome to everyone to this lovely launchpad oven event. >>We're lucky to have you with us proof. At least somebody on the call knows something about your enterprise Computer club. Um, speaking of people that know about Dr Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and disgusting problem. So that's us, I guess, for Dr Enterprise Container Cloud, this is Mirant asses brand new product for bootstrapping Doctor Enterprise Kubernetes clusters at scale Anything. The airport Abu's? >>No, just that I think that we're trying Thio. Uh, let's see. Hold on. I think that we're trying Teoh give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing is to provide some, you know, many training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see. The SYRIZA videos we have today. We're going to focus on your first steps with Dr Enterprise Container Cloud from installing it to bootstrapping your regional child clusters so that by the end of the tutorial content today, you're gonna be prepared to spin up your first documentary prize clusters using documented prize container class. So just a little bit of logistics for the session. We're going to run through these tutorials twice. We're gonna do one run through starting seven minutes ago up until I guess it will be ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, ten. Fifteen Pacific time. We're gonna do the whole thing over again. So if you want to see the videos twice, you got public friends and colleagues that, you know you wanna pull in for a second chance to see this stuff, we're gonna do it all. All twice. Yeah, this session. Any any logistics I should add, Bruce that No, >>I think that's that's pretty much what we had to nail down here. But let's zoom dash into those, uh, feature films. >>Let's do Edmonds. And like I said, don't be shy. Feel free to ask questions in the chat or engineers and boosting myself are standing by to answer your questions. So let me just tee up the first video here and walk their cost. Yeah. Mhm. Yes. Sorry. And here we go. So our first video here is gonna be about installing the Doctor Enterprise Container Club Management cluster. So I like to think of the management cluster as like your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use is like, Come on it as clusters downstream. So the management costs was always our first step. Let's jump in there >>now. We have to give this brief little pause >>with no good day video. Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Yes. Okay, that's the key. Secret X key. Right on. Let's kick it off. Yeah, So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the west side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Okay. Local clusters boat just waiting for the various objects to get ready. Standard communities objects here Okay, so we speed up this process a little bit just for demonstration purposes. Yeah. There we go. So first note is being built the best in host. Just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for a W s to create the instance. Okay. Yes. Here, beauty there. Okay. Mhm. Okay. Yeah, yeah. Okay. On there. We got question. Host has been built on three instances for the management clusters have now been created. We're going through the process of preparing. Those nodes were now copying everything over. See that? The scaling up of controllers in the big Strap cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Yeah. Yeah, just waiting for key. Clark. Uh huh. Start to finish up. Yeah. No. What? Now we're shutting down control this on the local bootstrap node on preparing our I. D. C. Configuration. Fourth indication, soon as this is completed. Last phase will be to deploy stack light into the new cluster the last time Monitoring tool set way Go stack like to plan It has started. Mhm coming to the end of the deployment Mountain. Yeah, America. Final phase of the deployment. Onda, We are done. Okay, You'll see. At the end they're providing us the details of you. I log in so there's a keeper clogging. You can modify that initial default password is part of the configuration set up with one documentation way. Go Councils up way can log in. Yeah, yeah, thank you very much for watching. >>Excellent. So in that video are wonderful field CTO Shauna Vera bootstrapped up management costume for Dr Enterprise Container Cloud Bruce, where exactly does that leave us? So now we've got this management costume installed like what's next? >>So primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Uh, comes into play the next piece of what we're going to show, I think with Sean O'Mara doing this is the child cluster capability, which allows you to then deploy your application services on the local cluster. That's being managed by the ah ah management cluster that we just created with the bootstrap. >>Right? So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those or what we're gonna use for workings. >>Exactly. Yeah. And I just wanted to point out, since Sean O'Mara isn't around, toe, actually answer questions. I could listen to that guy. Read the phone book, and it would be interesting, but anyway, you can tell him I said that >>he's watching right now, Crusoe. Good. Um, cool. So and just to make sure I understood what Sean was describing their that bootstrap er knows that you, like, ran document fresh pretender Cloud from to begin with. That's actually creating a kind kubernetes deployment kubernetes and Docker deployment locally. That then hits the AWS a p i in this example that make those e c two instances, and it makes like a three manager kubernetes cluster there, and then it, like, copies itself over toe those communities managers. >>Yeah, and and that's sort of where the transition happens. You can actually see it. The output that when it says I'm pivoting, I'm pivoting from my local kind deployment of cluster AP, I toothy, uh, cluster, that's that's being created inside of AWS or, quite frankly, inside of open stack or inside of bare metal or inside of it. The targeting is, uh, abstracted. Yeah, but >>those air three environments that we're looking at right now, right? Us bare metal in open staff environments. So does that kind cluster on the bootstrap er go away afterwards. You don't need that afterwards. Yeah, that is just temporary. To get things bootstrapped, then you manage things from management cluster on aws in this example? >>Yeah. Yeah. The seed, uh, cloud that post the bootstrap is not required anymore. And there's no, uh, interplay between them after that. So that there's no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, be a temporary container that would bootstrap all the other containers. Go away. It's, uh, so sort of a similar, similar temporary transient bootstrapping model. Cool. Excellent. What will convict there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that. But other than that, that looked like heavily script herbal like there wasn't a ton of point and click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint, The config file that that's generated the template is fairly straightforward and targeted towards of a small medium or large, um, deployment. And by editing that single file and then gathering license file and all of the things that Sean went through, um, that that it makes it fairly easy to script >>this. And if I understood correctly as well that three manager footprint for your management cluster, that's the minimum, right. We always insist on high availability for this management cluster because boy do not wanna see oh, >>right, right. And you know, there's all kinds of persistent data that needs to be available, regardless of whether one of the notes goes down or not. So we're taking care of all of that for you behind the scenes without you having toe worry about it as a developer. >>No, I think there's that's a theme that I think will come back to throughout the rest of this tutorial session today is there's a lot of there's a lot of expertise baked him to Dr Enterprise Container Cloud in terms of implementing best practices for you like the defaulter, just the best practices of how you should be managing these clusters, Miss Seymour. Examples of that is the day goes on. Any interesting questions you want to call out from the chap who's >>well, there was. Yeah, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that then conduce oh, either the the regional cluster or a local child molester. The child clusters, in each case host the application services, >>right? So at this point, we've got, in some sense, like the simplest architectures for our documentary prize Container Cloud. We've got the management cluster, and we're gonna go straight with child cluster. In the next video, there's a more sophisticated architecture, which will also proper today that inserts another layer between those two regional clusters. If you need to manage regions like across a BS, reads across with these documents anything, >>yeah, that that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observation. I'll support systems a stack light and things like that for each one of clusters locally, as opposed to having to centralize thumb >>eso. It's a couple of good questions. In the chat here, someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docks, which I think Dale helpfully thank you. Dale provided links for that's all publicly available right now. So just head on in, head on into the docks like the Dale provided here. You can follow this example yourself. All you need is a Mirante license for this and your AWS credentials. There was a question from many a hear about deploying this toe azure. Not at G. Not at this time. >>Yeah, although that is coming. That's going to be in a very near term release. >>I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. Very bracing. Cool. Okay. Any other thoughts on this one does. >>No, just that the fact that we're running through these individual pieces of the steps Well, I'm sure help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step by staff. Um, it makes it fairly straightforward to try this yourselves. >>E strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of, like, where we are in the life cycle here creating this management cluster. Just stop me if I'm wrong. Who's creating this management cluster is like, you do that once, right? That's when your first setting up your doctor enterprise container cloud environment of system. What we're going to start seeing next is creating child clusters and this is what you're gonna be doing over and over and over again. When you need to create a cluster for this Deb team or, you know, this other team river it is that needs commodity. Doctor Enterprise clusters create these easy on half will. So this was once to set up Dr Enterprise Container Cloud Child clusters, which we're going to see next. We're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a doctor enterprise cluster for work clothes as a child cluster. Undocumented brands contain >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster and how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the U I so you can switch. Project Mary only has access to development. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Nan Yes, this H Keys Associate ID for Mary into her team on the cloud credentials that allow you to create access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences, Right? Let's now set up semester search keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name, we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, you go to the clusters tab. We hit the create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one release version five point seven is the current release Onda Attach. Mary's Key is necessary Key. We can then check the rest of the settings, confirming the provider Any kubernetes c r D r I p address information. We can change this. Should we wish to? We'll leave it default for now on. Then what components? A stack light I would like to deploy into my Custer. For this. I'm enabling stack light on logging on Aiken. Sit up the retention sizes Attention times on. Even at this stage, at any customer alerts for the watchdogs. E consider email alerting which I will need my smart host details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I commend side on the route device size. There we go, my three machines obviously creating. I now need to add some workers to this custom. So I go through the same process this time once again, just selecting worker. I'll just add to once again, the AM is extremely important. Will fail if we don't pick the right, Am I for a boon to machine in this case and the deployment has started. We can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here we've created the VPC. We've created the sub nets on We've created the Internet gateway. It's unnecessary made of us and we have no warnings of the stage. Yeah, this will then run for a while. We have one minute past waken click through. We can check the status of the machine bulls as individuals so we can check the machine info, details of the machines that we've assigned, right? Mhm Onda. See any events pertaining to the machine areas like this one on normal? Yeah. Just watch asked. The community's components are waiting for the machines to start. Go back to Custer's. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway on the stage. The machines have been built on assigned. I pick up the U. S. Thank you. Yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. Mhm. No speeding things up a little bit. This whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured. Mhm, mhm. And then we go. Cluster has been deployed. So once the classes deployed, we can now never get around our environment. Okay, Are cooking into configure cluster We could modify their cluster. We could get the end points for alert alert manager on See here The griffon occupying and Prometheus are still building in the background but the cluster is available on you would be able to put workloads on it the stretch to download the cube conflict so that I can put workloads on it. It's again three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster Mhm all right Now that the build is fully completed, we can check out cluster info on. We can see that Allow the satellite components have been built. All the storage is there, and we have access to the CPU. I So if we click into the cluster, we can access the UCP dashboard, right? Shit. Click the signing with Detroit button to use the SSO on. We give Mary's possible to use the name once again. Thing is, an unlicensed cluster way could license at this point. Or just skip it on. There. We have the UCP dashboard. You can see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon, a data just being automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Mhm. So we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah. General dashboard of Cuba navies cluster one of this is configurable. You can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster, all right to scale the cluster on to add a notice. A simple is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger disks and you'll see that worker has been added from the provisioning state on shortly. We will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workouts are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button, validating which release you would like to update to. In this case, the next available releases five point seven point one. Here I'm kicking the update by in the background We will coordinate. Drain each node slowly go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Girl, we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already on In a few minutes we'll see that there are great has been completed. There we go. Great. Done. Yeah. If you work loads of both using proper cloud native community standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our communities of workloads. He started playing or APs to that costume. So watching that video, the thing that jumped out to me at first Waas like the inputs that go into defining this workload cost of it. All right, so we have to make sure we were using on appropriate am I for that kind of defines the substrate about what we're gonna be deploying our cluster on top of. But there's very little requirements. A so far as I could tell on top of that, am I? Because Docker enterprise Container Cloud is gonna bootstrap all the components that you need. That s all we have is kind of kind of really simple bunch box that we were deploying these things on top of so one thing that didn't get dug into too much in the video. But it's just sort of implied. Bruce, maybe you can comment on this is that release that Shawn had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch. Wanted to upgrade part cluster. So you have really sharp eyes. You could see at the end there that when you're doing the release upgrade enlisted out a stack of components docker, engine, kubernetes, calico, aled, different bits and pieces that go into, uh, go into one of these commodity clusters that deploy. And so, as far as I can tell in that case, that's what we mean by a release. In this sense, right? It's the validated stack off container ization and orchestration components that you know we've tested out and make sure it works well, introduction environments. >>Yeah, and and And that's really the focus of our effort is to ensure that any CVS in any of the stack are taken care of that there is a fixes air documented and up streamed to the open stack community source community, um, and and that, you know, then we test for the scaling ability and the reliability in high availability configuration for the clusters themselves. The hosts of your containers. Right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, high. We've got an update for you, and it's fixes something that maybe you had asked us to fix. Uh, that all comes to you online as your managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on Yes. Please give me that update. Uh, not just the individual components, but again. It's that it's that validated stack, right? Not just, you know, component X, y and Z work. But they all work together effectively Scalable security, reliably cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old universal control plane. Doctor Enterprise. On top of that, Sean had the classic comment there, you know? Yeah. Yeah. You'll see a little warnings and errors or whatever. When you're setting up, UCP don't handle, right, Just let it do its job, and it will converge all its components, you know, after just just a minute or two. But we saw in that video, we sped things up a little bit there just we didn't wait for, you know, progress fighters to complete. But really, in real life, that whole process is that anything so spend up one of those one of those fosters so quite quite quick. >>Yeah, and and I think the the thoroughness with which it goes through its process and re tries and re tries, uh, as you know, and it was evident when we went through the initial ah video of the bootstrapping as well that the processes themselves are self healing, as they are going through. So they will try and retry and wait for the event to complete properly on. And once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things that don't don't do that. Just don't let it let it heal. Let take care of itself. And that's the beauty of these manage solutions is that they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers is they're bootstrapping themselves, reflect the expertise of the Mirant ISS crew that has been developing this content in these two is free for years and years now, over recognizing humanities. One cool thing there that I really appreciate it actually that it adds on top of Dr Enterprise is that automatic griffon a deployment as well. So, Dr Enterprises, I think everyone knows has had, like, some very high level of statistics baked into its dashboard for years and years now. But you know our customers always wanted a double click on that right to be able to go a little bit deeper. And Griffon are really addresses that it's built in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that you have access to so that you are allowed to add new alerts that then go out to touch slack and say hi, You need to watch your disk space on this machine or those kinds of things. Um, and and this is especially helpful for folks who you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set that they can easily say here, Can you watch these for us? And Miran tas can actually help do that with you, So >>yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without tons and tons of a long ah, long runway of learning about how to do that sort of thing. Just get out of the box right away. There was the other thing, actually, that you could sleep by really quickly if you weren't paying close attention. But Sean mentioned it on the video. And that was how When you use dark enterprise container cloud to scale your cluster, particularly pulling a worker out, it doesn't just like Territo worker down and forget about it. Right? Is using good communities best practices to cordon and drain the No. So you aren't gonna disrupt your workloads? You're going to just have a bunch of containers instantly. Excellent crash. You could really carefully manage the migration of workloads off that cluster has baked right in tow. How? How? Document? The brass container cloud is his handling cluster scale. >>Right? And And the kubernetes, uh, scaling methodology is is he adhered to with all of the proper techniques that ensure that it will tell you. Wait, you've got a container that actually needs three, uh, three, uh, instances of itself. And you don't want to take that out, because that node, it means you'll only be able to have to. And we can't do that. We can't allow that. >>Okay, Very cool. Further thoughts on this video. So should we go to the questions. >>Let's let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom regarding whether an a p I is available to do this. So in all these demos were clicking through this web. You I Yes, this is all a p. I driven. You could do all of this. You know, automate all this away is part of the CSC change. Absolutely. Um, that's kind of the point, right? We want you to be ableto spin up. Come on. I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. We could get >>it locally. Yeah, when, like to that point of aggregating things under a single pane of glass? That's one thing that again kind of sailed by in the demo really quickly. But you'll notice all your different clusters were on that same cluster. Your pain on your doctor Enterprise Container Cloud management. Uh, court. Right. So both your child clusters for running workload and your regional clusters for bootstrapping. Those child clusters were all listed in the same place there. So it's just one pane of glass to go look for, for all of your clusters, >>right? And, uh, this is kind of an important point. I was, I was realizing, as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything so that you only have managers, you don't have workers and that at the child cluster layer below the regional or the management cluster itself, that's where you have the worker nodes. And those are the ones that host the application services in that three tiered architecture that we've now defined >>and another, you know, detail for those that have sharp eyes. In that video, you'll notice when deploying a child clusters. There's not on Lee. A minimum of three managers for high availability management cluster. You must have at least two workers that's just required for workload failure. It's one of those down get out of work. They could potentially step in there, so your minimum foot point one of these child clusters is fine. Violence and scalable, obviously, from a >>That's right. >>Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want to my last video. There's another question here about, like where these clusters can live. So again, I know these examples are very aws heavy. Honestly, it's just easy to set up down on the other us. We could do things on bare metal and, uh, open stack departments on Prem. That's what all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the the, uh, child clusters, is the provision hers? Right? See you establish on AWS provision or you establish a bare metal provision or you establish a open stack provision. Or and eventually that list will include all of the other major players in the cloud arena. But you, by selecting the provision or within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking off all through a child clusters. Let's jump into our last video in the Siri's, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. So why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent. Provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and supports high performance workloads like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another opera visor. Lay it between so continue on the theme Why Communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>All right, so there we have it, deploying a cluster to bare metal. Much the same is how we did for AWS. I guess maybe the biggest different stepwise there is there is that registration face first, right? So rather than just using AWS financials toe magically create PM's in the cloud. You got a point out all your bare metal servers to Dr Enterprise between the cloud and they really come in, I guess three profiles, right? You got your manager profile with a profile storage profile which has been labeled as allocate. Um, crossword cluster has appropriate, >>right? And And I think that the you know, the key differentiator here is that you have more physical control over what, uh, attributes that love your cat, by the way, uh, where you have the different attributes of a server of physical server. So you can, uh, ensure that the SSD configuration on the storage nodes is gonna be taken advantage of in the best way the GP use on the worker nodes and and that the management layer is going to have sufficient horsepower to, um, spin up to to scale up the the environments, as required. One of the things I wanted to mention, though, um, if I could get this out without the choking much better. Um, is that Ah, hey, mentioned the load balancer and I wanted to make sure in defining the load balancer and the load balancer ranges. Um, that is for the top of the the cluster itself. That's the operations of the management, uh, layer integrating with your systems internally to be able to access the the Cube Can figs. I I p address the, uh, in a centralized way. It's not the load balancer that's working within the kubernetes cluster that you are deploying. That's still cube proxy or service mesh, or however you're intending to do it. So, um, it's kind of an interesting step that your initial step in building this, um and we typically use things like metal L B or in gen X or that kind of thing is to establish that before we deploy this bear mental cluster so that it can ride on top of that for the tips and things. >>Very cool. So any other thoughts on what we've seen so far today? Bruce, we've gone through all the different layers. Doctor enterprise container clouds in these videos from our management are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. Closing thoughts before we take just a very short break and run through these demos again. >>You know, I've been very exciting. Ah, doing the presentation with you. I'm really looking forward to doing it the second time, so that we because we've got a good rhythm going about this kind of thing. So I'm looking forward to doing that. But I think that the key elements of what we're trying to convey to the folks out there in the audience that I hope you've gotten out of it is that will that this is an easy enough process that if you follow the step by steps going through the documentation that's been put out in the chat, um, that you'll be able to give this a go yourself, Um, and you don't have to limit yourself toe having physical hardware on prim to try it. You could do it in a ws as we've shown you today. And if you've got some fancy use cases like, uh, you you need a Hadoop And and, uh, you know, cloud oriented ai stuff that providing a bare metal service helps you to get there very fast. So right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. >>One thing that people often ask about is the cluster footprint. And so that example you saw they were spitting up a three manager, um, managing cluster as mandatory, right? No single manager set up at all. We want high availability for doctrine Enterprise Container Cloud management. Like so again, just to make sure everyone sort of on board with the life cycle stage that we're at right now. That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Man. Yes, this H keys, Associate ID for Mary into her team on the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some ssh keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name. We copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, we got the clusters tab we had to create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one released version five point seven is the current release Onda Attach. Mary's Key is necessary key. We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. Just leave it alone and let Dr Enterprises self healing properties take care of all those very transient temporary glitches, resolve themselves and leave you with a functioning workload cluster within victims. >>And now, if you think about it that that video was not very long at all. And that's how long it would take you if someone came into you and said, Hey, can you spend up a kubernetes cluster for development development A. Over here, um, it literally would take you a few minutes to thio Accomplish that. And that was with a W s. Obviously, which is sort of, ah, transient resource in the cloud. But you could do exactly the same thing with resource is on Prem or resource is, um physical resource is and will be going through that later in the process. >>Yeah, absolutely one thing that is present in that demo, but that I like to highlight a little bit more because it just kind of glides by Is this notion of, ah, cluster release? So when Sean was creating that cluster, and also when when he was upgrading that cluster, he had to choose a release. What does that didn't really explain? What does that mean? Well, in Dr Enterprise Container Cloud, we have released numbers that capture the entire staff of container ization tools that will be deploying to that workload costume. So that's your version of kubernetes sed cor DNs calico. Doctor Engineer. All the different bits and pieces that not only work independently but are validated toe work together as a staff appropriate for production, humanities, adopted enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale. Test it for CVS, test it for all of the various things that would, you know, result in issues with you running the application services. And I've got to tell you from having, you know, managed kubernetes deployments and things like that that if you're the one doing it yourself, it can get rather messy. Eso This makes it easy. >>Bruce, you were staying a second ago. They I'll take you at least fifteen minutes to install your release. Custer. Well, sure, but what would all the other bits and pieces you need toe? Not just It's not just about pressing the button to install it, right? It's making the right decision. About what components work? Well, our best tested toe be successful working together has a staff? Absolutely. We this release mechanism and Dr Enterprise Container Cloud. Let's just kind of package up that expert knowledge and make it available in a really straightforward, fashionable species. Uh, pre Confederate release numbers and Bruce is you're pointing out earlier. He's got delivered to us is updates kind of transparent period. When when? When Sean wanted toe update that cluster, he created little update. Custer Button appeared when an update was available. All you gotta do is click. It tells you what Here's your new stack of communities components. It goes ahead. And the straps those components for you? >>Yeah, it actually even displays at the top of the screen. Ah, little header That says you've got an update available. Do you want me to apply? It s o >>Absolutely. Another couple of cool things. I think that are easy to miss in that demo was I really like the on board Bafana that comes along with this stack. So we've been Prometheus Metrics and Dr Enterprise for years and years now. They're very high level. Maybe in in previous versions of Dr Enterprise having those detailed dashboards that Ravana provides, I think that's a great value out there. People always wanted to be ableto zoom in a little bit on that, uh, on those cluster metrics, you're gonna provides them out of the box for us. Yeah, >>that was Ah, really, uh, you know, the joining of the Miranda's and Dr teams together actually spawned us to be able to take the best of what Morantes had in the open stack environment for monitoring and logging and alerting and to do that integration in in a very short period of time so that now we've got it straight across the board for both the kubernetes world and the open stack world. Using the same tool sets >>warm. One other thing I wanna point out about that demo that I think there was some questions about our last go around was that demo was all about creating a managed workplace cluster. So the doctor enterprise Container Cloud managers were using those aws credentials provisioned it toe actually create new e c two instances installed Docker engine stalled. Doctor Enterprise. Remember all that stuff on top of those fresh new VM created and managed by Dr Enterprise contain the cloud. Nothing unique about that. AWS deployments do that on open staff doing on Parramatta stuff as well. Um, there's another flavor here, though in a way to do this for all of our long time doctor Enterprise customers that have been running Doctor Enterprise for years and years. Now, if you got existing UCP points existing doctor enterprise deployments, you plug those in to Dr Enterprise Container Cloud, uh, and use darker enterprise between the cloud to manage those pre existing Oh, working clusters. You don't always have to be strapping straight from Dr Enterprises. Plug in external clusters is bad. >>Yep, the the Cube config elements of the UCP environment. The bundling capability actually gives us a very straightforward methodology. And there's instructions on our website for exactly how thio, uh, bring in import and you see p cluster. Um so it it makes very convenient for our existing customers to take advantage of this new release. >>Absolutely cool. More thoughts on this wonders if we jump onto the next video. >>I think we should move press on >>time marches on here. So let's Let's carry on. So just to recap where we are right now, first video, we create a management cluster. That's what we're gonna use to create All our downstream were closed clusters, which is what we did in this video. Let's maybe the simplest architectures, because that's doing everything in one region on AWS pretty common use case because we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and work cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, our regional management cluster that exists per region that we're going to manage those regional managers will be than the ones responsible for spending part clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectural overview, how to set up the management environment, prepare for the deployment deployment overview, and then just to prove it, to play a regional child cluster. So looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? Yeah, they were having workers to the same. It's a process. Just making sure that the worker label host like you are so yes, on Duin wait for the machines to deploy. Let's go through the process of putting the operating system on the notes, validating that operating system. Deploying Docker enterprise on making sure that the cluster is up and running ready to go. Okay, let's review the bold events. We can see the machine info now populated with more information about the specifics of things like storage. Yeah, of course. Details of a cluster, etcetera. Yeah, Yeah. Okay. Well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build, and that brings us to the end of this particular do my as you can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>Here we have a child cluster on bare metal for folks that wanted to play the stuff on Prem. >>It's ah been an interesting journey taken from the mothership as we started out building ah management cluster and then populating it with a child cluster and then finally creating a regional cluster to spread the geographically the management of our clusters and finally to provide a platform for supporting, you know, ai needs and and big Data needs, uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal thio in containers were pretty exciting. >>Yeah, absolutely. So with this Doctor Enterprise container cloud platform. Hopefully this commoditized scooping clusters, doctor enterprise clusters that could be spun up and use quickly taking provisioning times. You know, from however many months to get new clusters spun up for our teams. Two minutes, right. We saw those clusters gets better. Just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Dr Enterprise Container Cloud. Of course, there's many many more things to discuss about this and all of Miranda's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, police see us a training don Miranda's dot com, where we can offer you workshops and a number of different formats on our entire line of products and hands on interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad of that >>thank you all enjoy.

Published Date : Sep 17 2020

SUMMARY :

So for the next couple of hours, I'm the Western regional Solutions architect for Moran At least somebody on the call knows something about your enterprise Computer club. And that's really the key to this thing is to provide some, you know, many training clusters so that by the end of the tutorial content today, I think that's that's pretty much what we had to nail down here. So the management costs was always We have to give this brief little pause of the management cluster in the first regional clusters to support AWS deployments. So in that video are wonderful field CTO Shauna Vera bootstrapped So primarily the foundation for being able to deploy So this cluster isn't yet for workloads. Read the phone book, So and just to make sure I understood The output that when it says I'm pivoting, I'm pivoting from on the bootstrap er go away afterwards. So that there's no dependencies on any of the clouds that get created thereafter. Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, The config file that that's generated the template is fairly straightforward We always insist on high availability for this management cluster the scenes without you having toe worry about it as a developer. Examples of that is the day goes on. either the the regional cluster or a We've got the management cluster, and we're gonna go straight with child cluster. as opposed to having to centralize thumb So just head on in, head on into the docks like the Dale provided here. That's going to be in a very near term I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. No, just that the fact that we're running through these individual So let's go to that video and see just how We can check the status of the machine bulls as individuals so we can check the machine the thing that jumped out to me at first Waas like the inputs that go into defining Yeah, and and And that's really the focus of our effort is to ensure that So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old of the bootstrapping as well that the processes themselves are self healing, And the worst thing you could do is panic at the first warning and start tearing things that don't that then go out to touch slack and say hi, You need to watch your disk But Sean mentioned it on the video. And And the kubernetes, uh, scaling methodology is is he adhered So should we go to the questions. Um, that's kind of the point, right? you know, set up things and deploy your applications and things. that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. to the standards that we would want to set to make sure that we're not overloading On the next video, we're gonna learn how to spin up a Yeah, Do the same to add workers. We got that management cluster that we do strapped in the first video. Yeah, that's the key to this is to be able to have co resident with So we don't have to go back to the mother ship. So it's just one pane of glass to the bootstrapped cluster of the regional services. and another, you know, detail for those that have sharp eyes. Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want all of the other major players in the cloud arena. Let's jump into our last video in the Siri's, So the first thing we had to be in post, Yeah, many of the machine A name. Much the same is how we did for AWS. nodes and and that the management layer is going to have sufficient horsepower to, are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. that's been put out in the chat, um, that you'll be able to give this a go yourself, Uh, take the opportunity to let your colleagues know if they were in another session I e just interest will feel for you. Use A I'm the one with the gray hair and the glasses. And for the providers in the very near future. I can hardly wait. Let's do it all right to share my video So the first thing is, we need those route credentials which we're going to export on the command That is the tool and you're gonna use to start spinning up downstream It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, and all of the necessary parameters that you would fill in have That's the very first thing you're going to Yeah, for the most part. Let's now that we have our management cluster set up, let's create a first We can check the status of the machine balls as individuals so we can check the glitches, resolve themselves and leave you with a functioning workload cluster within exactly the same thing with resource is on Prem or resource is, All the different bits and pieces And I've got to tell you from having, you know, managed kubernetes And the straps those components for you? Yeah, it actually even displays at the top of the screen. I really like the on board Bafana that comes along with this stack. the best of what Morantes had in the open stack environment for monitoring and logging So the doctor enterprise Container Cloud managers were Yep, the the Cube config elements of the UCP environment. More thoughts on this wonders if we jump onto the next video. Let's maybe the simplest architectures, of the regional cluster and how it connects to the management cluster on their components, There we have it. that we provide that same operational support system capability Just because it's kind of easy to make that was on AWS. Just making sure that the worker label host like you are so yes, It's ah been an interesting journey taken from the mothership Enjoy the rest of the launchpad

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MaryPERSON

0.99+

SeanPERSON

0.99+

Sean O'MaraPERSON

0.99+

BrucePERSON

0.99+

FrankfurtLOCATION

0.99+

three machinesQUANTITY

0.99+

Bill MilksPERSON

0.99+

AWSORGANIZATION

0.99+

first videoQUANTITY

0.99+

second phaseQUANTITY

0.99+

ShawnPERSON

0.99+

first phaseQUANTITY

0.99+

ThreeQUANTITY

0.99+

Two minutesQUANTITY

0.99+

three managersQUANTITY

0.99+

fifth phaseQUANTITY

0.99+

ClarkPERSON

0.99+

Bill MillsPERSON

0.99+

DalePERSON

0.99+

Five minutesQUANTITY

0.99+

NanPERSON

0.99+

second sessionQUANTITY

0.99+

Third phaseQUANTITY

0.99+

SeymourPERSON

0.99+

Bruce Basil MatthewsPERSON

0.99+

Moran TousPERSON

0.99+

five minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

Why Multi-Cloud?


 

>>Hello, everyone. My name is Rick Pew. I'm a senior product manager at Mirant. This and I have been working on the Doctor Enterprise Container Cloud for the last eight months. Today we're gonna be talking about multi cloud kubernetes. So the first thing to kind of look at is, you know, is multi cloud rial. You know, the terms thrown around a lot and by the way, I should mention that in this presentation, we use the term multi cloud to mean both multi cloud, which you know in the technical sense, really means multiple public clouds and hybrid cloud means public clouds. And on Prem, uh, we use in this presentation will use the term multi cloud to refer to all different types of multiple clouds, whether it's all public cloud or a mixture of on Prem and Public Cloud or, for that matter, multiple on Prem clouds as doctor and price container. Cloud supports all of those scenarios. So it really well, let's look at some research that came out of flex era in their 2020 State of the cloud report. You'll notice that ah, 33% state that they've got multiple public and one private cloud. 53% say they've got multiple public and multiple private cloud. So if you have those two up, you get 86% of the people say that they're in multiple public clowns and at least one private cloud. So I think at this stage we could say that multi cloud is a reality. According to 4 51 research, you know, a number of CEO stated that the strong driver their desire was to optimize cost savings across their private and public clouds. Um, they also wanted to avoid vendor lock in by operating in multiple clouds and try to dissuade their teams from taking too much advantage of a given providers proprietary infrastructure. But they also indicated that there the complexity of using multiple clouds hindered the rate of adoption of doing it doesn't mean they're not doing it. It just means that they don't go assed fast as they would like to go in many cases because of the complexity. And here it Miranda's. We surveyed our customers as well, and they're telling us similar things, you know. Risk management, through the diversification of providers, is key on their list cost optimization and the democratization of allowing their development teams, uh, to create kubernetes clusters without having to file a nightie ticket. But to give them a self service, uh, cloud like environment, even if it's on prem or multi cloud to give them the ability to create their own clusters, resize their own clusters and delete their own clusters without needing to have I t. Or of their operations teams involved at all. But there are some challenges with this, with the different clouds you know require different automation. Thio provisioned the underlying infrastructure or deploy and operating system or deployed kubernetes, for that matter, in a given cloud. You could say that they're not that complicated. They all have, you know, very powerful consoles and a P I s to do that. But did you get across three or four or five different clouds? Then you have to learn three or four or five different AP ice and Web consoles in order to make that happen on in. That scenario is difficult to provide self service for developers across all the cloud options, which is what you want to really accelerate your application innovation. So what's in it for me? You know We've got a number of roles and their prizes developers, operators and business leaders, and they have somewhat different needs. So when the developer side the need is flexibility to meet their development schedules, Number one you know they're under constant pressure to produce, and in order to do that, they need flexibility and in this case, the flexibility to create kubernetes clusters and use them across multiple clouds. Now they also have C I C D tools, and they want them to be able to be normalized on automated across all of the the on prim and public clouds that they're using. You know, in many cases they'll have a test and deployment scenario where they'll want to create a cluster, deploy their software, run their test, score the tests and then delete that cluster because the only point of that cluster, perhaps, was to test ah pipeline of delivery. So they need that kind of flexibility. From the operator's perspective, you know, they always want to be able to customize the control of their infrastructure and deployment. Uh, they certainly have the desire to optimize their optics and Capex fans. They also want to support their develops teams who many times their their customers through a p I access for on Prem and public clouds burst. Scaling is something operators are interested in, and something public clouds can provide eso the ability to scale out into public clouds, perhaps from there on prem infrastructure in a seamless manner. And many times they need to support geographic distribution of applications either for compliance or performance reasons. So having you know, data centers all across the world and be able to specifically target a given region, uh, is high on their list. Business leaders want flexibility and confidence to know that you know, they're on prim and public cloud uh, deployments. Air fully supported. They want to be able, like the operator, optimize their cloud, spends business leaders, think about disaster recovery. So having the applications running and living in different data centers gives them the opportunity to have disaster recovery. And they really want the flexibility of keeping private data under their control. On on Prem In certain applications may access that on Prem. Other applications may be able to fully run in the cloud. So what should I look for in a container cloud? So you really want something that fully automates these cluster deployments for virtual machine or bare metal. The operating system, uh, and kubernetes eso It's not just deploying kubernetes. It's, you know, how do I create my underlying infrastructure of a VM or bare metal? How do I deploy the operating system? And then, on top of all that, I want to be able to deploy kubernetes. Uh, you also want one that gives a unified cluster lifecycle management across all the clouds. So these clusters air running software gets updated. Cooper Netease has a new release cycle. Uh, they come out with something new. It's available, you know, How do you get that across all of your clusters? That air running in multiple clouds. We also need a container cloud that can provide you the visibility through logging, monitoring and alerting again across all the clouds. You know, many offerings have these for a particular cloud, but getting that across multiple clouds, uh, becomes a little more difficult. The Doctor Enterprise Container cloud, you know, is a very strong solution and really meets many of these, uh, dimensions along the left or kind of the dimensions we went through in the last slide we've got on Prem and public clouds as of RG A Today we're supporting open stack and bare metal for the on Prem Solutions and AWS in the public cloud. We'll be adding VM ware very soon for another on Prem uh, solution as well as azure and G C P. So thank you very much. Uh, look forward, Thio answering any questions you might have and we'll call that a rap. Thank you. >>Hi, Rick. Thanks very much for that. For that talk, I I am John James. You've probably seen me in other sessions. I do marketing here in Miran Tous on. I wanted to to take this opportunity while we had Rick to ask some more questions about about multi cloud. It's ah, potentially a pretty big topic, isn't it, Rick? >>Yeah. I mean, you know, the devil's in the details and there's, uh, lots of details that we could go through if you'd like, be happy to answer any questions that you have. >>Well, we've been talking about hybrid cloud for literally years. Um, this is something that I think you know, several generations of folks in the in the I. A s space doing on premise. I s, for example, with open stack the way Miran Tous Uh does, um, found, um, you know, thought that that it had a lot of potential. A lot of enterprises believed that, but there were There were things stopping people from from making it. Really, In many cases, um, it required a very, ah, very high degree of willingness to create homogeneous platforms in the cloud and on the premise. Um, and that was often very challenging. Um, but it seems like with things like kubernetes and with the isolation provided by containers, that this is beginning to shift, that that people are actually looking for some degree of application portability between their own Prem and there and their cloud environments. And that this is opening up, Uh, you know, investment on interest in pursuing this stuff. Is that the right perception? >>Yeah. So let's let's break that down a little bit. So what's nice about kubernetes is through the a. P. I s are the same. Regardless of whether it's something that Google or or a W s is offering as a platform as a service or whether you've taken the upstream open source project and deploy it yourself on parameter in a public cloud or whatever the scenario might be or could be a competitor of Frances's product, the Kubernetes A. P I is the same, which is the thing that really gives you that application portability. So you know, the container itself is contained arising, obviously your application and minimizing any kind of dependency issues that you might have And then the ability to deploy that to any of the coup bernetti clusters you know, is the same regardless of where it's running, the complexity comes and how doe I actually spend up a cluster in AWS and open stack and D M Where and gp An azure. How do I build that infrastructure and and spin that up and then, you know, used the ubiquitous kubernetes a p I toe actually deploy my application and get it to run. So you know what we've done is we've we've unified and created A I use the word normalized. But a lot of times people think that normalization means that you're kind of going to a lowest common denominator, which really isn't the case and how we've attacked the the enabling of multi cloud. Uh, you know, what we've done is that we've looked at each one of the providers and are basically providing an AP that allows you to utilize. You know, whatever the best of you know, that particular breed of provider has and not, uh, you know, going to at least common denominator. But, you know, still giving you a ah single ap by which you can, you know, create the infrastructure and the infrastructure could be on Prem is a bare metal infrastructure. It could be on preeminent open stack or VM ware infrastructure. Any of the public clouds, you know, used to have a a napi I that works for all of them. And we've implemented that a p i as an extension to kubernetes itself. So all of the developers, Dev ops and operators that air already familiar operating within the, uh, within the aapi of kubernetes. It's very, very natural. Extension toe actually be able to spend up these clusters and deploy them >>Now that's interesting. Without giving away, obviously what? Maybe special sauce. Um, are you actually using operators to do this in the Cooper 90? Sense of the word? >>Yes. Yeah, we've extended it with with C R D s, uh, and and operators and controllers, you know in the way that it was meant to be extended. So Kubernetes has a recipe on how you extend their A P I on that. That's what we used as our model. >>That, at least to me, makes enormous sense. Nick Chase, My colleague and I were digging into operators a couple of weeks ago, and that's a very elegant technology. Obviously, it's a it's evolving very fast, but it's remarkably unintimidating once you start trying to write them. We were able toe to compose operators around Cron and other simple processes and just, >>you know, >>a couple of minutes on day worked, which I found pretty astonishing. >>Yeah, I mean, you know, Kubernetes does a lot of things and they spent a lot of effort, um, in being able, you know, knowing that their a p I was gonna be ubiquitous and knowing that people wanted to extend it, uh, they spent a lot of effort in the early development days of being able to define that a p I to find what an operator was, what a controller was, how they interact. How a third party who doesn't know anything about the internals of kubernetes could add whatever it is that they wanted, you know, and follow the model that makes it work. Exactly. Aziz, the native kubernetes ap CSTO >>What's also fascinating to me? And, you know, I've I've had a little perspective on this over the past, uh, several weeks or a month or so working with various stakeholders inside the company around sessions related to this event that the understanding of how things work is by no means evenly distributed, even in a company as sort of tightly knit as Moran Tous. Um, some people who shall remain nameless have represented to me that Dr Underprice Container Cloud basically works. Uh, if you handed some of the EMS, it will make things for you, you know, and this is clearly not what's going on that that what's going on is a lot more nuanced that you are using, um, optimal resource is from each provider to provide, uh, you know, really coherent architected solutions. Um, the load balancing the d. N s. The storage that this that that right? Um all of which would ultimately be. And, you know, you've probably tried this. I certainly have hard to script by yourself in answerable or cloud formation or whatever. Um, this is, you know, this is not easy work. I I wrote a about the middle of last year for my prior employer. I wrote a dip lawyer in no Js against the raw aws a piece for deployment and configuration of virtual networks and servers. Um, and that was not a trivial project. Um, it took a long time to get thio. Uh, you know, a dependable result. And to do it in parallel and do other things that you need to do in order to maintain speed. One of the things, in fact, that I've noticed in working with Dr Enterprise Container Cloud recently, is how much parallelism it's capable of within single platforms. It's It's pretty powerful. I mean, if you want to clusters to be deployed simultaneously, that's not hard for Doc. Aerated price container cloud to dio on. I found it pretty remarkable because I have sat in front of a single laptop trying to churn out of cluster under answerable, for example, and just on >>you get into that serial nature, your >>poor little devil, every you know, it's it's going out and it's ssh, Indian Terminals and it's pretending it's a person and it's doing all that stuff. This is much more magical. Um, so So that's all built into the system to, isn't it? >>Yeah. Interesting, Really Interesting point on that. Is that you know, the complexity isn't not necessarily and just creating a virtual machine because all of these companies have, you know, spend a lot of effort to try to make that as easy as possible. But when you get into networking, load balancing, routing, storage and hooking those up, you know, two containers automating that if you were to do that in terror form or answerable or something like that is many, many, many lines of code, you know, people have to experiment. Could you never get it right the first or second or the third time? Uh, you know, and then you have to maintain that. So one of the things that we've heard from customers that have looked a container cloud was that they just can't wait to throw away their answerable or their terror form that they've been maintaining for a couple of years. The kind of enables them to do this. It's very brittle. If if the clouds change something, you know on the network side, let's say that's really buried. And it's not something that's kind of top of mind. Uh, you know, your your thing fails or maybe worse, you think that it works. And it's not until you actually go to use it that you notice that you can't get any of your containers. So you know, it's really great the way that we've simplified that for the users and again democratizing it. So the developers and Dev ops people can create these clusters, you know, with ease and not worry about all the complexities of networking and storage. >>Another thing that amazed me as I was digging into my first, uh, Dr Price container Cloud Management cluster deployment was how, uh, I want I don't want to use the word nuanced again, but I can't think of a better word. Nuanced. The the security thinking is in how things air set up. How, um, really delicate the thinking about about how much credential power you give to the deploy. Er the to the seed server that deploys your management cluster as opposed thio Um uh or rather the how much how much administrative access you give to the to the administrator who owns the entire implementation around a given provider versus how much power the seed server gets because that gets its own user right? It gets a bootstrap user specifically created so that it's not your administrator, you know, more limited visibility and permissions. And this whole hierarchy of permissions is then extended down into the child clusters that this management cluster will ultimately create. So that Dev's who request clusters will get appropriate permissions granted within. Ah, you know, a corporate schema of permissions. But they don't get the keys to the kingdom. They don't have access to anything they don't you know they're not supposed to have access to, but within their own scope, they're safe. They could do anything they want, so it's like a It's a It's a really neat kind of elegant way of protecting organizations against, for example, resource over use. Um, you know, give people the power to deploy clusters, and basically you're giving them the power toe. Make sure that a big bill hits you know, your corporate accounting office at the end of the billing cycle, um so there have to be controls and those controls exist in this, you know, in this. >>Yeah, And there's kind of two flavors of that. One is kind of the day one that you're doing the deployment you mentioned the seed servers, you know, And then it creates a bastion server, and then it creates, you know, the management cluster and so forth, you know, and how all those permissions air handled. And then once the system is running, you know, then you have full access to going into key cloak, which is a very powerful open source identity management tool on you have dozens of, you know, granular permissions that you can give to an individual user that gives them permission to do certain things and not others within the context of kubernetes eso. It's really well thought out. And the defaults, you know, our 80% right. You know, there's very few people are gonna have to go in and sort of change those defaults. You mentioned the corporate directory. You know, hooks right upto l bap or active directory can suck everybody down. So there's no kind of work from a day. One perspective of having to go add. You know everybody that you can think of different teams and groupings of of people. Uh, you know, that's kind of all given from the three interface to the corporate directory. And so it just makes kind of managing the users and and controlling who can do what? Uh, really easy. And, you know, you know, day one day two it's really almost like our one hour to write because it's just all the defaults were really well thought out. You can deploy, you know, very powerful doctor and price container cloud, you know, within an hour, and then you could just start using it. And you know, you can create users if you want. You can use the default users. That air set up a time goes on, you can fine tune that, and it's a really, really nice model again for the whole frictionless democratization of giving developers the ability to go in and get it out of, you know, kind of their way and doing what they want to do. And I t is happy to do that because they don't like dozens of tickets and saying, you know, create a cluster for this team created cluster for that team. You know, here's the size of these guys. Want to resize when you know let's move all that into a self service model and really fulfill the prophecy of, you know, speeding up application development. >>It strikes me is extremely ironic that one of the things that public cloud providers bless them, uh, have always claimed, is that their products provide this democratization when in the experience, I think my own experience and the experience of most of the AWS developers, for example, not toe you know, name names, um, that I've encountered is that an initial experience of trying to start start a virtual machine and figuring out how to log into it? A. W s could take the better part of an afternoon. It's just it's not familiar once you have it in your fingers. Boom. Two seconds, right. But, wow, that learning curve is steep and precipitous, and you slip back and you make stupid mistakes your first couple 1000 times through the loop. Um, by letting people skip that and letting them skip it potentially on multiple providers, in a sense, I would think products like this are actually doing the public cloud industry is, you know, a real surface Hide as much of that as you can without without taking the power away. Because ultimately people want, you know, to control their destiny. They want choice for a reason. Um, and and they want access to the infinite services And, uh, and, uh, innovation that AWS and Azure and Google are all doing on their platforms. >>Yeah, you know, and they're solving, uh, very broad problems in the public clouds, you know, here were saying, you know, this is a world of containers, right? This is a world of orchestration of these containers. And why should I have to worry about the underlying infrastructure, whether it's a virtual machine or bare metal? You know, I shouldn't care if I'm an application developer developing some database application. You know, the last thing I wanna worry about is how do I go in and create a virtual machine? Oh, this is running. And Google. It's totally different than the one I was creating. An AWS I can't find. You know where I get the I P address in Google. It's not like it was an eight of us, you know, and you have to relearn the whole thing. And that's really not what your job is. Anyways, your job is to write data base coat, for example. And what you really want to do is just push a button, deploy a nor kiss traitor, get your app on it and start debugging it and getting it >>to work. Yep. Yeah, it's It's powerful. I've been really excited to work with the product the past week or so, and, uh, I hope that folks will look at the links at the bottoms of our thank you slides and, uh, and, uh, avail themselves of of free trial downloads of both Dr Enterprise Container, Cloud and Lens. Thank you very much for spending this extra time with me. Rick. I I think we've produced some added value here for for attendees. >>Well, thank you, John. I appreciate your help. >>Have a great rest of your session by bike. >>Okay, Thanks. Bye.

Published Date : Sep 16 2020

SUMMARY :

the first thing to kind of look at is, you know, is multi cloud rial. For that talk, I I am John James. And that this is opening up, Uh, you know, investment on interest in pursuing any of the coup bernetti clusters you know, is the same regardless of where it's running, Um, are you actually using operators to do this in the Cooper 90? and and operators and controllers, you know in the way that it was meant to be extended. but it's remarkably unintimidating once you start trying whatever it is that they wanted, you know, and follow the model that makes it work. And, you know, poor little devil, every you know, it's it's going out and it's ssh, Indian Terminals and it's pretending Is that you know, the complexity isn't not necessarily and just creating a virtual machine because all of these companies Make sure that a big bill hits you know, your corporate accounting office at the And the defaults, you know, our 80% right. I would think products like this are actually doing the public cloud industry is, you know, a real surface you know, and you have to relearn the whole thing. bottoms of our thank you slides and, uh, and, uh, avail themselves of

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Rick PewPERSON

0.99+

RickPERSON

0.99+

John JamesPERSON

0.99+

JohnPERSON

0.99+

GoogleORGANIZATION

0.99+

Nick ChasePERSON

0.99+

AWSORGANIZATION

0.99+

fourQUANTITY

0.99+

86%QUANTITY

0.99+

80%QUANTITY

0.99+

fiveQUANTITY

0.99+

firstQUANTITY

0.99+

MirantORGANIZATION

0.99+

threeQUANTITY

0.99+

Two secondsQUANTITY

0.99+

one hourQUANTITY

0.99+

53%QUANTITY

0.99+

33%QUANTITY

0.99+

2020DATE

0.99+

each providerQUANTITY

0.99+

secondQUANTITY

0.99+

TodayDATE

0.99+

third timeQUANTITY

0.99+

AzizPERSON

0.98+

ThioPERSON

0.98+

bothQUANTITY

0.98+

oneQUANTITY

0.98+

twoQUANTITY

0.98+

eightQUANTITY

0.97+

OneQUANTITY

0.97+

first thingQUANTITY

0.97+

first couple 1000 timesQUANTITY

0.96+

two flavorsQUANTITY

0.96+

Prem SolutionsORGANIZATION

0.96+

MirandaORGANIZATION

0.96+

single platformsQUANTITY

0.95+

last yearDATE

0.95+

dozens of ticketsQUANTITY

0.95+

dozensQUANTITY

0.94+

past weekDATE

0.93+

a dayQUANTITY

0.93+

KubernetesTITLE

0.92+

CapexORGANIZATION

0.92+

each oneQUANTITY

0.92+

single laptopQUANTITY

0.92+

last eight monthsDATE

0.92+

couple of weeks agoDATE

0.91+

One perspectiveQUANTITY

0.91+

two containersQUANTITY

0.91+

an hourQUANTITY

0.9+

AzureORGANIZATION

0.9+

a monthQUANTITY

0.88+

three interfaceQUANTITY

0.87+

azureORGANIZATION

0.87+

FrancesPERSON

0.87+

dayQUANTITY

0.83+

Dr Enterprise ContainerORGANIZATION

0.82+

PremORGANIZATION

0.82+

RG AORGANIZATION

0.81+

WORGANIZATION

0.8+

Miran TousORGANIZATION

0.79+

Cooper NeteasePERSON

0.78+

Kubernetes A.TITLE

0.77+

CronTITLE

0.76+

Dr Underprice Container CloudORGANIZATION

0.76+

one dayQUANTITY

0.75+

five different cloudsQUANTITY

0.72+

Moran TousPERSON

0.7+

single apQUANTITY

0.68+

Miran TousPERSON

0.67+

Dr EnterpriseORGANIZATION

0.65+

G C P.ORGANIZATION

0.61+

90COMMERCIAL_ITEM

0.61+

weeksQUANTITY

0.61+

LensORGANIZATION

0.61+

5 Things We Are Thinking About for the Future AIOps and Other Things to Watch For


 

>>Well, welcome everybody to our last session of the day. I want to introduce you to Sean O'Meara. Orfield Cto. Hey, Sean. >>Hey, Nick. Good afternoon. It's been a crazy day. It has. It's been a busy run up to today in a busy day with a lot of great things going on. You know, we've heard from Adrian on his strategy this morning. The great way the Marantz is moving forward. We announced our new product line. You know, we spoke about the new doctor Enterprise Container Cloud line, New future for Mirant. Us. We had a great lineup of customers share their story. We introduced lanes following on the lanes launch a couple of weeks ago. Andi, we're introducing new great projects like our mosque project. New way to deliver open stack going into the future on then in parallel sel. This We ran a great tutorial tracker teachers you all about how to use these new products, and hopefully you'll go and everyone had opportunity to go and look through guys. Yeah. What's next? What is next? Yeah, lots going on. A lot of new things that we're thinking about for the future. Obviously, a lot of work to do on what we have right now. A lot of great things coming. But, you know, we've had this opportunity to talk about all these cool things that are coming down the road. And everybody these days seems to be talking about topics like edge computing or hybrid cloud. Or, you know, hyper scale data centers, even things like disaster recovery is a service. Andi, you know, we talk a lot about things like hyper converged, but frankly, it's boring. It's one thing a little. Good morning. Uh, you know, you and I have been talking about these topics for a while now, and I think it's about time when we spoke about some of the cool things that we're thinking about for the future, not necessarily looking out for the road map, but ideas for the future. Things that may could have an impact on the way we do business going to. So today we're gonna talk a little bit about things like pervasive computing. A nick, what is pervasive computing. >>Well, basically pervasive computing is when everything that you interact with, for the most part, is computerized. So in some ways, we're already there in that You know your phone is a computer. Your refrigerator may have a computer in it. Um, your smart watch your car has a computer in it. And the the most obvious sign of that is this whole Internet of things where, you know your vacuum is, uh is connected to your phone and all of that. And so pervasive computing is this, uh is this sense of you don't even really think about it. You just kind of assume that everything is computerized. >>So how is that different from ubiquitous computing? >>Oh, God. You hit, You hit my hot button. Okay, so if you look, there are a lot of places that will say that pervasive computing and ubiquitous computing are the same thing, but not the same thing. Don't use them interchangeably. They're not the same thing. You big. What is computing is where you can do your computing virtually anywhere. So, for example, you know, I've got, uh I've got a document. I started it on my laptop. I can then go and finish it sitting on the beach on my phone. Or, you know, I can go and do it in a coffee shop or a library. or wherever. So the idea of ubiquitous computing is similar in that, yes, there's computing everywhere, but it's more about your data being universally accessible. So essentially it is cloud computing. That is what this whole ubiquitous computing thing is about. >>Okay on that then differs from pervasive computing in the fact that pervasive is the devices that we have all around us versus the access to those devices. >>Exactly. It's it's really it's more about the data. So ubiquitous computing is more about My data is stored in some central place, and I could hit it from anywhere. There is a device, whereas pervasive computing is there is a device almost everywhere. Okay, so yeah, >>So why Why do we as Moran takes care about the vice of computing? >>Well, pervasive computing brings up a whole lot of new issues, and it's coming up really fast. I mean, you last night I was watching, you know, commercial where you know, somebody a woman's coming out and starting her car with her phone. Um, which sounds really cool. Um, but you know what they say Anything that you can access, you know, with your computer is hackable. So, you know, there are security issues that need to be considered when it comes to all of this, but that's that's the downside. But there's just this huge upside on pervasive computing that it's so exciting when you think about this. I mean, think about a world where remember I said your refrigerator might be attached to the network. Well, what if you could rent out space on your refrigerator to somebody someplace else in a secure way? Of course. You know what? If you could define your personal network as all of these devices that you own and it doesn't matter where your workloads run or, you know, you could define all of this stuff in such a way that the connectivity between objects is really huge. Um, so you know, I mean, you look at things like, you know, I f t t you know, it's like get a notification when the International space station passes over your house. Okay? I don't know why I would need that. Um, but it's the kind of thing that people >>would have a nine year old. You can run him outside and show Z. Oh, >>there you go. There you go. So I mean, that kind of level of connectivity between objects is really really it gives us this new level off. Uh, this new level of functionality that we would never even considered even 10 years ago. Um, it also extends the life of objects that we already have. So, you know, maybe you've got that, uh, that computerized vacuum cleaner, and you don't like the way that it you don't like the pattern that uses in your house. So you re program it or, uh Or I watched. I watched a guy decide that he didn't want to buy multiple vacuums for his house. So he programmed his programa will act Hume to fly between floors. It was actually pretty funny. Um, I it's some people just have too much time. >>It's driving the whole world of programmable at all levels. Really? Like the projects coming out of the car industry of creating a programmable car would fit into that category. Then, I >>suppose absolutely, absolutely needs developer tool kits. Um, that make it possible for anybody to re program these devices that you never would have thought of reprogramming before. So it's important. So do >>we want to talk about the questions. We would love people to give us some feedback on at this stage. >>I would love to talk about these questions. So what we did is we put together, uh, we put together a place for you to answer questions. If you're not watching this live. If you're watching this live, please go ahead. Drop your ideas in the chat. We would love to discuss them, you know. Do you want to see more of this? Or does it? Conversely, Does it scare you, Sean? You What? >>What do you >>think about these questions? >>Well, I mean, for me, the idea of the connected world at one level, the engineering me loves the idea. Another level. It comes to these questions of privacy. Vegas questions off. How do I control this going into the future? What prevents somebody from taking over my flying vacuum cleaner? I'm using it, you know? So it's an interesting question. I think there's a lot of cool, cool ideas. Yeah, and a lot of work to be done. I really want to hear other people's ideas as well and see how we can take this into the future. >>Definitely, definitely. I mean, look I mean, we're joking about it, but, you know, when somebody hacks into your grandmother's insulin pump, maybe not so funny. >>Yeah, a very real risk. >>A very real risk. A very real risk. But yeah, I mean, we'd love Thio. We'd love to hear how you'd like to see this used. So that's that's my That's kind of what I've been thinking about thes days. Um, but, you know, Sean, uh, now, you I know you are really concerned about this whole issue of developers and how they feel about infrastructure. So I would love to hear what you've got to say on that. >>Yeah, I'd like to sex, but a bit about that. You know, we we've done a lot of work over the last few years looking at how developing our history has been very focused on operations, but without big drive towards supporting developers providing better infrastructure for developers. One of the interesting things that keeps coming up to the four on Do you know, the way the world is changing is that big question is, do developers actually give a damn about infrastructure in any way, shape or form? Um, you know, ultimately more and more development languages and tools abstract that underlying infrastructure. What communities does is basically abstract. The infrastructure away, Um, mawr and more options. They're coming to market, which you can quite literally creating application without out of a writing a line of code. Um, so this morning, way Dio, we're doing it all the time, sometimes without even realizing it on. I think the definition of what a developer is is also changing to a certain extent. So you know the big question, which I have on which I'd like to understand Maureen, from talking to low developers is due. Developers care about infra What is it that you expect from infrastructure? What do they want going into the future? How are they going to interact with that infrastructure? I My personal opinion is that they don't really care about infrastructure, that they're going to find more ways to completely abstract away from that. And they just want to focus on delivering applications faster and getting value to market. But I might be wrong, and I'd really like to hear people's impact ideas and thoughts on that >>on. And that's exactly and that's why we're asking this question. Developers out there. Do you care? Or do you just want the whole thing completely abstracted away from you >>on? If you do care why, If you don't, what would you like to see? Another. It's a couple of questions to ask, but really like to hear those opinions on bond. You know, Do you just want the operations guys to live with it? You never want to hear about it again, just fine. It's actually good to say that we'll work it out. >>Yes, and that there's nothing. There's nothing wrong with pushing that up stack >>pretty much what we're trying to do here. >>Well, it is what we're trying to do. But at the same time, we want to do what's good for developers. And if you developers or like No, don't don't do that. Well, we want to know because, you know, we don't wanna work away here and some ivory tower and wind up with something that's not good for >>you after school. So cool. So, yeah, there are some other interesting things we're talking about. >>I know, I know. This is This is one of my favorites. This is one of my favorites. >>Zoo this? Yes. While >>we're on the subject of not getting involved with the infrastructure. Go ahead, Sean. Tell us about it. >>Thing is a pet topic of mine and something that that we've spoken about a lot. And thanks something that we we have spent many nights talking about. The idea is AI ops using artificial intelligence to drive operations within our infrastructure. And so a lot of people ask me, You know why? Um, essentially, What the hell is a I out on? I have answered this question many times, and it does often seem that we all take this AI ops thing for granted or look at it in a different way. To me, it is essentially, it's it's automation on steroids. That's what it boils down Thio. It's using intelligence systems that to replace the human cerebellum. I mean, let's just be blunt about this. We're trying to replace humans. Onda reason for that is we humans less meat sacks are airplane. We make mistakes all the time and compared to computers were incredibly slow. Um, you know, that's really the simplest point with the scale of modern infrastructure that we're dealing with the sheer volume. I mean, we've gone from, you know, thousands to tens of thousands of the EMS to now hundreds and thousands of containers spread across multiple time zones. Multiple places. We need to come up with better ways of managing this on the old fashioned stick through mechanism of automation. It's just too limited for that. Right >>when we say we want to replace meat sacks, we mean in a good way. >>We mean in a good way. I know it's a bit of a harsh way of putting it. Um, ultimately, humans have ability for creativity that machines just don't have. But machines can do other things, and they could do analysis of data a lot faster than we can. Quite often, we have to present that data to humans to have invalidate that information. But, you know, one of the options for us is to use artificial intelligence, quantified data, um, correlated, you know, look for root cause and then provide that information to us in such a way that we can make valid decisions based on that information a lot faster than we could otherwise, >>right? So what are the what are the implications? What are the practical implications of doing this so >>practically we can analyze massive amounts of data a lot faster than a single human. Could we even just a normal type system that's searching? We We have the tools to learn by looking at data and have machines do it a lot faster than we can. We can take action faster based on that data, because we get the data foster. We can take action and much more complex action that involves maybe many different layers of tasking much, much faster. Um, on we could start to do maintenance operations and maintenance tasks without having to wait for human beings to wake up or get to an office. But more importantly, we could start making tasks happen very complex tasks in a very specific orders, with much less potential for error. And those are the kinds of areas we're looking at. >>That's that's true. So how do you kind of see this moving forward? I mean, obviously, we're not gonna go from nothing to Skynet, and hopefully we never get to Skynet. Well, >>depends if you are in control of Skynet or not. Ultimately, Dionysus little computer. Um, practically speaking, we have a few things Thio hoops to jump through our suppose before we can look at where else is going to be really effective on the first one is a trust issue. We have to learn to trust it. And to do that, we have to put in a position where it can learn and start providing us that data analysis on that inference and then having humans validated. That's practically the very first step. No, it's a trust issue. You know, we've seen been watching sci fi for the last 30 years. Class on. Do you know the computers take over? Well, ultimately, is that real or not? Um, if we look at how we gonna get there? Probably midterm. Adaptive maintenance, maybe infrastructure orchestration. Smart allocation of resource is across cloud services. Well, >>we can talk for a minute About what that would would actually look like. So, I mean, we could talk about, you know, abs, midterms. I mean, in a practical sense, how would that actually work? >>Yeah, Okay. It's a great question. So, practically speaking, the first thing we're gonna do is we're going to start to collect all this data. We're gonna find all this data. I mean, the modern computer systems that we have infrastructure systems. We are producing many hundreds of gigabytes, sometimes terabytes of logging data every day. The majority of it gives far 13. I mean, we throw the majority of their logging information away or if it's not thrown away, it's stored some way for security purposes and never analyzed. So let's start by taking their data and actually analyzing it. To do that, we have laid and correlated, >>so we >>gotta put it all together. We've got a match it and we've got to start building patents. We're going to start looking for the patterns. This is where I is particularly good at starting to help us. Bold patterns start to look for those patterns. Initially, humans will have to do some training. Um, once we have that patent, once we've got that working, we can now start having the AI systems start to do some affairs. E, here's the recalls. So we the system can tell us based on the data based on the patterns we've been learning. We know from the past debt. If those three network links get full bad example, we're gonna have a failure in Region X, right. So start telling us while those network links of filling up tell us before they fall rather than after their full always they're falling up as we see trending information now seems like a simple I could do trending information with just normal monitoring systems. But if I can start to correlate that with greater users in, you know, Beijing Office versus Users in California office filling up those links and different times of the day, I can now start to make much more clever decisions, which is a human on its own, to try and correlate that information, which is be insane once you've done that way to go to the next stage, which is not to have the system act do actions for us. Based on that information right now, we're starting to get close to the scan it. Speaking of this doesn't have to be a big, complex pile of change. Smart ai solution. I have data on that AI solution is talking to my existing automation solutions to action. That change. That's how I see this moving forward, >>right? So essentially you, instead of saying, you know, deploy this too. Uh, this workload to AWS, you would say deploy this. Yeah, And then the system would look and go. Okay, It's this kind of workload. At this time of day at this size, it's gonna interact with this and this and this. And so it's gonna be best off in this region of this cloud provider on then. Uh, you know, two days from now, when the prices drop, we're gonna put it over there, >>even taking a different different. Spoken exactly that it could be. The Beijing office is coming online. Let's move the majority of the workload to a cloud that's closer to them. Reducing the network bandwidth. Yeah, and inference. Andi Also reducing the impact on international lines as Beijing winds down for the day, I can just move the majority of the workload into California on board Europe. In between, it's very simple examples, but have humans do that would be very complex and very time consuming >>exactly. And end. Just having humans notice those patterns would be difficult. But once you have the system noticing those patterns, then the humans could start to think, How can I take advantage of this, you know, So as you are talking about much longer term in the actual applicant patients themselves. So you know, everything can be optimized that way so >>everybody may optimized way can optimize down to the way we even potentially write applications in the future. Humans were still deciding the base logic. Humans were still deciding the creative components of that. Right as we as we build things, we can start to optimize them, breaking down into smaller and smaller units that are much more specific. But the complexity goes up. When we do that right. I want to use AI and AI solutions to start to manage that complexity across multiple spaces. Multiple time zones, etcetera. >>Exactly. Exactly. So. So that's the question, you know. What do you guys think? You know, we really want to know >>on Dhere again. You know, we mentioned this around the beginning, but do you think you could trust in a iob sedition? What would it take for you to trust in our absolution? And where do you practically see it being used in the short term? >>Yeah, that's that's the big question is where do you see it being used? Where would you like it to be used, you know? Is there something that you don't think would be possible, but you would like to see it, you know. But the main thing is, on a practical matter, what would you like to see? >>Let me ask. The question is like a different way. Do you have a problem that we could solve within a isolation today? E, They're really well >>right. A re a world problem. And And assuming that, you know, we are not gonna, you know, take over the world. >>Yeah. Important. My evil plan is to take over the world with >>man. I'm so sorry. First >>had to let that draw. >>I did. I did. I'm so sorry. Okay, Alright. So that's so That's a I ops. And we like I said, if you're watching this live, throw in the chat. We want to hear your ideas. If your, um if you're doing this, if you're watching this on the replay, go to the survey because we way, we really want to hear your ideas and your opinions. All right, So moving right along. All right. What the heck are you know, kernels? >>Uh, lovely questions. So, you know, the whole world is talking about containers today way we're talking about containers today. But containers like VMS or just one way to handle compute Andi. They're more and more ideas that are out there today, and people have been trying different ways off, shrinking the size of the compute environment. COMPUTER Paxil Another cool way of looking at this and saying That's been around for a little while. But it's getting your attraction to learn to sing called unique kernels, and what they are is they're basically highly optimized. Execute a bles that include the operating system, Um, there on OS settle libraries, um, and some very simplified application code all mixed into a very, very tiny package. Easiest way to describe them. They're super simplified. And I were talking about in the eye ops discussion this idea off taking everything into smaller and smaller individual functions but creating a certain level of complexity. Well, if we look at uni kernels, those are those smaller and smaller bundles and functions. They interact directly with the hardware or through a hyper visor. Um, so actually, no overhead. I mean the overhead If you just look at what a modern you clinics operating system is made up of these days, there are so many different parts and components. Even just the colonel has got anything from, you know, 5 to 7 different parts to it. Plus, of course, drivers and a boot loader. Then we look at the system libraries that set on top of that, you know? And then they're demons and utilities and shells and scream components and, you know, additional colonel stacks that go on top of that for hyper visors. What we're trying to say is, what, This text of space, I'm >>getting tired. Just listening, >>Thio. I'm tired talking about it. You know that the unique colonel, really, it just takes over their complexity. It puts the application the OS on the basic libraries necessary. That application in tow, one really tiny package. Um, yeah. Give you an idea what we're talking about here. We're talking about memory footprints or time package footprints in the kilobytes. You know, a small container is considered 100 make plus, we're talking kilobytes. We're talking memory utilization in the kilobyte two megabytes space because there's no no fact, no fluff, no unnecessary components. And then only the CPU that it needs. >>So Bill Gates was right 6. 40 k is all anybody will ever need >>Potentially. Yeah, right. E, there was there was an IBM CEO who said even less at some point. So we'll see >>how that go. What goes around comes around. >>But one of the really interesting things about this small size, which is really critical, is how fast they can boot. Yeah, we're talking boot times measured in 30 seconds. Wow, We're talking the ability to spin up specific functions only when you need them. Now, if we look at the knock on effect of that, we're looking at power saving. Who knew? Run the app when I need it because there's no Leighton. See to start it up. The app is tiny so I can pack a lot mawr into a lot less space game power seconds. But when I start looking at where you were talking about earlier, which the basic compute idea in the world all of a sudden that tiny little arm chips it in my raspberry pi that's running my fridge, My raspberry pi equivalent that's running my fridge no longer has a fact operating system around it. I can run tens thousands, potentially off these very tiny specific devices when I need them. Wow, I'm kind of excited about it. I'm excited by the idea. You >>can hear that >>I'm a hardware geek from from many, many moons ago on DSO. I kind of like the idea of being able to better utilize along this very low powered hardware that we have lying around and really take it into the future. Well, that's good. Yeah. So I'm not going to kill, not going to kill containers. But it is a parallel technology that I'm very interested in >>that that is true. Now what does it I mean in terms of, like, attack surface. That means it's got a much smaller attack surface, though, right? >>Yeah. Great. Great point. I mean, there's no there's no fluff. There's no extra components in the system. Therefore, the attack surface is very, very small. Um, you know, and because they're so small and can be distributed much, much faster and much more easily updating and upgrading them as much easier way can we can upgrade a 60 k b file across a GPRS connection on which I certainly can't do with 600 make, uh, four gig VM 600 made container. You know, just unrealistic. Um, e >>I was just going to say so. So now these. You know, kernels, they're they're so small. And they have on Lee what they absolutely need. Now, how do you access the hardware? >>So the hardware is accessed via hyper visor. So you have to have some kind of hyper visor running on top of the hard way. But because Because we need very little from their type adviser, we don't actually need to interact with that very much. It could be a very cut down operating system. Very, very simplified operating system. We're also not trying to run another layer on top of that. We're not We're not ending up with multiple potential VMS or something underneath it were completely removed. That layer, um, the the drivers, the necessary drivers are built into that particular colonel device. >>Oh, okay. That makes sense. >>Tiny footprint easily distributed, um, and once again, very specialized, >>right? Right. Well, that makes sense. Okay. So, yeah, I mean, I guess so. These these individual stacks, you know, comparing virtual machines to containers to unit colonels, there just a completely different architecture. But I can see how that would How That would work where you have the hi perverse. A little hyper buys are on top of rented teeth. OK, so moving right along certain. Where do we see these being used? >>Um, it's early days, although there are some very good practical applications out there. There's a big, big ecosystem of people trying different ways for this I o ts off the obvious immediate place. I i o t s a quick, easy place for something very specialized. Um, what's interesting to me? And you mentioned this earlier. You know, we're talking about medical devices. We're talking about potentially disposable medical devices. Now, if I can keep those devices to run on really low power very, very cheap, um, CPUs and all of a sudden I've got a device that is available to a lot more people. I don't need a massive, powerful CPU. I just need saying that runs a very specific function really fast, A very small scale. I could do well disposable devices. I can build medical devices that are so small we can potentially swallow them and other areas which are really interesting. And I spoke a little bit about it, but it's energy efficiency. Where We need to be very, very energy efficient. No. And that can also impact on massively scalable systems where I want to deal with tens of thousands of potential transactions from users going into a system. I can spin them up only when I need them. I don't need to keep them running all the time again. It comes back to that low latency on then. Anyway, that an incredibly fast food time is valuable. Um, a car, you know, Think about it. If if my if my electric car is constantly draining that battery when it's parked in the garage and I'm traveling or if it takes 20 minutes from my car to boot up its clinics. Colonel, when I wanted, I'm going to get very irritated. Well, >>that and if you have a specific function, you know, like, identify that thing, Yeah, it would be good if you haven't smashed into it before. Identified it as a baby carriage e dark today. Yes. >>So, Nick, you know, these is all really interesting topics. Um, yeah. We spoke about air ops. We spoke about the impact is gonna have on humans. Um, all of these changes to the world that we're living in from computer systems, the impact it's having on our lives biggest. An interesting question about the ethics of all of this >>ethics of all of this. Yes, because let's be let's be realistic. There are actual riel concerns when it comes to privacy, when it comes to how corporations operate, when it comes to how governments operate. Um, there are areas of the world's where, how all of this has has moved, it's absolutely I'll be honest, absolutely terrifying the economic disparity. Um, but when you really come right down to it, um, it's all about the human control over the technology because all of these ethical issues are are in our hands. Okay, we could joke about Sky Net. We can joke about things like that, but this is one place that technology can't help us. We have to do this. We have to be aware of what's going on. We have to be aware. Are they using facial recognition? Uh, you know, when you go to X y Z, are they using recidivism algorithms in sentencing? And how is that? How is that going? Is it? Are those algorithms fair? Certain groups get longer sentences because historical data, uh, is skewed. Be educated. Know how this works? Don't be afraid of any of this. None of this is, uh, none of this is rocket science. Really? Come right down to it. I mean, it's it's not simple, but you can learn this. You can do it. >>Ask good questions. Be interested to be part of the part of the discussion. Not just a passive bystander. >>Exactly. Don't just complain about what you think is going on. Learn about what is actually going on and be active, where you see something that needs to be fixed. So that's what that's what we can do about it. We need to be aware that there's an issue or potential issues, and we need to step in and fix it. So that z myself box, I'll step down zone >>important topic. And it's one that we all can have influence on on bits one. Those who are us who are actually involved in building these systems for the future. We can help make sure that the rules are there. That's right. Systems are built correctly on that. We have open dialogues and discussions around these points and topics and on going away, was she? I think we're coming to the end of the time on hopefully we've kept everybody interested in some of the things that we think are cool for the future. And we're putting our efforts into E O. But I think we need to wrap this up now. So, Nick, great chatting to you is always >>always, always a pleasure, Sean. >>It's been an amazing week. Um, been amazing. Couple of weeks, everybody leading up to this event on bond. No, thank you, everybody for listening to us. Please go and download and try. Dr. Enterprise, Uh, the container card is available. Will post the links here to better understand what we've been doing. Go and have a look through the tutorial track. You'll hear my voice. I'm sure you'll hear next voice and make other people's voices through those tutorials. Hopefully, we keep you all interested and then going download and try lens, Please. Finally, we want your feedback. We're interested to hear what you think would be the great ideas. Good, Bad. Otherwise let us know what you think about products. We are striving to make them better all the time. >>Absolutely. And we want your involvement. Was it all right? Thank you all. Bye bye. Yeah,

Published Date : Sep 15 2020

SUMMARY :

I want to introduce you to Uh, you know, you and I have been talking about these topics for a while now, of that is this whole Internet of things where, you know your vacuum What is computing is where you can do your computing virtually that we have all around us versus the access to those devices. It's it's really it's more about the data. on pervasive computing that it's so exciting when you think about this. You can run him outside and show Z. Um, it also extends the life of objects that we already have. Like the projects coming out of the car industry of creating a programmable car would to re program these devices that you never would have thought of reprogramming we want to talk about the questions. put together, uh, we put together a place for you to answer questions. I'm using it, you know? you know, when somebody hacks into your grandmother's insulin pump, maybe not so funny. Um, but, you know, Sean, uh, now, you I know you are really the four on Do you know, the way the world is changing is that big question is, Or do you just want the whole thing completely abstracted what would you like to see? Yes, and that there's nothing. Well, we want to know because, you know, we don't wanna work away here and some you after school. I know, I know. we're on the subject of not getting involved with the infrastructure. I mean, we've gone from, you know, thousands to you know, look for root cause and then provide that information to us in such a way that we can make valid We can take action faster based on that data, because we get the data foster. So how do you kind of see this moving And to do that, we have to put in a position where it can learn and start providing So, I mean, we could talk about, you know, abs, midterms. the modern computer systems that we have infrastructure systems. I have data on that AI solution is talking to my existing Uh, you know, two days from now, Let's move the majority of the workload to a cloud that's closer to them. you know, So as you are talking about much longer term in the actual applicant patients But the complexity goes up. What do you guys think? You know, we mentioned this around the beginning, but do you think you could Yeah, that's that's the big question is where do you see it being used? Do you have a problem that we could solve And And assuming that, you know, we are not My evil plan is to take over the world with I'm so sorry. What the heck are you know, kernels? Even just the colonel has got anything from, you know, 5 to 7 getting tired. that the unique colonel, really, it just takes over their complexity. So we'll see how that go. to spin up specific functions only when you need them. I kind of like the idea of being able to better utilize along this very low powered hardware that we have lying around and that that is true. you know, and because they're so small and can be distributed much, much faster and much more easily updating and upgrading Now, how do you access the So you have to have some kind That makes sense. But I can see how that would How That would work where you have I can build medical devices that are so small we can potentially swallow them and like, identify that thing, Yeah, it would be good if you So, Nick, you know, these is all really interesting topics. Um, but when you really come right down to it, um, it's all about Be interested to be part of the part of the Don't just complain about what you think is going on. Nick, great chatting to you is always We're interested to hear what you think would be the great ideas. Thank you all.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Sean O'MearaPERSON

0.99+

AdrianPERSON

0.99+

20 minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

IBMORGANIZATION

0.99+

MaureenPERSON

0.99+

SeanPERSON

0.99+

NickPERSON

0.99+

Bill GatesPERSON

0.99+

5QUANTITY

0.99+

thousandsQUANTITY

0.99+

CaliforniaLOCATION

0.99+

AWSORGANIZATION

0.99+

Sky NetORGANIZATION

0.99+

30 secondsQUANTITY

0.99+

60 k bQUANTITY

0.99+

first stepQUANTITY

0.99+

tens thousandsQUANTITY

0.99+

ThioPERSON

0.99+

FirstQUANTITY

0.99+

todayDATE

0.99+

two megabytesQUANTITY

0.99+

fourQUANTITY

0.98+

MarantzORGANIZATION

0.98+

BeijingLOCATION

0.98+

MoranPERSON

0.98+

oneQUANTITY

0.98+

OneQUANTITY

0.98+

6. 40 kQUANTITY

0.98+

tens of thousandsQUANTITY

0.98+

EuropeLOCATION

0.97+

600COMMERCIAL_ITEM

0.97+

hundreds of gigabytesQUANTITY

0.97+

MirantORGANIZATION

0.97+

13QUANTITY

0.96+

Beijing OfficeORGANIZATION

0.96+

DioPERSON

0.96+

10 years agoDATE

0.95+

SkynetORGANIZATION

0.93+

one levelQUANTITY

0.93+

LeightonORGANIZATION

0.93+

VegasLOCATION

0.93+

one of my favoritesQUANTITY

0.92+

Couple of weeksQUANTITY

0.92+

this morningDATE

0.92+

first oneQUANTITY

0.91+

terabytesQUANTITY

0.91+

AndiPERSON

0.91+

one wayQUANTITY

0.91+

nine year oldQUANTITY

0.9+

VM 600COMMERCIAL_ITEM

0.88+

two daysQUANTITY

0.88+

5 ThingsQUANTITY

0.87+

7 different partsQUANTITY

0.87+

couple of weeks agoDATE

0.86+

Orfield CtoPERSON

0.85+

last 30 yearsDATE

0.84+

DionysusPERSON

0.84+

last nightDATE

0.82+

100 make plusQUANTITY

0.81+

raspberry piCOMMERCIAL_ITEM

0.81+

many moons agoDATE

0.81+

one thingQUANTITY

0.8+

thousands of containersQUANTITY

0.79+

DhereORGANIZATION

0.79+

EnterprisePERSON

0.78+

single humanQUANTITY

0.78+

lastDATE

0.73+

EnterpriseORGANIZATION

0.72+

first thingQUANTITY

0.72+

three network linksQUANTITY

0.69+

yearsDATE

0.63+

DSOORGANIZATION

0.61+

SEAGATE AI FINAL


 

>>C G technology is focused on data where we have long believed that data is in our DNA. We help maximize humanity's potential by delivering world class, precision engineered data solutions developed through sustainable and profitable partnerships. Included in our offerings are hard disk drives. As I'm sure many of you know, ah, hard drive consists of a slider also known as a drive head or transducer attached to a head gimbal assembly. I had stack assembly made up of multiple head gimbal assemblies and a drive enclosure with one or more platters, or just that the head stacked assembles into. And while the concept hasn't changed, hard drive technology has progressed well beyond the initial five megabytes, 500 quarter inch drives that Seagate first produced. And, I think 1983. We have just announced in 18 terabytes 3.5 inch drive with nine flatters on a single head stack assembly with dual head stack assemblies this calendar year, the complexity of these drives further than need to incorporate Edge analytics at operation sites, so G Edward stemming established the concept of continual improvement and everything that we do, especially in product development and operations and at the end of World War Two, he embarked on a mission with support from the US government to help Japan recover from its four time losses. He established the concept of continual improvement and statistical process control to the leaders of prominent organizations within Japan. And because of this, he was honored by the Japanese emperor with the second order of the sacred treasure for his teachings, the only non Japanese to receive this honor in hundreds of years. Japan's quality control is now world famous, as many of you may know, and based on my own experience and product development, it is clear that they made a major impact on Japan's recovery after the war at Sea Gate. The work that we've been doing and adopting new technologies has been our mantra at continual improvement. As part of this effort, we embarked on the adoption of new technologies in our global operations, which includes establishing machine learning and artificial intelligence at the edge and in doing so, continue to adopt our technical capabilities within data science and data engineering. >>So I'm a principal engineer and member of the Operations and Technology Advanced Analytics Group. We are a service organization for those organizations who need to make sense of the data that they have and in doing so, perhaps introduce a different way to create an analyzed new data. Making sense of the data that organizations have is a key aspect of the work that data scientist and engineers do. So I'm a project manager for an initiative adopting artificial intelligence methodologies for C Gate manufacturing, which is the reason why I'm talking to you today. I thought I'd start by first talking about what we do at Sea Gate and follow that with a brief on artificial intelligence and its role in manufacturing. And I'd like them to discuss how AI and machine Learning is being used at Sea Gate in developing Edge analytics, where Dr Enterprise and Cooper Netease automates deployment, scaling and management of container raised applications. So finally, I like to discuss where we are headed with this initiative and where Mirant is has a major role in case some of you are not conversant in machine learning, artificial intelligence and difference outside some definitions. To cite one source, machine learning is the scientific study of algorithms and statistical bottles without computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference Instead, thus, being seen as a subset of narrow artificial intelligence were analytics and decision making take place. The intent of machine learning is to use basic algorithms to perform different functions, such as classify images to type classified emails into spam and not spam, and predict weather. The idea and this is where the concept of narrow artificial intelligence comes in, is to make decisions of a preset type basically let a machine learn from itself. These types of machine learning includes supervised learning, unsupervised learning and reinforcement learning and in supervised learning. The system learns from previous examples that are provided, such as images of dogs that are labeled by type in unsupervised learning. The algorithms are left to themselves to find answers. For example, a Siris of images of dogs can be used to group them into categories by association that's color, length of coat, length of snout and so on. So in the last slide, I mentioned narrow a I a few times, and to explain it is common to describe in terms of two categories general and narrow or weak. So Many of us were first exposed to General Ai in popular science fiction movies like 2000 and One, A Space Odyssey and Terminator General Ai is a I that can successfully perform any intellectual task that a human can. And if you ask you Lawn Musk or Stephen Hawking, this is how they view the future with General Ai. If we're not careful on how it is implemented, so most of us hope that is more like this is friendly and helpful. Um, like Wally. The reality is that machines today are not only capable of weak or narrow, a I AI that is focused on a narrow, specific task like understanding, speech or finding objects and images. Alexa and Google Home are becoming very popular, and they can be found in many homes. Their narrow task is to recognize human speech and answer limited questions or perform simple tasks like raising the temperature in your home or ordering a pizza as long as you have already defined the order. Narrow. AI is also very useful for recognizing objects in images and even counting people as they go in and out of stores. As you can see in this example, so artificial intelligence supplies, machine learning analytics inference and other techniques which can be used to solve actual problems. The two examples here particle detection, an image anomaly detection have the potential to adopt edge analytics during the manufacturing process. Ah, common problem in clean rooms is spikes in particle count from particle detectors. With this application, we can provide context to particle events by monitoring the area around the machine and detecting when foreign objects like gloves enter areas where they should not. Image Anomaly detection historically has been accomplished at sea gate by operators in clean rooms, viewing each image one at a time for anomalies, creating models of various anomalies through machine learning. Methodologies can be used to run comparative analyses in a production environment where outliers can be detected through influence in an automated real Time analytics scenario. So anomaly detection is also frequently used in machine learning to find patterns or unusual events in our data. How do you know what you don't know? It's really what you ask, and the first step in anomaly detection is to use an algorithm to find patterns or relationships in your data. In this case, we're looking at hundreds of variables and finding relationships between them. We can then look at a subset of variables and determine how they are behaving in relation to each other. We use this baseline to define normal behavior and generate a model of it. In this case, we're building a model with three variables. We can then run this model against new data. Observations that do not fit in the model are defined as anomalies, and anomalies can be good or bad. It takes a subject matter expert to determine how to classify the anomalies on classify classification could be scrapped or okay to use. For example, the subject matter expert is assisting the machine to learn the rules. We then update the model with the classifications anomalies and start running again, and we can see that there are few that generate these models. Now. Secret factories generate hundreds of thousands of images every day. Many of these require human toe, look at them and make a decision. This is dull and steak prone work that is ideal for artificial intelligence. The initiative that I am project managing is intended to offer a solution that matches the continual increased complexity of the products we manufacture and that minimizes the need for manual inspection. The Edge Rx Smart manufacturing reference architecture er, is the initiative both how meat and I are working on and sorry to say that Hamid isn't here today. But as I said, you may have guessed. Our goal is to introduce early defect detection in every stage of our manufacturing process through a machine learning and real time analytics through inference. And in doing so, we will improve overall product quality, enjoy higher yields with lesser defects and produce higher Ma Jin's. Because this was entirely new. We established partnerships with H B within video and with Docker and Amaranthus two years ago to develop the capability that we now have as we deploy edge Rx to our operation sites in four continents from a hardware. Since H P. E. And in video has been an able partner in helping us develop an architecture that we have standardized on and on the software stack side doctor has been instrumental in helping us manage a very complex project with a steep learning curve for all concerned. To further clarify efforts to enable more a i N M l in factories. Theobald active was to determine an economical edge Compute that would access the latest AI NML technology using a standardized platform across all factories. This objective included providing an upgrade path that scales while minimizing disruption to existing factory systems and burden on factory information systems. Resource is the two parts to the compute solution are shown in the diagram, and the gateway device connects to see gates, existing factory information systems, architecture ER and does inference calculations. The second part is a training device for creating and updating models. All factories will need the Gateway device and the Compute Cluster on site, and to this day it remains to be seen if the training devices needed in other locations. But we do know that one devices capable of supporting multiple factories simultaneously there are also options for training on cloud based Resource is the stream storing appliance consists of a kubernetes cluster with GPU and CPU worker notes, as well as master notes and docker trusted registries. The GPU nodes are hardware based using H B E l 4000 edge lines, the balance our virtual machines and for machine learning. We've standardized on both the H B E. Apollo 6500 and the NVIDIA G X one, each with eight in video V 100 GP use. And, incidentally, the same technology enables augmented and virtual reality. Hardware is only one part of the equation. Our software stack consists of Docker Enterprise and Cooper Netease. As I mentioned previously, we've deployed these clusters at all of our operations sites with specific use. Case is planned for each site. Moran Tous has had a major impact on our ability to develop this capability by offering a stable platform in universal control plane that provides us, with the necessary metrics to determine the health of the Kubernetes cluster and the use of Dr Trusted Registry to maintain a secure repository for containers. And they have been an exceptional partner in our efforts to deploy clusters at multiple sites. At this point in our deployment efforts, we are on prem, but we are exploring cloud service options that include Miranda's next generation Docker enterprise offering that includes stack light in conjunction with multi cluster management. And to me, the concept of federation of multi cluster management is a requirement in our case because of the global nature of our business where our operation sites are on four continents. So Stack Light provides the hook of each cluster that banks multi cluster management and effective solution. Open source has been a major part of Project Athena, and there has been a debate about using Dr CE versus Dr Enterprise. And that decision was actually easy, given the advantages that Dr Enterprise would offer, especially during a nearly phase of development. Cooper Netease was a natural addition to the software stack and has been widely accepted. But we have also been a work to adopt such open source as rabbit and to messaging tensorflow and tensor rt, to name three good lab for developments and a number of others. As you see here, is well, and most of our programming programming has been in python. The results of our efforts so far have been excellent. We are seeing a six month return on investment from just one of seven clusters where the hardware and software cost approached close to $1 million. The performance on this cluster is now over three million images processed per day for their adoption has been growing, but the biggest challenge we've seen has been handling a steep learning curve. Installing and maintaining complex Cooper needs clusters in data centers that are not used to managing the unique aspect of clusters like this. And because of this, we have been considering adopting a control plane in the cloud with Kubernetes as the service supported by Miranda's. Even without considering, Kubernetes is a service. The concept of federation or multi cluster management has to be on her road map, especially considering the global nature of our company. Thank you.

Published Date : Sep 15 2020

SUMMARY :

at the end of World War Two, he embarked on a mission with support from the US government to help and the first step in anomaly detection is to use an algorithm to find patterns

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SeagateORGANIZATION

0.99+

hundreds of yearsQUANTITY

0.99+

two partsQUANTITY

0.99+

pythonTITLE

0.99+

six monthQUANTITY

0.99+

World War TwoEVENT

0.99+

C GateORGANIZATION

0.99+

oneQUANTITY

0.99+

Stephen HawkingPERSON

0.99+

Sea GateORGANIZATION

0.99+

JapanLOCATION

0.99+

Lawn MuskPERSON

0.99+

TerminatorTITLE

0.99+

1983DATE

0.99+

one partQUANTITY

0.99+

two examplesQUANTITY

0.99+

A Space OdysseyTITLE

0.99+

five megabytesQUANTITY

0.99+

3.5 inchQUANTITY

0.99+

second partQUANTITY

0.99+

18 terabytesQUANTITY

0.99+

first stepQUANTITY

0.99+

hundredsQUANTITY

0.99+

bothQUANTITY

0.98+

NVIDIAORGANIZATION

0.98+

over three million imagesQUANTITY

0.98+

firstQUANTITY

0.98+

each siteQUANTITY

0.98+

H B E. Apollo 6500COMMERCIAL_ITEM

0.98+

each clusterQUANTITY

0.98+

each imageQUANTITY

0.98+

one sourceQUANTITY

0.98+

todayDATE

0.98+

G X oneCOMMERCIAL_ITEM

0.98+

CooperPERSON

0.98+

second orderQUANTITY

0.98+

JapanORGANIZATION

0.98+

HamidPERSON

0.97+

Dr EnterpriseORGANIZATION

0.97+

Cooper NeteaseORGANIZATION

0.97+

eachQUANTITY

0.97+

OneTITLE

0.97+

TheobaldPERSON

0.97+

nine flattersQUANTITY

0.97+

one devicesQUANTITY

0.96+

SirisTITLE

0.96+

hundreds of thousands of imagesQUANTITY

0.96+

Docker EnterpriseORGANIZATION

0.95+

DockerORGANIZATION

0.95+

seven clustersQUANTITY

0.95+

two years agoDATE

0.95+

US governmentORGANIZATION

0.95+

MirantORGANIZATION

0.95+

Operations and Technology Advanced Analytics GroupORGANIZATION

0.94+

four time lossesQUANTITY

0.94+

WallyPERSON

0.94+

JapaneseOTHER

0.93+

two categoriesQUANTITY

0.93+

H B E l 4000COMMERCIAL_ITEM

0.9+

H BORGANIZATION

0.9+

three variablesQUANTITY

0.9+

General AiTITLE

0.87+

G EdwardPERSON

0.87+

Google HomeCOMMERCIAL_ITEM

0.87+

$1 millionQUANTITY

0.85+

MirandaORGANIZATION

0.85+

Sea GateLOCATION

0.85+

AlexaTITLE

0.85+

500 quarter inch drivesQUANTITY

0.84+

KubernetesTITLE

0.83+

single headQUANTITY

0.83+

eightQUANTITY

0.83+

DrTITLE

0.82+

variablesQUANTITY

0.81+

this calendar yearDATE

0.78+

H P. E.ORGANIZATION

0.78+

2000DATE

0.73+

Project AthenaORGANIZATION

0.72+

Rx SmartCOMMERCIAL_ITEM

0.69+

dualQUANTITY

0.68+

V 100COMMERCIAL_ITEM

0.65+

closeQUANTITY

0.65+

four continentsQUANTITY

0.64+

GPQUANTITY

0.62+

ON DEMAND SWARM ON K8S FINAL NEEDS CTA SLIDE


 

>>welcome to the session. Long live swarm with containers and kubernetes everywhere we have this increasing cloud complexity at the same time that we're facing economic uncertainty and, of course, to navigate this. For most companies, it's a matter of focusing on speed and on shipping and iterating their code faster. Now. For many, Marantz is customers. That means using docker swarm rather than kubernetes to handle container orchestration. We really believe that the best way to increase your speed to production is choice, simplicity and security. So we wanted to bring you a couple of experts to talk about the state of swarm and Docker enterprise and how you can make best use of both of you. So let's get to it. Well, good afternoon or good morning, depending on where you are on and welcome to today's session. Long live swarm. I am Nick Chase. I'm head of content here at Mantis and I would like to introduce you to our two Panelists today eight of Manzini. Why don't you introduce yourself? >>I am a van CNI. I'm a solutions architect here at Moran Tous on work primarily with Docker Enterprise System. I have a long history of working with support team. Um, at what used to be Ah Docker Enterprise, part of Docker Inc. >>Yeah, Okay. Great. And Don Power. >>I, um Yeah, I'm Don Power on the docker. Captain Docker, community leader. Right now I run our Dev Ops team for Citizens Bank out of Nashville, Tennessee, and happy to be here. >>All right, Excellent. So All right, so thank you both for coming. Now, before we say anything else, I want to go ahead and kind of name the elephant in the room. There's been a lot of talk about the >>future. Yeah, that's right. Um, swarm as it stands right now, um, we have, ah, very vested interest in keeping our customers on who want to continue using swarm, functional and keeping swarm a viable alternative or complement to kubernetes. However you see the orchestration war playing out as it were. >>Okay? It's hardly a war at this point, but they do work together, and so that's >>absolutely Yeah, I I definitely consider them more of like, complimentary services, um, using the right tool for the job. Sort of sense. They both have different design goals when they were originally created and set out so I definitely don't see it as a completely one or the other kind of decision and that they could both be used in the same environment and similar clusters to run whatever workload that you have. >>Excellent. And we'll get into the details of all that as we go along. So that's terrific. So I have not really been involved in in the sort of swarm area. So set the stage for us where we kind of start out with all of this. Don I know that you were involved and so guys said, set the stage for us. >>Sure, Um I mean so I've been a heavy user of swarm in my past few roles. Professionally, we've been running containers in production with Swarm for coming up on about four years. Now, Um, in our case, we you know, we looked at what was available at the time, and of course you had. Kubernetes is your biggest contender out there, but like I just mentioned, the one of the things that really led us to swarm is it's design goals were very different than kubernetes. So Kubernetes tries to have an answer for absolutely every scenario where swarm tries to have an answer for, like, the 80% of problems or challenges will say that you might come across 80% of the workloads. Um, I had a better way of saying that, but I think I got my point across >>E Yeah, I think I think you hit the nail on the head. Um, Kubernetes in particular with the way that kubernetes itself is an a P I I believe that kubernetes was, um, you know, written as a toolkit. It wasn't really intended to be used by end users directly. It was really a way to build platforms that run containers. And because it's this really, really extensible ap I you can extend it to manage all sorts of resource is swarm doesn't have that X sensibility aspect, but what it was designed to do, it does very, very well and very easily in a very, very simple sort of way. Um, it's highly opinionated about the way that you should use the product, but it works very effectively. It's very easy to use. It's very low. Um, not low effort, but low. Ah, low barrier to entry. >>Yes. Yes. Absolutely. I was gonna touch on the same thing. It's very easy for someone to come in. Pick up swarm. You know they don't They don't have to know anything about the orchestrator on day one. Most people that are getting into this space are very familiar with Docker. Compose um, and entering from Docker compose into swarm is changing one command that you would run on the command line. >>Yeah, very, very trivial to if you are already used to building docker files using composed, organize your deployment into stacks of related components. It's trivial to turn on swarm mode and then deploy your container set to a cluster. >>Well, excellent. So answer this question for me. Is the swarm of today the same as the swarm of, you know, the original swarm. So, like when swim first started is that the same is what we have now >>it's kind of ah, complicated story with the storm project because it's changed names and forms a few times. Originally in is really somewhere around 2014 in the first version, and it was a component that you really had to configure and set up separately from Docker Ah, the way that it was structured. Ah, you would just have docker installed on a number of servers are machines in your cluster. And then you would organize them into a swarm by bringing your own database and some of the tooling to get those nodes talking to each other and to organize your containers across all of your docker engines. Ah, few years later, the swarm project was retooled and baked into the docker engine. And, um, this is where we sort of get the name change from. So originally it was a feature that we called swarm. Ah. Then the Swarm Kit project was released on Get Hub and baked directly into the engine, where they renamed it as swarm mode. Because now it is a motile option that you just turn on as a button in the docker engine and because it's already there the, um, the tuning knobs that you haven't swarm kit with regard to how what my time outs are and some of these other sort of performance settings there locked there, they're there. It's part of the opinionated set of components that builds up the docker engine is that we bring in the Swarm Kit project with a certain set of defaults and settings. And that is how it operates in today's version of Docker engine. >>Uh, okay for that, that makes sense. That makes sense. So ah, so don, I know you have pretty strong feelings about this topic, but it is swarm still viable in a world that's sort of increasingly dominated by Kubernetes. >>Absolutely. And you were right. I'm very passionate about this topic where I work. We're we're doing almost all of our production work lives on swarm we only have out of Ah, we've got something like 600 different services between three and 4000 containers. At any given point in time. Out of all of those projects, all of those services we've only run into two or three that don't kind of fit into the opinionated model of swarm. So we are running those on KUBERNETES in the same cluster using Moranis is Docker enterprise offering. But, um, no, that's a very, very small percentage of services that we didn't have an answer for in swarm with one. The one case that really gets us just about every time is scaling state full services. But you're gonna have very few staple services in most environments for things like micro service architecture, which is predominantly what we build out. Swarm is perfect. It's simple. It's easy to use you, don't you? Don't end up going for miles of yamma files trying to figure out the one setting that you didn't get exactly right? Um yeah, the other Thea the other big piece of it that way really led us to adopting it so heavily in the beginning is, you know, the overlay network. So your networks don't have to span the whole cluster like they do with kubernetes. So we could we could set up a network isolation between service A and service B, just by use using the built in overlay networks. That was a huge component that, like I said, let us Teoh adopting it so heavily when we first got started. >>Excellent. You look like you're about to say something in a >>Yeah, I think that speaks to the design goals for each piece of software. On the way that I've heard this described before is with regard to the networking piece the ah, the docker networking under the hood, um, feels like it was written by a network engineer. The way that the docker engine overlay networks communicate uses ah, VX lan under the hood, which creates pseudo V lands for your containers. And if two containers aren't on the same Dylan, there's no way they can communicate with each other as opposed to the design of kubernetes networking, which is really left to the C and I implementation but still has the design philosophy of one big, flat sub net where every I p could reach every other i p and you control what is allowed to access, what by policy. So it's more of an application focused Ah design. Whereas in Docker swarm on the overlay networking side, it's really of a network engineering sort of focus. Right? >>Okay, got it. Well, so now how does all this fit in with Docker enterprise now? So I understand there's been some changes on how swarm is handled within Docker Enterprise. Coming with this new release, >>Docker s O swarm Inside Docker Enterprise is represented as both the swarm classic legacy system that we shift way back in 2014 on and then also the swarm mode that is curly used in the docker engine. Um, the Swarm Classic back end gives us legacy support for being able to run unmanaged plane containers onto a cluster. If you were to take Docker ce right now, you would find that you wouldn't be able to just do a very basic docker run against a whole cluster of machines. You can create services using the swarms services, a p I but, um, that that legacy plane container support is something that you have to set up external swarm in order to provide. So right now, the architecture of Docker Enterprise UCP is based on some of that legacy code from about five or six years ago. Okay. Ah, that gives us ability to deploy plane containers for use cases that require it as well as swarm services for those kinds of workloads that might be better served by the built in load balancing and h A and scaling features that swarm provides. >>Okay, so now I know that at one point kubernetes was deployed within Docker Enterprise as you create a swarm cluster and then deploy kubernetes on top of swarm. >>Correct? That is how the current architecture works. >>Okay. All right. And then, um what is what is where we're going with this like, Are we supposed to? Are we going to running Swarm on top of kubernetes? What's >>the the design goals for the future of swarm within branches? Stocker Enterprise are that we will start the employing Ah, like kubernetes cluster features as the base and a swarm kit on top of kubernetes. So it is like you mentioned just a reversal of the roles. I think we're finding that, um, the ability to extend kubernetes a p I to manage resource is is valuable at an infrastructure and platform level in a way that we can't do with swarm. We still want to be able to run swarm workloads. So we're going to keep the swarm kit code the swarm kit orchestration features to run swarm services as a part of the platform to keep the >>got it. Okay, so, uh, if I'm a developer and I want to run swarm, but my company's running kubernetes what? What are my one of my options there? Well, I think >>eight touched on it pretty well already where you know, it depends on your design goals, and you know, one of the other things that's come up a few times is Thea. The level of entry for for swarm is much, much simpler than kubernetes. So I mean, it's it's kind of hard to introduce anything new. So I mean, a company, a company that's got most of their stuff in kubernetes and production is gonna have a hard time maybe looking at a swarm. I mean, this is gonna be, you know, higher, higher up, not the boots on the ground. But, um, you know, the the upper management, that's at some point, you have to pay for all their support, all of it. What we did in our approach. Because there was one team already using kubernetes. We went ahead and stood up a small cluster ah, small swarm cluster and taught the developers how to use it and how to deploy code to it. And they loved it. They thought it was super simple. A time went on, the other teams took notice and saw how fast these guys were getting getting code deployed, getting services up, getting things usable, and they would look over at what the innovation team was doing and say, Hey, I I want to do that to, uh, you know, so there's there's a bunch of different approaches. That's the approach we took and it worked out very well. It looks like you wanted to say something too. >>Yeah, I think that if you if you're if you're having to make this kind of decision, there isn't There isn't a wrong choice. Ah, it's never a swarm of its role and your organization, right? Right. If you're if you're an individual and you're using docker on your workstation on your laptop but your organization wants to standardize on kubernetes there, there are still some two rules that Mike over Ah, pose. And he's manifest if you need to deploy. Coop resource is, um if you are running Docker Enterprise Swarm kit code will still be there. And you can run swarm services as regular swarm workloads on that component. So I I don't want to I don't want people to think that they're going to be like, locked into one or the other orchestration system. Ah, there the way we want to enable developer choice so that however the developer wants to do their work, they can get it done. Um Docker desktop. Ah, ships with that kubernetes distribution bundled in it. So if you're using a Mac or Windows and that's your development, uh, system, you can run docker debt, turn on your mode and run the kubernetes bits. So you have the choices. You have the tools to deploy to either system. >>And that's one of the things that we were super excited about when they introduced Q. Burnett ease into the Docker Enterprise offering. So we were able to run both, so we didn't have to have that. I don't want to call it a battle or argument, but we didn't have to make anybody choose one or the other. We, you know, we gave them both options just by having Docker enterprise so >>excellent. So speaking of having both options, let's just say for developers who need to make a decision while should I go swarm, or should I go kubernetes when it sort of some of the things that they should think about? >>So I think that certain certain elements of, um, certain elements of containers are going to be agnostic right now. So the the the designing a docker file and building a container image, you're going to need to know that skill for either system that you choose to operate on. Ah, the swarm value. Some of the storm advantage comes in that you don't have to know anything beyond that. So you don't have to learn a whole new A p I a whole new domain specific language using Gamel to define your deployment. Um, chances are that if you've been using docker for any length of time, you probably have a whole stack of composed files that are related to things that you've worked on. And, um, again, the barrier to entry to getting those running on swarm is very low. You just turn it on docker stack, deploy, and you're good to go. So I think that if you're trying to make that choice, if you I have a use case that doesn't require you to manage new resource is if you don't need the Extensible researchers part, Ah, swarm is a great great, great viable option. >>Absolutely. Yeah, the the recommendation I've always made to people that are just getting started is start with swarm and then move into kubernetes and going through the the two of them, you're gonna figure out what fits your design principles. What fits your goals. Which one? You know which ones gonna work best for you. And there's no harm in choosing one or the other using both each one of you know, very tailor fit for very various types of use cases. And like I said, kubernetes is great at some things, but for a lot of other stuff, I still want to use swarm and vice versa. So >>on my home lab, for all my personal like services that I run in my, uh, my home network, I used storm, um, for things that I might deploy onto, you know, a bit this environment, a lot of the ones that I'm using right now are mainly tailored for kubernetes eso. I think especially some of the tools that are out there in the open source community as well as in docker Enterprise helped to bridge that gap like there's a translator that can take your compose file, turn it into kubernetes. Yeah, Mel's, um, if if you're trying to decide, like on the business side, should we standardize on former kubernetes? I think like your what? What functionality are you looking at? Out of getting out of your system? If you need things like tight integration into a ah infrastructure vendor such as AWS Azure or VM ware that might have, like plug ins for kubernetes. You're now you're getting into that area where you're managing Resource is of the infrastructure with your orchestration. AP I with kube so things like persistent volumes can talk to your storage device and carve off chunks of storage and assign those two pods if you don't have that need or that use case. Um, you know, KUBERNETES is bringing in a lot of these features that you maybe you're just not taking advantage of. Um, similarly, if you want to take advantage of things like auto scaling to scale horizontally, let's say you have a message queue system and then a number of workers, and you want to start scaling up on your workers. When your CPU hits a certain a metric. That is something that Kubernetes has built right into it. And so, if you want that, I would probably suggest that you look at kubernetes if you don't need that, or if you want to write some of that tooling yourself. Swarm doesn't have an object built into it that will do automatic horizontal scaling based on some kind of metric. So I always consider this decision as a what features are the most I available to you and your business that you need to Yep. >>All right. Excellent. Well, and, ah, fortunately, of course, they're both available on Docker Enterprise. So aren't we lucky? All right, so I am going to wrap this up. I want to thank Don Bauer Docker captain, for coming here and spending some time with us and eight of Manzini. I would like to thank you. I know that the the, uh, circumstances are less than ideal here for your recording today, but we appreciate you joining us. Um and ah, both of you. Thank you very much. And I want to invite all of you. First of all, thank you for joining us. We know your time is valuable and I want to invite you all Teoh to take a look at Docker Enterprise. Ah, follow the link that's on your screen and we'll see you in the next session. Thank you all so much. Thank you. >>Thank you, Nick.

Published Date : Sep 14 2020

SUMMARY :

So we wanted to bring you a couple of experts to talk about the state of swarm I have a long history of working with support Tennessee, and happy to be here. kind of name the elephant in the room. However you see the orchestration to run whatever workload that you have. Don I know that you were involved Um, in our case, we you know, we looked at what was Um, it's highly opinionated about the way that you should use is changing one command that you would run on the command line. Yeah, very, very trivial to if you are already used to building docker of, you know, the original swarm. in the first version, and it was a component that you really had to configure and set up separately So ah, so don, I know you have pretty strong to figure out the one setting that you didn't get exactly right? You look like you're about to say something in a On the way that I've heard this described before is with regard to the networking piece Well, so now how does all this fit in with Docker you have to set up external swarm in order to provide. was deployed within Docker Enterprise as you create a swarm cluster That is how the current architecture works. is what is where we're going with this like, Are we supposed to? a part of the platform to keep the I think I mean, this is gonna be, you know, higher, So you have the choices. And that's one of the things that we were super excited about when they introduced Q. So speaking of having both options, let's just say Some of the storm advantage comes in that you don't have to know anything beyond the two of them, you're gonna figure out what fits your design principles. available to you and your business that you need to Yep. I know that the the, uh, circumstances are less than

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Nick ChasePERSON

0.99+

80%QUANTITY

0.99+

twoQUANTITY

0.99+

2014DATE

0.99+

Citizens BankORGANIZATION

0.99+

threeQUANTITY

0.99+

MarantzORGANIZATION

0.99+

AWSORGANIZATION

0.99+

bothQUANTITY

0.99+

two podsQUANTITY

0.99+

MantisORGANIZATION

0.99+

first versionQUANTITY

0.99+

600 different servicesQUANTITY

0.99+

todayDATE

0.99+

Ah Docker EnterpriseORGANIZATION

0.99+

both optionsQUANTITY

0.99+

each pieceQUANTITY

0.99+

Docker Inc.ORGANIZATION

0.99+

few years laterDATE

0.99+

one caseQUANTITY

0.99+

4000 containersQUANTITY

0.98+

Docker Enterprise SystemORGANIZATION

0.98+

Q. BurnettPERSON

0.98+

one teamQUANTITY

0.98+

ManziniORGANIZATION

0.98+

Don PowerPERSON

0.98+

Docker EnterpriseORGANIZATION

0.98+

DockerTITLE

0.98+

FirstQUANTITY

0.98+

Docker EnterpriseTITLE

0.97+

oneQUANTITY

0.97+

MoranisORGANIZATION

0.97+

about four yearsQUANTITY

0.97+

two PanelistsQUANTITY

0.97+

NickPERSON

0.97+

Stocker EnterpriseORGANIZATION

0.97+

six years agoDATE

0.96+

DylanPERSON

0.95+

Moran TousORGANIZATION

0.95+

Don BauerPERSON

0.95+

two containersQUANTITY

0.95+

two rulesQUANTITY

0.94+

WindowsTITLE

0.93+

Nashville, TennesseeLOCATION

0.93+

one commandQUANTITY

0.93+

firstQUANTITY

0.92+

KubernetesORGANIZATION

0.92+

eightQUANTITY

0.91+

Docker enterpriseTITLE

0.89+

KUBERNETESORGANIZATION

0.88+

GamelTITLE

0.88+

day oneQUANTITY

0.87+

one big, flat sub netQUANTITY

0.86+

MacCOMMERCIAL_ITEM

0.83+

service AOTHER

0.82+

Docker enterpriseTITLE

0.81+

milesQUANTITY

0.8+

each oneQUANTITY

0.8+

Mike ovPERSON

0.78+

dockerORGANIZATION

0.78+

service BOTHER

0.77+

AzureTITLE

0.75+

Reliance Jio: OpenStack for Mobile Telecom Services


 

>>Hi, everyone. My name is my uncle. My uncle Poor I worked with Geo reminds you in India. We call ourselves Geo Platforms. Now on. We've been recently in the news. You've raised a lot off funding from one of the largest, most of the largest tech companies in the world. And I'm here to talk about Geos Cloud Journey, Onda Mantis Partnership. I've titled it the story often, Underdog becoming the largest telecom company in India within four years, which is really special. And we're, of course, held by the cloud. So quick disclaimer. Right. The content shared here is only for informational purposes. Um, it's only for this event. And if you want to share it outside, especially on social media platforms, we need permission from Geo Platforms limited. Okay, quick intro about myself. I am a VP of engineering a geo. I lead the Cloud Services and Platforms team with NGO Andi. I mean the geo since the beginning, since it started, and I've seen our cloud footprint grow from a handful of their models to now eight large application data centers across three regions in India. And we'll talk about how we went here. All right, Let's give you an introduction on Geo, right? Giorgio is on how we became the largest telecom campaign, India within four years from 0 to 400 million subscribers. And I think there are There are a lot of events that defined Geo and that will give you an understanding off. How do you things and what you did to overcome massive problems in India. So the slide that I want to talkto is this one and, uh, I The headline I've given is, It's the Geo is the fastest growing tech company in the world, which is not a new understatement. It's eggs, actually, quite literally true, because very few companies in the world have grown from zero to 400 million subscribers within four years paying subscribers. And I consider Geo Geos growth in three phases, which I have shown on top. The first phase we'll talk about is how geo grew in the smartphone market in India, right? And what we did to, um to really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase in India and how Geo grew there in the future for market in India. and then we'll talk about what we're doing now, which we call the Geo Platforms phase. Right. So Geo is a default four g lt. Network. Right. So there's no to geo three g networks that Joe has, Um it's a state of the art four g lt voiceover lt Network and because it was designed fresh right without any two D and three G um, legacy technologies, there were also a lot of challenges Lawn geo when we were starting up. One of the main challenges waas that all the smart phones being sold in India NGOs launching right in 2000 and 16. They did not have the voice or lt chip set embedded in the smartphone because the chips it's far costlier to embed in smartphones and India is a very price and central market. So none of the manufacturers were embedding the four g will teach upset in the smartphones. But geos are on Lee a volte in network, right for the all the network. So we faced a massive problem where we said, Look there no smartphones that can support geo. So how will we grow Geo? So in order to solve that problem, we launched our own brand of smartphones called the Life um, smartphones. And those phones were really high value devices. So there were $50 and for $50 you get you You At that time, you got a four g B storage space. A nice big display for inch display. Dual cameras, Andi. Most importantly, they had volte chip sets embedded in them. Right? And that got us our initial customers the initial for the launch customers when we launched. But more importantly, what that enabled other oh, EMS. What that forced the audience to do is that they also had to launch similar smartphones competing smartphones with voltage upset embedded in the same price range. Right. So within a few months, 3 to 4 months, um, all the other way EMS, all the other smartphone manufacturers, the Samsung's the Micromax is Micromax in India, they all had volte smartphones out in the market, right? And I think that was one key step We took off, launching our own brand of smartphone life that helped us to overcome this problem that no smartphone had. We'll teach upsets in India and then in order. So when when we were launching there were about 13 telecom companies in India. It was a very crowded space on demand. In order to gain a foothold in that market, we really made a few decisions. Ah, phew. Key product announcement that really disrupted this entire industry. Right? So, um, Geo is a default for GLT network itself. All I p network Internet protocol in everything. All data. It's an all data network and everything from voice to data to Internet traffic. Everything goes over this. I'll goes over Internet protocol, and the cost to carry voice on our smartphone network is very low, right? The bandwidth voice consumes is very low in the entire Lt band. Right? So what we did Waas In order to gain a foothold in the market, we made voice completely free, right? He said you will not pay anything for boys and across India, we will not charge any roaming charges across India. Right? So we made voice free completely and we offer the lowest data rates in the world. We could do that because we had the largest capacity or to carry data in India off all the other telecom operators. And these data rates were unheard off in the world, right? So when we launched, we offered a $2 per month or $3 per month plan with unlimited data, you could consume 10 gigabytes of data all day if you wanted to, and some of our subscriber day. Right? So that's the first phase off the overgrowth and smartphones and that really disorders. We hit 100 million subscribers in 170 days, which was very, very fast. And then after the smartphone faith, we found that India still has 500 million feature phones. And in order to grow in that market, we launched our own phone, the geo phone, and we made it free. Right? So if you take if you took a geo subscription and you carried you stayed with us for three years, we would make this phone tree for your refund. The initial deposit that you paid for this phone and this phone had also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon. And for example, you could plug in. You could use a cable right on RCR HDMI cable plug into the geo phone and you could watch TV on your big screen TV from the geophones. You didn't need a separate cable subscription toe watch TV, right? So that really helped us grow. And Geo Phone is now the largest selling feature phone in India on it. 100 million feature phones in India now. So now now we're in what I call the geo platforms phase. We're growing of a geo fiber fiber to the home fiber toe the office, um, space. And we've also launched our new commerce initiatives over e commerce initiatives and were steadily building platforms that other companies can leverage other companies can use in the Jeon o'clock. Right? So this is how a small startup not a small start, but a start of nonetheless least 400 million subscribers within four years the fastest growing tech company in the world. Next, Geo also helped a systemic change in India, and this is massive. A lot of startups are building on this India stack, as people call it, and I consider this India stack has made up off three things, and the acronym I use is jam. Trinity, right. So, um, in India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians. There were no service charges to store money in bank accounts. This is called the Jonathan. The J. GenDyn Bank accounts. The J out off the jam, then India is one of the few countries in the world toe have a digital biometric identity, which can be used to verify anyone online, which is huge. So you can simply go online and say, I am my ankle poor on duh. I verify that this is indeed me who's doing this transaction. This is the A in the jam and the last M stands for Mobil's, which which were held by Geo Mobile Internet in a plus. It is also it is. It also stands for something called the U. P I. The United Unified Payments Interface. This was launched by the Indian government, where you can carry digital transactions for free. You can transfer money from one person to the to another, essentially for free for no fee, right so I can transfer one group, even Indian rupee to my friend without paying any charges. That is huge, right? So you have a country now, which, with a with a billion people who are bank accounts, money in the bank, who you can verify online, right and who can pay online without any problems through their mobile connections held by G right. So suddenly our market, our Internet market, exploded from a few million users to now 506 106 100 million mobile Internet users. So that that I think, was a massive such a systemic change that happened in India. There are some really large hail, um, numbers for this India stack, right? In one month. There were 1.6 billion nuclear transactions in the last month, which is phenomenal. So next What is the impact of geo in India before you started, we were 155th in the world in terms off mobile in terms of broadband data consumption. Right. But after geo, India went from one 55th to the first in the world in terms of broadband data, largely consumed on mobile devices were a mobile first country, right? We have a habit off skipping technology generation, so we skip fixed line broadband and basically consuming Internet on our mobile phones. On average, Geo subscribers consumed 12 gigabytes of data per month, which is one of the highest rates in the world. So Geo has a huge role to play in making India the number one country in terms off broad banded consumption and geo responsible for quite a few industry first in the telecom space and in fact, in the India space, I would say so before Geo. To get a SIM card, you had to fill a form off the physical paper form. It used to go toe Ah, local distributor. And that local distributor is to check the farm that you feel incorrectly for your SIM card and then that used to go to the head office and everything took about 48 hours or so, um, to get your SIM card. And sometimes there were problems there also with a hard biometric authentication. We enable something, uh, India enable something called E K Y C Elektronik. Know your customer? We took a fingerprint scan at our point of Sale Reliance Digital stores, and within 15 minutes we could verify within a few minutes. Within a few seconds we could verify that person is indeed my hunk, right, buying the same car, Elektronik Lee on we activated the SIM card in 15 minutes. That was a massive deal for our growth. Initially right toe onboard 100 million customers. Within our and 70 days. We couldn't have done it without be K. I see that was a massive deal for us and that is huge for any company starting a business or start up in India. We also made voice free, no roaming charges and the lowest data rates in the world. Plus, we gave a full suite of cloud services for free toe all geo customers. For example, we give goTV essentially for free. We give GOTV it'll law for free, which people, when we have a launching, told us that no one would see no one would use because the Indians like watching TV in the living rooms, um, with the family on a big screen television. But when we actually launched, they found that GOTV is one off our most used app. It's like 70,000,080 million monthly active users, and now we've basically been changing culture in India where culture is on demand. You can watch TV on the goal and you can pause it and you can resume whenever you have some free time. So really changed culture in India, India on we help people liver, digital life online. Right, So that was massive. So >>I'm now I'd like to talk about our cloud >>journey on board Animal Minorities Partnership. We've been partners that since 2014 since the beginning. So Geo has been using open stack since 2014 when we started with 14 note luster. I'll be one production environment One right? And that was I call it the first wave off our cloud where we're just understanding open stack, understanding the capabilities, understanding what it could do. Now we're in our second wave. Where were about 4000 bare metal servers in our open stack cloud multiple regions, Um, on that around 100,000 CPU cores, right. So it's a which is one of the bigger clouds in the world, I would say on almost all teams, with Ngor leveraging the cloud and soon I think we're going to hit about 10,000 Bama tools in our cloud, which is massive and just to give you a scale off our network, our in French, our data center footprint. Our network introduction is about 30 network data centers that carry just network traffic across there are there across India and we're about eight application data centers across three regions. Data Center is like a five story building filled with servers. So we're talking really significant scale in India. And we had to do this because when we were launching, there are the government regulation and try it. They've gotten regulatory authority of India, mandates that any telecom company they have to store customer data inside India and none of the other cloud providers were big enough to host our clothes. Right. So we we made all this intellectual for ourselves, and we're still growing next. I love to show you how we grown with together with Moran says we started in 2014 with the fuel deployment pipelines, right? And then we went on to the NK deployment. Pipelines are cloud started growing. We started understanding the clouds and we picked up M C p, which has really been a game changer for us in automation, right on DNA. Now we are in the latest release, ofem CPM CPI $2019 to on open stack queens, which on we've just upgraded all of our clouds or the last few months. Couple of months, 2 to 3 months. So we've done about nine production clouds and there are about 50 internal, um, teams consuming cloud. We call as our tenants, right. We have open stack clouds and we have communities clusters running on top of open stack. There are several production grade will close that run on this cloud. The Geo phone, for example, runs on our cloud private cloud Geo Cloud, which is a backup service like Google Drive and collaboration service. It runs out of a cloud. Geo adds G o g S t, which is a tax filing system for small and medium enterprises, our retail post service. There are all these production services running on our private clouds. We're also empaneled with the government off India to provide cloud services to the government to any State Department that needs cloud services. So we were empaneled by Maiti right in their ego initiative. And our clouds are also Easter. 20,000 certified 20,000 Colin one certified for software processes on 27,001 and said 27,017 slash 18 certified for security processes. Our clouds are also P our data centers Alsop a 942 be certified. So significant effort and investment have gone toe These data centers next. So this is where I think we've really valued the partnership with Morantes. Morantes has has trained us on using the concepts of get offs and in fries cold, right, an automated deployments and the tool change that come with the M C P Morantes product. Right? So, um, one of the key things that has happened from a couple of years ago to today is that the deployment time to deploy a new 100 north production cloud has decreased for us from about 55 days to do it in 2015 to now, we're down to about five days to deploy a cloud after the bear metals a racked and stacked. And the network is also the physical network is also configured, right? So after that, our automated pipelines can deploy 100 0 clock in five days flight, which is a massive deal for someone for a company that there's adding bear metals to their infrastructure so fast, right? It helps us utilize our investment, our assets really well. By the time it takes to deploy a cloud control plane for us is about 19 hours. It takes us two hours to deploy a compu track and it takes us three hours to deploy a storage rack. Right? And we really leverage the re class model off M C. P. We've configured re class model to suit almost every type of cloud that we have, right, and we've kept it fairly generous. It can be, um, Taylor to deploy any type of cloud, any type of story, nor any type of compute north. Andi. It just helps us automate our deployments by putting every configuration everything that we have in to get into using infra introduction at school, right plus M. C. P also comes with pipelines that help us run automated tests, automated validation pipelines on our cloud. We also have tempest pipelines running every few hours every three hours. If I recall correctly which run integration test on our clouds to make sure the clouds are running properly right, that that is also automated. The re class model and the pipelines helpers automate day to operations and changes as well. There are very few seventh now, compared toa a few years ago. It very rare. It's actually the exception and that may be because off mainly some user letter as opposed to a cloud problem. We also have contributed auto healing, Prometheus and Manager, and we integrate parameters and manager with our even driven automation framework. Currently, we're using Stack Storm, but you could use anyone or any event driven automation framework out there so that it indicates really well. So it helps us step away from constantly monitoring our cloud control control planes and clothes. So this has been very fruitful for us and it has actually apps killed our engineers also to use these best in class practices like get off like in France cord. So just to give you a flavor on what stacks our internal teams are running on these clouds, Um, we have a multi data center open stack cloud, and on >>top of that, >>teams use automation tools like terra form to create the environments. They also create their own Cuba these clusters and you'll see you'll see in the next slide also that we have our own community that the service platform that we built on top of open stack to give developers development teams NGO um, easy to create an easy to destroy Cuban. It is environment and sometimes leverage the Murano application catalog to deploy using heats templates to deploy their own stacks. Geo is largely a micro services driven, Um um company. So all of our applications are micro services, multiple micro services talking to each other, and the leverage develops. Two sets, like danceable Prometheus, Stack stone from for Otto Healing and driven, not commission. Big Data's tax are already there Kafka, Patches, Park Cassandra and other other tools as well. We're also now using service meshes. Almost everything now uses service mesh, sometimes use link. Erred sometimes are experimenting. This is Theo. So So this is where we are and we have multiple clients with NGO, so our products and services are available on Android IOS, our own Geo phone, Windows Macs, Web, Mobile Web based off them. So any client you can use our services and there's no lock in. It's always often with geo, so our sources have to be really good to compete in the open Internet. And last but not least, I think I love toe talk to you about our container journey. So a couple of years ago, almost every team started experimenting with containers and communities and they were demand for as a platform team. They were demanding community that the service from us a manage service. Right? So we built for us, it was much more comfortable, much more easier toe build on top of open stack with cloud FBI s as opposed to doing this on bare metal. So we built a fully managed community that a service which was, ah, self service portal, where you could click a button and get a community cluster deployed in your own tenant on Do the >>things that we did are quite interesting. We also handle some geo specific use cases. So we have because it was a >>manage service. We deployed the city notes in our own management tenant, right? We didn't give access to the customer to the city. Notes. We deployed the master control plane notes in the tenant's tenant and our customers tenant, but we didn't give them access to the Masters. We didn't give them the ssh key the workers that the our customers had full access to. And because people in Genova learning and experimenting, we gave them full admin rights to communities customers as well. So that way that really helped on board communities with NGO. And now we have, like 15 different teams running multiple communities clusters on top, off our open stack clouds. We even handle the fact that there are non profiting. I people separate non profiting I peoples and separate production 49 p pools NGO. So you could create these clusters in whatever environment that non prod environment with more open access or a prod environment with more limited access. So we had to handle these geo specific cases as well in this communities as a service. So on the whole, I think open stack because of the isolation it provides. I think it made a lot of sense for us to do communities our service on top off open stack. We even did it on bare metal, but that not many people use the Cuban, indeed a service environmental, because it is just so much easier to work with. Cloud FBI STO provision much of machines and covering these clusters. That's it from me. I think I've said a mouthful, and now I love for you toe. I'd love to have your questions. If you want to reach out to me. My email is mine dot capulet r l dot com. I'm also you can also message me on Twitter at my uncouple. So thank you. And it was a pleasure talking to you, Andre. Let let me hear your questions.

Published Date : Sep 14 2020

SUMMARY :

So in order to solve that problem, we launched our own brand of smartphones called the So just to give you a flavor on what stacks our internal It is environment and sometimes leverage the Murano application catalog to deploy So we have because it was a So on the whole, I think open stack because of the isolation

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
2015DATE

0.99+

IndiaLOCATION

0.99+

2014DATE

0.99+

two hoursQUANTITY

0.99+

$50QUANTITY

0.99+

3QUANTITY

0.99+

12 gigabytesQUANTITY

0.99+

three yearsQUANTITY

0.99+

MorantesORGANIZATION

0.99+

70,000,080 millionQUANTITY

0.99+

AndrePERSON

0.99+

three hoursQUANTITY

0.99+

SamsungORGANIZATION

0.99+

2000DATE

0.99+

70 daysQUANTITY

0.99+

GenovaLOCATION

0.99+

five daysQUANTITY

0.99+

2QUANTITY

0.99+

zeroQUANTITY

0.99+

0QUANTITY

0.99+

170 daysQUANTITY

0.99+

100 million subscribersQUANTITY

0.99+

Onda Mantis PartnershipORGANIZATION

0.99+

first phaseQUANTITY

0.99+

100 millionQUANTITY

0.99+

15 minutesQUANTITY

0.99+

10 gigabytesQUANTITY

0.99+

firstQUANTITY

0.99+

16DATE

0.99+

four yearsQUANTITY

0.99+

4 monthsQUANTITY

0.99+

one personQUANTITY

0.99+

49 pQUANTITY

0.99+

100 million customersQUANTITY

0.99+

todayDATE

0.99+

one billionQUANTITY

0.99+

Two setsQUANTITY

0.99+

155thQUANTITY

0.99+

oneQUANTITY

0.99+

one key stepQUANTITY

0.99+

last monthDATE

0.99+

first countryQUANTITY

0.98+

3 monthsQUANTITY

0.98+

around 100,000 CPU coresQUANTITY

0.98+

JoePERSON

0.98+

100QUANTITY

0.98+

27,001QUANTITY

0.98+

OneQUANTITY

0.98+

15 different teamsQUANTITY

0.98+

Android IOSTITLE

0.98+

one monthQUANTITY

0.98+

FranceLOCATION

0.98+

506 106 100 millionQUANTITY

0.98+

GeoORGANIZATION

0.98+

Elektronik LeeORGANIZATION

0.98+

FBIORGANIZATION

0.98+

one groupQUANTITY

0.98+

1.6 billion nuclear transactionsQUANTITY

0.98+

AndiPERSON

0.97+

Geo Mobile InternetORGANIZATION

0.97+

five storyQUANTITY

0.97+

PrometheusTITLE

0.97+

Dave Van Everen, Mirantis | Mirantis Launchpad 2020 Preview


 

>>from the Cube Studios in Palo Alto in Boston, connecting with thought leaders all around the world. This is a cube conversation. >>Hey, welcome back. You're ready, Jeffrey here with the Cuban Apollo Alto studios today, and we're excited. You know, we're slowly coming out of the, uh, out of the summer season. We're getting ready to jump back into the fall. Season, of course, is still covet. Everything is still digital. But you know, what we're seeing is a digital events allow a lot of things that you couldn't do in the physical space. Mainly get a lot more people to attend that don't have to get in airplanes and file over the country. So to preview this brand new inaugural event that's coming up in about a month, we have We have a new guest. He's Dave and Everen. He is the senior vice president of marketing. Former ran tous. Dave. Great to see you. >>Happy to be here today. Thank you. >>Yeah. So tell us about this inaugural event. You know, we did an event with Miranda's years ago. I had to look it up like 2014. 15. Open stack was hot and you guys sponsored a community event in the Bay Area because the open stack events used to move all over the country each and every year. But you guys said, and the top one here in the Bay Area. But now you're launching something brand new based on some new activity that you guys have been up to over the last several months. So let us give us give us the word. >>Yeah, absolutely. So we definitely have been organizing community events in a variety of open source communities over the years. And, you know, we saw really, really good success with with the Cube And are those events in opens tax Silicon Valley days? And, you know, with the way things have gone this year, we've really seen that virtual events could be very successful and provide a new, maybe slightly different form of engagement but still very high level of engagement for our guests and eso. We're excited to put this together and invite the entire cloud native industry to join us and learn about some of the things that Mantis has been working on in recent months. A zwelling as some of the interesting things that are going on in the Cloud native and kubernetes community >>Great. So it's the inaugural event is called Moran Sous launchpad 2020. The Wares and the Winds in September 16th. So we're about a month away and it's all online is their registration. Costars is free for the community. >>It's absolutely free. Eso everyone is welcome to attend You. Just visit Miranda's dot com and you'll see the info for registering for the event and we'd love it. We love to see you there. It's gonna be a fantastic event. We have multiple tracks catering to developers, operators, general industry. Um, you know, participants in the community and eso we'd be happy to see you on join us on and learn about some of the some of the things we're working on. >>That's awesome. So let's back up a step for people that have been paying as close attention as they might have. Right? So you guys purchase, um, assets from Docker at the end of last year, really taken over there, they're they're kind of enterprise solutions, and you've been doing some work with that. Now, what's interesting is we we cover docker con, um, A couple of months ago, a couple three months ago. Time time moves fast. They had a tremendously successful digital event. 70,000 registrants, people coming from all over the world. I think they're physical. Event used to be like four or 5000 people at the peak, maybe 6000 Really tremendous success. But a lot of that success was driven, really by the by the strength of the community. The docker community is so passionate. And what struck me about that event is this is not the first time these people get together. You know, this is not ah, once a year, kind of sharing of information and sharing ideas, but kind of the passion and and the friendships and the sharing of information is so, so good. You know, it's a super or, um, rich development community. You guys have really now taken advantage of that. But you're doing your Miranda's thing. You're bringing your own technology to it and really taking it to more of an enterprise solution. So I wonder if you can kind of walk people through the process of, you know, you have the acquisition late last year. You guys been hard at work. What are we gonna see on September 16. >>Sure, absolutely. And, you know, just thio Give credit Thio Docker for putting on an amazing event with Dr Khan this year. Uh, you know, you mentioned 70,000 registrants. That's an astounding number. And you know, it really is a testament thio. You know, the community that they've built over the years and continue to serve eso We're really, really happy for Docker as they kind of move into, you know, the next the next path in their journey and, you know, focus more on the developer oriented, um, solution and go to market. So, uh, they did a fantastic job with the event. And, you know, I think that they continue toe connect with their community throughout the year on That's part of what drives What drove so many attendees to the event assed faras our our history and progress with with Dr Enterprise eso. As you mentioned mid November last year, we did acquire Doctor Enterprise assets from Docker Inc and, um, right away we noticed tremendous synergy in our product road maps and even in the in the team's eso that came together really, really quickly and we started executing on a Siris of releases. Um that are starting Thio, you know, be introduced into the market. Um, you know, one was introduced in late May and that was the first major release of Dr Enterprise produced exclusively by more antis. And we're going to announce at the launch pad 2020 event. Our next major release of the Doctor Enterprise Technology, which will for the first time include kubernetes related in life cycle management related technology from Mirant is eso. It's a huge milestone for our company. Huge benefit Thio our customers on and the broader user community around Dr Enterprise. We're super excited. Thio provide a lot of a lot of compelling and detailed content around the new technology that will be announcing at the event. >>So I'm looking at the at the website with with the agenda and there's a little teaser here right in the middle of the spaceship Docker Enterprise Container Cloud. So, um, and I glanced into you got a great little layout, five tracks, keynote track D container track operations and I t developer track and keep track. But I did. I went ahead and clicked on the keynote track and I see the big reveal so I love the opening keynote at at 8 a.m. On the 76 on the September 16th is right. Um, I, Enel CEO who have had on many, many times, has the big reveal Docker Enterprise Container Cloud. So without stealing any thunder, uh, can you give us any any little inside inside baseball on on what people should expect or what they can get excited about for that big announcement? >>Sure, absolutely so I definitely don't want to steal any thunder from Adrian, our CEO. But you know, we did include a few Easter eggs, so to speak, in the website on Dr Enterprise. Container Cloud is absolutely the biggest story out of the bunch eso that's visible on the on the rocket ship as you noticed, and in the agenda it will be revealed during Adrian's keynote, and every every word in the product name is important, right? So Dr Enterprise, based on Dr Enterprise Platform Container Cloud and there's the new word in there really is Cloud eso. I think, um, people are going to be surprised at the groundbreaking territory that were forging with with this release along the lines of a cloud experience and what we are going to provide to not only I t operations and the Op Graders and Dev ops for cloud environment, but also for the developers and the experience that we could bring to developers As they become more dependent on kubernetes and get more hands on with kubernetes. We think that we're going thio provide ah lot of ways for them to be more empowered with kubernetes while at the same time lowering the bar, the bar or the barrier of entry for kubernetes. As many enterprises have have told us that you know kubernetes can be difficult for the broader developer community inside the organization Thio interact with right? So this is, uh, you know, a strategic underpinning of our our product strategy. And this is really the first step in a non going launch of technologies that we're going to make bigger netease easier for developing. >>I was gonna say the other Easter egg that's all over the agenda, as I'm just kind of looking through the agenda. It's kubernetes on 80 infrastructure multi cloud kubernetes Miranda's open stack on kubernetes. So Goober Netease plays a huge part and you know, we talk a lot about kubernetes at all the events that we cover. But as you said, kind of the new theme that we're hearing a little bit more Morris is the difficulty and actually managing it so looking, kind of beyond the actual technology to the operations and the execution in production. And it sounds like you guys might have a few things up your sleeve to help people be more successful in in and actually kubernetes in production. >>Yeah, absolutely. So, uh, kubernetes is the focus of most of the companies in our space. Obviously, we think that we have some ideas for how we can, you know, really begin thio enable enable it to fulfill its promise as the operating system for the cloud eso. If we think about the ecosystem that's formed around kubernetes, uh, you know, it's it's now really being held back on Lee by adoption user adoption. And so that's where our focus in our product strategy really lives is around. How can we accelerate the move to kubernetes and accelerate the move to cloud native applications on? But in order to provide that acceleration catalyst, you need to be able to address the needs of not only the operators and make their lives easier while still giving them the tools they need for things like policy enforcement and operational insights. At the same time, Foster, you know, a grassroots, um, upswell of developer adoption within their company on bond Really help the I t. Operations team serve their customers the developers more effectively. >>Well, Dave, it sounds like a great event. We we had a great time covering those open stack events with you guys. We've covered the doctor events for years and years and years. Eso super engaged community and and thanks for, you know, inviting us back Thio to cover this inaugural event as well. So it should be terrific. Everyone just go to Miranda's dot com. The big pop up Will will jump up. You just click on the button and you can see the full agenda on get ready for about a month from now. When when the big reveal, September 16th will happen. Well, Dave, thanks for sharing this quick update with us. And I'm sure we're talking a lot more between now in, uh, in the 16 because I know there's a cube track in there, so we look forward to interview in our are our guests is part of the part of the program. >>Absolutely. Eso welcome everyone. Join us at the event and, uh, you know, stay tuned for the big reveal. >>Everybody loves a big reveal. All right, well, thanks a lot, Dave. So he's Dave. I'm Jeff. You're watching the Cube. Thanks for watching. We'll see you next time.

Published Date : Aug 26 2020

SUMMARY :

from the Cube Studios in Palo Alto in Boston, connecting with thought leaders all around the world. But you know, what we're seeing is a digital Happy to be here today. But you guys said, and the top one here in the Bay Area. invite the entire cloud native industry to join us and The Wares and the Winds in September 16th. participants in the community and eso we'd be happy to see you on So you guys purchase, um, assets from Docker at the end of last year, you know, focus more on the developer oriented, um, solution and So I'm looking at the at the website with with the agenda and there's a little teaser here right in the on the on the rocket ship as you noticed, and in the agenda it will be revealed So Goober Netease plays a huge part and you know, we talk a lot about kubernetes at all the events that we cover. some ideas for how we can, you know, really begin thio enable You just click on the button and you can see the full agenda on uh, you know, stay tuned for the big reveal. We'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AdrianPERSON

0.99+

September 16DATE

0.99+

DavePERSON

0.99+

JeffreyPERSON

0.99+

Dave Van EverenPERSON

0.99+

JeffPERSON

0.99+

EverenPERSON

0.99+

DockerORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

September 16thDATE

0.99+

Docker IncORGANIZATION

0.99+

Bay AreaLOCATION

0.99+

late MayDATE

0.99+

EnelPERSON

0.99+

mid November last yearDATE

0.99+

5000 peopleQUANTITY

0.99+

fourQUANTITY

0.99+

70,000 registrantsQUANTITY

0.99+

Dr EnterpriseORGANIZATION

0.99+

BostonLOCATION

0.99+

todayDATE

0.99+

MirantisORGANIZATION

0.99+

8 a.m.DATE

0.99+

first timeQUANTITY

0.98+

Docker Enterprise Container CloudTITLE

0.98+

Doctor EnterpriseORGANIZATION

0.98+

FosterPERSON

0.98+

Dr EnterpriseTITLE

0.98+

2014. 15DATE

0.98+

first stepQUANTITY

0.98+

80QUANTITY

0.98+

Docker Enterprise Container CloudTITLE

0.98+

6000QUANTITY

0.97+

this yearDATE

0.97+

Cube StudiosORGANIZATION

0.96+

late last yearDATE

0.96+

Container CloudTITLE

0.96+

five tracksQUANTITY

0.96+

EasterEVENT

0.96+

MorrisPERSON

0.96+

The Wares and the WindsEVENT

0.95+

MirandaPERSON

0.94+

DrPERSON

0.94+

Silicon ValleyLOCATION

0.94+

first timeQUANTITY

0.94+

once a yearQUANTITY

0.93+

eachQUANTITY

0.9+

endDATE

0.9+

couple of months agoDATE

0.88+

LaunchpadCOMMERCIAL_ITEM

0.88+

Apollo Alto studiosORGANIZATION

0.87+

CloudTITLE

0.87+

about a monthQUANTITY

0.86+

ThioPERSON

0.85+

WillPERSON

0.85+

MantisPERSON

0.84+

MirantORGANIZATION

0.84+

Thio DockerPERSON

0.83+

Doctor EnterpriseTITLE

0.82+

a monthQUANTITY

0.82+

KhanPERSON

0.81+

first major releaseQUANTITY

0.81+

last yearDATE

0.8+

2020DATE

0.8+

couple three months agoDATE

0.79+

yearsQUANTITY

0.79+

yearsDATE

0.75+

Moran Sous launchpad 2020EVENT

0.72+

ThioORGANIZATION

0.72+

LeePERSON

0.71+

MirandaORGANIZATION

0.7+

oneQUANTITY

0.69+

Container CloudTITLE

0.67+

monthsDATE

0.66+

Dr Enterprise PlatformTITLE

0.65+

76DATE

0.64+

Elaine Yeung, Holberton School | Open Source Summit 2017


 

(upbeat music) >> Narrator: Live from Los Angeles it's The Cube covering Open Source Summit North America 2017. Brought to you by the Lennox Foundation and Red Hat. >> Welcome back, everyone. Live in Los Angeles for The Cube's exclusive coverage of the Open Source Summit North America. I'm John Furrier, your host, with my co-host, Stu Miniman. Our next guest is Elaine Yeung, @egsy on Twitter, check her out. Student at Holberton School? >> At Holberton School. >> Holberton School. >> And that's in San Francisco? >> I'm like reffing the school right here. (laughs) >> Looking good. You look great, so. Open Source is a new generation. It's going to go from 64 million libraries to 400 million by 2026. New developers are coming in. It's a whole new vibe. >> Elaine: Right. >> What's your take on this, looking at this industry right now? Looking at all this old, the old guard, the new guard's coming in, a lot of cool things happening. Apple's new ARKit was announced today. You saw VR and ARs booming, multimedia. >> Elaine: Got that newer home button. Right, like I-- >> It's just killer stuff happening. >> Stu: (laughs) >> I mean, one of the reason why I wanted to go into tech, and this is why I, like, when I told them that I applied to Holberton School, was that I really think at whatever next social revolution we have, technology is going to be somehow interval to it. It's probably not even, like, an existing technology right now. And, as someone who's just, like, social justice-minded, I wanted to be able to contribute in that way, so. >> John: Yeah. >> And develop a skillset that way. >> Well, we saw the keynote, Christine Corbett Moran, was talking really hardcore about code driving culture. This is happening. >> Elaine: Right. So this is not, like, you know, maybe going to happen, we're starting to see it. We're starting to see the culture being shaped by code. And notions of ruling classes and elites potentially becoming democratized 100% because now software, the guys and gals doing it are acting on it and they have a mindset-- >> Elaine: Right. >> That come from a community. So this is interesting dynamic. As you look at that, do you think that's closer to reality? Where in your mind's eye do you see it? 'Cause you're in the front lines. You're young, a student, you're immersed in that, in all the action. I wish I was in your position and all these great AI libraries. You got TensorFlow from Google, you have all this goodness-- >> Elaine: Right. >> Kind of coming in, I mean-- >> So you're, so let me make sure I am hearing your question right. So, you're asking, like, how do I feel about the democratization of, like, educ-- >> John: Yeah, yeah. Do you feel it? Are you there? Is it happening faster? >> Well, I mean, things are happening faster. I mean, I didn't have any idea of, like, how to use a terminal before January. I didn't know, like, I didn't know my way around Lennox or GitHub, or how to push a commit, (laughs) until I started at Holberton School, so. In that sense, I'm actually experiencing this democratization of-- >> John: Yeah. >> Of education. The whole, like, reason I'm able to go to this school is because they actually invest in the students first, and we don't have to pay tuition when we enroll. It's only after we are hired or actually, until we have a job, and then we do an income-share agreement. So, like, it's really-- >> John: That's cool. >> It's really cool to have, like, a school where they're basically saying, like, "We trust in the education that we're going to give you "so strongly that you're not going to pay up front. >> John: Yeah. >> "Because we know you're going to get a solid job and "you'll pay us at that point-- >> John: Takes a lot of pressure off, too. >> Yeah. >> John: 'Cause then you don't have to worry about that overhang. >> Exactly! I wrote about that in my essay as well. Yeah, just, like because who wants to, like, worry about student debt, like, while you're studying? So, now I can fully focus on learning C, learning Python (laughs) (mumbles) and stuff. >> Alright, what's the coolest thing that you've done, that's cool, that you've gotten, like, motivated on 'cause you're getting your hands dirty, you get the addiction. >> Stu: (laughs) >> Take us through the day in the life of like, "Wow, this is a killer." >> Elaine: I don't know. Normally, (laughs) I'm just kind of a cool person, so I feel like everything I-- no, no. (laughs) >> John: That's a good, that's the best answer we heard. >> (laughs) Okay, so we had a battle, a rap battle, at my school of programming languages. And so, I wrote a rap about Bash scripts and (laughs) that is somewhere on the internet. And, I'm pretty sure that's, like, one of the coolest things. And actually, coming out here, one of my school leaders, Sylvain, he told me, he was like, "You should actually put that, "like, pretty, like, front and center on your "like, LinkedIn." Or whatever, my profile. And what was cool, was when I meet Linus yesterday, someone who had seen my rap was there and it's almost like it was, like, set up because he was like, "Oh, are you the one "that was rapping Bash?" And, I was like, "Well, why yes, that was me." (laughs) >> John: (laughs) >> And then Linus said it was like, what did he say? He was like, "Oh, that's like Weird Al level." Like, just the fact that I would make up a rap about Bash Scripts. (laughs) >> John: That's so cool. So, is that on your Twitter handle? Can we find that on your Twitter handle? >> Yes, you can. I will-- >> Okay, E-G-S-Y. >> Yes. >> So, Elaine, you won an award to be able to come to this show. What's your take been on the show so far? What was exciting about you? And, what's your experience been so far? >> To come to the Summit. >> Stu: Yeah. >> Well, so, when I was in education as a dean, we did a lot of backwards planning. And so, I think for me, like, that's just sort of (claps hands). I was looking into the future, and I knew that in October I would need to, like, start looking for an internship. And so, one of my hopes coming out here was that I would be able to expand my network. And so, like that has been already, like that has happened like more than I even expected in terms of being able to meet new people, come out here and just, like, learn new things, but also just like hear from all these, everyone's experience in the industry. Everyone's been just super awesome (laughs) and super positive here. >> Yeah. We usually find, especially at the Open Source shows, almost everyone's hiring. You know, there's huge demand for software developers. Maybe tell us a little bit about Holberton school, you know, and how they're helping, you know, ramp people up and be ready for kind of this world? >> Yeah. So, it's a two-year higher education alternative, and it is nine months of programming. So, we do, and that's split up into three months low-level, so we actually we did C, where we, you know, programmed our own shell, we programmed printf. Then after that we followed with high-levels. So we studied Python, and now we're in our CIS Admin track. So we're finishing out the last three months. And, like, throughout it there's been a little bit, like, intermix. Like, we did binary trees a couple weeks ago, and so that was back in C. And so, I love it when they're, like, throwing, like, C at us when we've been doing Python for a couple weeks, and I'm like, "Dammit, I have to put semicolons (laughs) >> John: (laughs) >> "And start compiling. "Why do we have to compile this?" Oh, anyway, so, offtrack. Okay, so after those nine months, and then it's a six month internship, and after that it's nine months of specialization. And so there's different spec-- you can specialize in high-level, low-level, they'll work with you in whatever you, whatever the student, their interests are in. And you can do that either full-time student or do it part-time. Which most of the students that are in the first batch that started in January 2016, they're, most of them are, like, still working, are still working, and then they're doing their nine month specialization as, like, part-time students. >> Final question for you, Elaine. Share your personal thoughts on, as you're immersed in the coding and learning, you see the community, you meet some great people here, network expanding, what are you excited about going forward? As you look out there, as you finish it up and getting involved, what's exciting to you in the world ahead of you? What do you think you're going to jump into? What's popping out and revealing itself to you? >> I think coming to the conference and hearing Jim speak about just how diversity is important and also hearing from multiple speakers and sessions about the importance of collaboration and contributions, I just feel like Lennox and Open Source, this whole movement is just a really, it's a step in the right direction, I believe. And it's just, I think the recognition that by being diverse that we are going to be stronger for it, that is super exciting to me. >> John: Yeah. >> Yeah, and I just hope to be able to-- >> John: Yeah (mumbles) >> I mean, I know I'm going to be able to add to that soon. (laughs) >> Well, you certainly are. Thanks for coming on The Cube. Congratulations on your success. Thanks for coming, appreciate it. >> Elaine: Thank you, thank you. >> And this is The Cube coverage, live in LA, for Open Source Summit North America. I'm John Furrier, Stu Miniman. More live coverage after this short break. (upbeat music)

Published Date : Sep 12 2017

SUMMARY :

Brought to you by the Lennox Foundation and Red Hat. of the Open Source Summit North America. I'm like reffing the school It's going to go from 64 million libraries What's your take on this, Elaine: Got that newer I mean, one of the reason why I wanted to go into tech, Well, we saw the keynote, Christine Corbett Moran, you know, maybe going to happen, As you look at that, do you think that's closer to reality? so let me make sure I am hearing your question right. Do you feel it? I mean, I didn't have any idea of, like, and we don't have to pay tuition when we enroll. "so strongly that you're not going to pay up front. John: Takes a lot John: 'Cause then you don't have to worry (laughs) (mumbles) and stuff. you get the addiction. "Wow, this is a killer." Elaine: I don't know. that's the best answer we heard. and (laughs) that is somewhere on the internet. And then Linus said it was like, what did he say? So, is that on your Twitter handle? Yes, you can. So, Elaine, you won an award And so, like that has been already, you know, and how they're helping, you know, and so that was back in C. And you can do that either full-time student What do you think you're going to jump into? that by being diverse that we are going to be stronger for it, I mean, I know I'm going to Well, you certainly are. And this is The Cube coverage, live in LA,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ElainePERSON

0.99+

JohnPERSON

0.99+

LinusPERSON

0.99+

Elaine YeungPERSON

0.99+

Stu MinimanPERSON

0.99+

January 2016DATE

0.99+

SylvainPERSON

0.99+

John FurrierPERSON

0.99+

Red HatORGANIZATION

0.99+

JimPERSON

0.99+

San FranciscoLOCATION

0.99+

OctoberDATE

0.99+

nine monthsQUANTITY

0.99+

LALOCATION

0.99+

two-yearQUANTITY

0.99+

Christine Corbett MoranPERSON

0.99+

AppleORGANIZATION

0.99+

100%QUANTITY

0.99+

Lennox FoundationORGANIZATION

0.99+

six monthQUANTITY

0.99+

PythonTITLE

0.99+

Los AngelesLOCATION

0.99+

Holberton SchoolORGANIZATION

0.99+

GitHubORGANIZATION

0.99+

three monthsQUANTITY

0.99+

nine monthQUANTITY

0.99+

400 millionQUANTITY

0.99+

Open Source Summit North AmericaEVENT

0.99+

@egsyPERSON

0.99+

yesterdayDATE

0.99+

StuPERSON

0.99+

LennoxORGANIZATION

0.98+

2026DATE

0.98+

Open Source SummitEVENT

0.98+

Open Source Summit North America 2017EVENT

0.98+

GoogleORGANIZATION

0.97+

oneQUANTITY

0.96+

Weird AlPERSON

0.95+

ARKitTITLE

0.95+

firstQUANTITY

0.95+

first batchQUANTITY

0.94+

todayDATE

0.93+

Open Source Summit 2017EVENT

0.92+

64 million librariesQUANTITY

0.89+

The CubeORGANIZATION

0.89+

Bash ScriptsTITLE

0.88+

CTITLE

0.88+

LinkedInORGANIZATION

0.87+

OpenORGANIZATION

0.86+

couple weeks agoDATE

0.85+

North AmericaLOCATION

0.81+

Holberton schoolORGANIZATION

0.81+

TwitterORGANIZATION

0.78+

TensorFlowTITLE

0.76+

C.TITLE

0.68+

BashTITLE

0.68+

E-G-S-YTITLE

0.65+

schoolQUANTITY

0.64+

last three monthsDATE

0.58+

Chris Aniszczyk, CNCF | Open Source Summit 2017


 

(gentle music) >> Announcer: Live, from Los Angeles, it's theCUBE, covering Open Source Summit, North America, 2017, brought to you by the Linux Foundation and Red Hat. >> Okay welcome back, and we're live here in Los Angeles, this is theCUBE's exclusive coverage of the Linux Foundation's Open Source Summit North America. I'm John Furrier, your host with my co-host Stu Miniman. Our next guest is Chris Aniszczyk, who's the COO, Chief Operating Officer of the CNCF, the Cloud Native Compute Foundation, formerly Cube-Con, Cloud Native Foundation, all rolled into the most popular Linux Foundation project right now, very fashionable, cloud native, running on native clouds, Chris welcome back to theCUBE, good to see you. >> Awesome, it's been a while, great to be back. >> So you are the Chief Operating Officer of the hottest project, to me at least, in the Foundation. Not the most important, because there's a lot of really important, everything's important, you don't pick a favorite child, but, if one's trending, the CNCF is certainly trending, it's got the most sponsors, it's got the most participants, there's so much action going on, there's so much change and opportunity, around Kubernetes, around containers, around writing cloud-native applications. You've guys have really put together a nice foundation around that, nice group, congratulations. >> Thank you. >> Take a step back and explain to us, what the hell is the CNCF? We know what it is, we were there present at creation, but it's super-important, it's growing in relevance every day. Take a minute to explain. >> So I mean, you know, CNCF is all about providing a neutral home for cloud-native technology, and it's been about almost two years since our first board meeting and the idea was, there's a certain set of technology out there that are essentially micro-service-based, that live in containers that are centrally orchestrated by some process, right, that's essentially what we mean when we say cloud-native, right, and CNCF was seeded with Kubernetes as its first project, and as we've seen over the last couple of years, Kubernetes has grown quite well, they have a large community, diverse contributor base, and have done kind of extremely well. They're one of actually the fastest, highest velocity open source projects out there, maybe only, compared to the kernel is maybe a little bit faster but it's just great to kind of see it growing. >> Why is it so hot right now? What's the catalyst? >> So I think if we kind of step back and we look at the trends in industry, right, more and more companies are becoming software companies, you know, folks like John Deere, building IoT platforms. You need some type of infrastructure to run this stuff, and especially at scale. You know, imagine sensors in every tractor, farm or in every vehicle, you're going to need serious infrastructure and cloud native really is a way to scale those type of infrastructure needs and so this is kind of I think why you're seeing a lot of interest being piqued in CNCF-related technology. >> A lot of prototypes too. >> Chris, see you know, it's interesting, I look back you know, a year or two ago, and it was like, oh, it was like the orchestration wars, it was Swarm versus Mesos, and now I look at it in the last year it's like, wait, Mesos fully embracing it, MesosCon they're going to be talking about how Mesos is the best place to you know, Kubernetes on DCOS, containerd now part of the container wars, so the container wars, we're going to talk about OCI, you know, Amazon, Microsoft, of course Google, out there at the beginning. Is there anybody that's not on board that Kubernetes... >> I mean we really have the top five cloud providers in the world, depending on what metrics you look at, part of CNCF, you know there's some others out there that still aren't fully part of the family. Hopefully if you stay tuned over the next week or so you may hear some announcements coming from CNCF of other large cloudy-type companies joining the family. >> Every week there's a new platinum sponsor (Chris laughs) and you guys are getting a check every week it seems. >> To me it's great to see companies stepping up to the play and actually sustaining open source foundations that are critical to the actual business, and I think that it's great to see this involvement. So to me I'm personally thrilled, 'cause otherwise we'd be in a situation where if the top five cloud providers in the world weren't part of CNCF, maybe they'd be trying to do their own initiative, so it's great that we have these companies at the table, and all trying to build, you know, find their own pathway to cloud-native. >> You guys are hyper-growth right now, and you're new too, you're still kind of you know, >> Chris: Less than two years old! >> I mean it's amazing. So I want to put a little Jim Zemlin test to you, (Chris laughs) which is, in his keynote today he talked about, this is the big kind of event for the whole community of open source to come together, and again, you're talking 64 million libraries out there now. He projected by 2026, 400 million, it literally is a hockey stick growth, so you got growth there, so he talked about four things, my summary. Project health, so healthiness, sustainability, secure code, training, new members. What's your strategy re those four things? Keeping the CNCF healthy, you don't eat too much and choke on all of that growth... >> Yeah, so in terms of projects, we have a very unique governance structure in place when we designed CNCF. So we kind of have this independent technical operating committee, we kind of jokingly refer to them as a technical supreme court, but they are made up of people from, kind of luminaries in the container cloud-native space, they're from competing companies too, but they try to really wear an independent hat and make sure that we're, projects that we're accepting are high quality, are a good fit for the foundation, and so it's actually fairly hard to get a project in CNCF, 'cause it really requires the blessing of this TOC. So, even though we have 10 projects now in about two years, I think that's about a project every two months, which is an okay pace. The other unique thing that we're doing is we have different levels of projects, we have inception, incubation and graduation. Right now, we have no graduated projects in CNCF, believe it or not, Kubernetes has not graduated yet because they're still finalizing their governance for the project and they're almost there. Once they do that, they'll most likely graduate. >> They'll walk cap and gown all nine yards, eh? >> Exactly, it'll be great. December we'll have the cap and gown ceremony. But the other unique thing is we're not, we do annual kind of reviews for some of our projects, certain levels will be annually reviewed, and if they're not longer healthy or a good fit, we're okay archiving them, or telling, you know, telling them you know, maybe you're not a good fit anymore for the foundation, or you know. And so I think you have to have a process in place where sometimes you do have to move things to the attic. >> Do you have a high bar on the projects >> The initial bar is extremely, extremely high, and I think over time, we may see some projects that get recycled or moved to the attic, or maybe they maybe merged together, we'll see, so we're thinking about this already, so... >> John: Okay, security? >> Security, so we, all projects in CNCF that graduate have to partake in the core infrastructures best practices badging program, so if the CII has this great effort that is basically helping to ensure projects meet a minimal level of best practices that make their projects secure. You know, it doesn't give you like full-blown guarantee, but these are good practices. >> So you were leveraging pre-existing work, classic, open-source ethos. >> Exactly, and they have like a set of domain experts completely focused on security building out these practices and you'll notice Kubernetes recently merged in the CII Best Practices badge, so if you go to the readme, you'll actually see it, and you'll click through and you'll see all the things that they've had to sign off and check on that they participate in, and so all of our projects are kind of going >> Training. >> Training, yeah, we just recently announced couple things. One is we have a >> Looking good so far, you get an A plus. >> Yeah, so as of today we've launched the Certified Kubernetes Administrator Program or CKA for short. So we have folks that are getting trained on, and are having official stamps that they are certified Kubernetes administrators, and to me that's huge, given like how hot the space is, having some stamp of approval that they are really certified in the space is huge. So we also offer free training through edX, so we launched some training courses earlier, and to be honest, if you look at our member companies, lots of great folks out there providing training material. >> So one of the keynotes that Christine Corbett Moran was talking about in her keynote was, more inclusion so there's no ruling class. Now I know you really have a ruling class going on with your high bar, I get that. How are you getting new members in, what's the strategy, who are the new members, how are you going to manage the perception possibly that a few people control the swing votes at potentially big projects? >> So here what's interesting is, people joining CNCF, like I mentioned before, we have a TOC, right? So there's kind of this separation of, I don't say church and state, but like, so the governing board, people who pay to join CNCF, they pay to sustain our open source projects, and so essentially they help with, they pay for marketing, staff, events and so on. They actually don't have technical influence over the projects. You don't have to be a member to have technical influence over our projects. People join CNCF because they want to have a say in the overall budget of how marketing, events and stuff, and just overall support the organization. But on the technical side, there's this kind of firewall, there's an independent TOC, they make the technical decisions. You can't really pay to join that at all, you have to actually be heavily participating in that community. >> John: How does someone get in that group? Is there a code? >> They have to just be like a luminary, we have a kind of election process that happens every two or three years, depending on how things are structured, and it's independently elected by the CNCF member community, essentially, is the simplest way I can explain it. >> The other announcement you talked about, kind of the individual certification, but the KCSP sort of programs >> Correct, exactly. >> Maybe you can tell us a little bit about that. >> Yes, so we had a program set up so it's Kubernetes Certified Service Provider, KCSP, that basically >> rolls right off the tongue >> I know, right, exactly. Herbal space program, whatever, I think of sometimes video games when we say it, but essentially, the program was put in place that a lot of end users out there in companies that are new to cloud native, and they're new to Kubernetes, essentially want to find a trusted set of partners that they can rely on, services and other things, so we created KCSP as a way to vet a certain set of companies that have at least a minimum of three people that have passed the Kubernetes certification exam that I talked about, and are essentially participating upstream in some way actively in the Kubernetes community. So we got a couple handfuls of companies that have launched, which is great, and so now, given that we're growing so fast, companies out there that are early end users that are exploring the space now have a trusted set of companies that go look at, and we're hoping to grow that program over time too. So this is just phase one. >> All right, so Chris, the other thing that I want to make sure we talk about, the Open Container Initiative, so I think it was originally OCP, which of course is, >> Open Container Project which when OCP was announced, it was like, okay, the cold war of Docker versus CoreOS versus everybody else, (Chris laughs) trying to figure out what that container format was, we all shook hands, I took a nice selfie with Ben who was CEO at the time, and everybody. So 1.0 is out. So, container's fully mature, ready to be rolled out right? But what does it mean? >> So I mean it's funny 'cause I basically joined the Linux Foundation, to help both start CNCF and OCI around the same time, right, and OCI was very narrowly scoped to only care about a small set of container-specific issues. One around how do you actually really run containers, start, stop, all that kind of life cycle bit, and how are containers laid out on disk, we call that the image specification. So you have the runtime spec and the image spec, and those are just very limited core pieces, like that OCI was not opinionated on networking or storage or any of, those are all left to other initiatives. And so after almost two years, we shipped 1.0, we got basically all the major container players to agree that this is 1.0 and we're going to build off from this, and so if you look at Docker with it's containerd project, or you know, fully adopting OCI, the Mesos community is, Cloud Foundry, even AWS announced their container register's supporting OCI, so we got the 1.0 out there, now we're going to see an abundance of people building tools and other things. I think you'll see more end users out there exploring containers. I've talked to a lot of companies that I can't necessarily name, but there's a lot of folks out there that may not dive into container technology until there is actually a mature standard and they feel like this technology is just not going to go away or they're going to get locked into some specific platforms. So, with 1.0 out the door, you'll see over the next six to 12 months, more tools being built. We're actually working to roll out a certification program so you get that nice little, you know, hey, this product is OCI-certified and supports the spec, so you'll see that happen over the next... >> Okay, so you've got the runtime spec and the image format spec, >> Yep, those are the two big ones. >> All 1.0, we're ready to roll, what's the roadmap >> Yeah, what's next. So there are early discussions about what other mature areas are out there kind of in container land right now. There are some discussions around distribution, so having a standard API to basically fetch and push container images out there. If you look at it, each container registry has basically a different set of APIs, and wouldn't it be nice if we could all kind of easily work together and have maybe one set, a way to kind of distribute these things. So there are some early discussions around potentially building out a distribution specification, but that's something that the technical community has to decide within OCI to do, and so over the next couple of months we're having some meetings, we're doing a bigger meeting at DockerCon Europe coming up in October to basically try to figure out what's really next. So I think after we shipped 1.0 a lot of people took a little bit of a breather, a break, and say like, congratulate themselves, take some vacation over the summer, and now we're going to get back into the full swing of things over the next couple of months. >> Say, what's the big conversation here, obviously at your event in Austin, it's got a plug for, theCUBE will be live covering it as well. >> I know, I'm excited. >> What's the uptake, what's the conversation in the hallways, any meetings, give us some >> Yeah, so we're doing >> I know there's some big announcement coming on Wednesday, there's some stuff happening >> Yeah, so, you know, first coming Wednesday, so like I mentioned, we have 10 projects right now in CNCF. We have two projects currently out for vote. So one of them is Envoy. There's a company you've probably heard of, Lyft, ride-sharing company, but Envoy essentially is their fancy service mesh that powers the Lyft platform, and many other companies out there are actually taking advantage of Envoy. Google's playing around with it, integrating into the Istio project, which is pretty powerful, but Envoy is currently, it was invited by the TOC for a formal vote, the voting period started last week, so we're collecting votes from the nine TOC members, and once that voting period is hopefully we can announce whether the project was accepted or not. The other project in the pipeline is a project called Jaeger, which is from Uber, you know, nice to have Uber >> John: Jaegermeister. >> Yeah, Jaegermeister, a bit like it. It's nice to have a product from Uber, another product from Lyft, kind of it's nice to see >> And if you have too much Jaeger, you have to take the Lyft to get home, right? >> Exactly, correct. So you know, just like Envoy, Jaeger is, you know, was formally invited by the TOC, it's out for vote, and hopefully we'll count the votes soon and figure out if it gets accepted or not. So Jaeger is focused on distributed tracing, so one problem in micro-services land is once you kind of like refactor your application to kind of be micro-services-based, actually tracing and figuring out what happens when things go wrong is hard, and you need a really good set of distributed tracing tools, 'cause otherwise it's like the worst murder mystery, you have like no idea what's happened, so having solid distributed tracing solution like Jaeger is great, 'cause in CNCF we're going to have a project called OpenTracing, but that's just kind of like the spec of how you do things, there's no full-blown client-server distributed >> For instance you usually need it for manageability >> Exactly, and that's what Jaeger provides, and I'm excited to kind of have these two projects under consideration in CNCF. >> Is manageability the hottest thing going on right now in terms of conversations? (Chris sighs) Or is it more stability and getting projects graduating? >> Yeah, so like our big focus is like, we want to see projects graduate, kind of meet the minimum bar that the TOC set up for graduated projects. In terms of other hot areas that are under discussion in CNCF are storage, so for example we have a storage working group that's been working hard to kind of bring in all the vendors and different storage folks together, and there's some early work called the container storage interface, we call it CSI for short, and so you know there's another project at CNCF called CNI, which basically tried to build a standard around how networking is done in container land. CSI is doing the same thing because, you know, it's no fun rewriting your storage drivers for all the different orchestration systems out there, and so why not get together and build out a standard that is used by Kubernetes, by Mesos, by Cloud Foundry, by Docker, and just have it so they all work across these things. So that's what's happening, and it's still early days, but there's a lot of excitement in that. >> Okay, the event in Austin, what can people expect? Cube-Con. >> You're literally going to have the biggest gathering of Kubernetes and cloud-native talent. It's actually going to be one of our biggest events probably for the Linux Foundation at all. We're probably going to get 3-4,000 people minimum out there, and I'm stoked, we're going to have some... Schedule's not fully announced yet. I do secretly know some of the keynotes potentially, but just wait for that announcement, I promise you it's going to be great. >> And one question I get, just I thought I'd bring it up since you're here in the hot seat, lot of people coming in with, supporting you guys on the governing side, not even cyclical. How are you going to service them, how are you going to scale up, do you have confidence that you have the ability to execute against those sponsorships, support the members, what's your plan, can you share some insights, clarify that? >> You know, pressure makes diamonds, right? We have a lot of people at the right table, and we are doing some hiring, so we have a couple spots open for developer advocacy, technical writing, you know additive things that help our project overall. We're also trying to hire a head of marketing. So like, we are in the process of expanding the organization. >> Do you feel comfortable... >> I feel comfortable, like things are growing, things are moving at a fast clip, but we're doing the best we can to hire and don't be surprised if you hear some announcements soon about some fun hires. >> Well it's been great for us covering, we've been present and creating, if you will, this movement, which has been kind of cool, because it kind of a confluence of a couple of things coming together. >> Chris: Yeah, absolutely. >> It's just been really fun to watch, just the momentum from the cloud really early days, 2009 timeframe to now, it's been a real nice ride and congratulations to the entire community. >> Thank you, like for me it's just exciting to have all these companies sitting together at the same table, having Amazon join, and the other top fighters, all basically committing to saying, we are in the cloud-native, we may have different ways of getting there, but we're all committed working together at some level. So I'm stoked. >> Great momentum, and you guys doing some great work, congratulations. >> Thank you very much. >> And you know it's working when I get focused, hey can you, so and so, I'm like, oh yeah, no problem, oh wow, they're big time now, you guys are big time. Congratulations. >> Thank you, it's in phase one now, like we have the right people at the table >> Don't screw it up! (John and Chris laugh) As they say. It's on yours. Chris Aniszczyk, who's the COO of the Cloud Native Compute Foundation, the hottest area of Linux Foundation right now, a lot of action on cloud, cloud-native developers where DevOps is meeting, lot of progress in application development. Still, they're really only two years old, get involved, more inclusion the better. It's theCUBE, Cube coverage of CNCF. We'll be in Austin in December. >> Chris: Yep, six to eight. >> December 6 to 8, we'll be there live. More live coverage coming back in Los Angeles here for the Open Source Summit North America after this short break.

Published Date : Sep 12 2017

SUMMARY :

brought to you by the Linux Foundation and Red Hat. of the CNCF, the Cloud Native Compute Foundation, of the hottest project, to me at least, in the Foundation. Take a step back and explain to us, So I mean, you know, CNCF is all about and so this is kind of I think why you're seeing a lot talking about how Mesos is the best place to you know, in the world, depending on what metrics you look at, and you guys are getting a check every week it seems. and all trying to build, you know, find their own Keeping the CNCF healthy, you don't eat too much and so it's actually fairly hard to get a project in CNCF, for the foundation, or you know. and I think over time, we may see some projects so if the CII has this great effort So you were leveraging pre-existing work, One is we have a you get an A plus. and to be honest, if you look at our member companies, So one of the keynotes that Christine Corbett Moran and just overall support the organization. is the simplest way I can explain it. and they're new to Kubernetes, the cold war of Docker versus CoreOS the Linux Foundation, to help both start CNCF and OCI All 1.0, we're ready to roll, and so over the next couple of months Say, what's the big conversation here, and once that voting period is hopefully we can announce It's nice to have a product from Uber, the spec of how you do things, and I'm excited to kind of have these two projects CSI is doing the same thing because, you know, Okay, the event in Austin, what can people expect? I do secretly know some of the keynotes potentially, lot of people coming in with, supporting you guys We have a lot of people at the right table, and don't be surprised if you we've been present and creating, if you will, and congratulations to the entire community. having Amazon join, and the other top fighters, and you guys doing some great work, congratulations. And you know it's working when I get focused, the hottest area of Linux Foundation right now, for the Open Source Summit North America

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

AmazonORGANIZATION

0.99+

Chris AniszczykPERSON

0.99+

JohnPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Cloud Native Compute FoundationORGANIZATION

0.99+

10 projectsQUANTITY

0.99+

Stu MinimanPERSON

0.99+

Linux FoundationORGANIZATION

0.99+

BenPERSON

0.99+

Red HatORGANIZATION

0.99+

AWSORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Cloud Native FoundationORGANIZATION

0.99+

UberORGANIZATION

0.99+

two projectsQUANTITY

0.99+

John FurrierPERSON

0.99+

OCIORGANIZATION

0.99+

Los AngelesLOCATION

0.99+

OctoberDATE

0.99+

CNCFORGANIZATION

0.99+

WednesdayDATE

0.99+

DecemberDATE

0.99+

AustinLOCATION

0.99+

Cube-ConORGANIZATION

0.99+

oneQUANTITY

0.99+

Christine Corbett MoranPERSON

0.99+

LyftORGANIZATION

0.99+

2026DATE

0.99+

JaegermeisterPERSON

0.99+

2009DATE

0.99+

Jim ZemlinPERSON

0.99+

last weekDATE

0.99+

last yearDATE

0.99+

CNIORGANIZATION

0.99+

JaegerPERSON

0.99+

John DeereORGANIZATION

0.99+

nineQUANTITY

0.99+

one setQUANTITY

0.99+

EnvoyORGANIZATION

0.99+

todayDATE

0.99+

first projectQUANTITY

0.99+

8DATE

0.98+

3-4,000 peopleQUANTITY

0.98+

three yearsQUANTITY

0.98+

December 6DATE

0.98+

Cloud FoundryORGANIZATION

0.98+

Open Source SummitEVENT

0.98+

Susan Blocher, HPE - HPE Discover 2017


 

>> Announcer: Live from Las Vegas, it's the Cube, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. (techno music) >> Okay, welcome back everyone. We are here live in Las Vegas for the Cube's exclusive three-day coverage, we're on day two of HPE, Hewlett Packard Enterprise Discover 2017. I'm John Furrier, my cohost Dave Vellante. Our next guest is Susan Blocher, Vice President of Marketing, Data Center Infrastructure Group, part of Hewlett Packard Enterprise. Welcome back to the Cube, great to see you. >> Great to see you both again. >> So a lot of great stuff, so I want to just going to, a lot of buzz, Gen-10, a lot of new capabilities. Let's get right into it. Hard news, what's the update? What's going down at the show? >> We made the magic happen for this discoverer. It's just really exciting. So, hard news, we really focused on three areas for our customers: new levels of agility across their hybrid IT infrastructure, which means automation, better performance enhancements, taking things that used to be very manual and making them run sort of seamlessly, number one. Number two, security, and we're talking breakthrough security. So this is where we've been able to leverage some unique opportunities like the fact that we build our own silicane to put a silicane root of trust, or an immutable fingerprint, right into our silicane that can never be changed. That fingerprint will not let infected firmware startup. No matter, you know, as long as the firmware is the right firmware, it'll boot up seamlessly. If it's in any way been compromised you will, it will not let that server boot up. Super secure infrastructure. Last but not least, our customers were telling us economic control. We need economic control. We need cloud-like economics. We need pay-as-you-go. We need the ability to get capacity on demand. Those are the things that we really innovated on for this year. >> One of the things that's coming out, we had Bob Moran on the security thing. >> Yeah. >> And James Morrison >> Yes. >> I want to call him Jim Morrison >> Oh, exciting. >> I used to be >> I know. >> a big Doors fan. >> Yeah. >> I respect his name, I'm sure he gets that all the time. >> Yeah. The security on the silicane >> Yeah. >> is interesting to me because now you're seeing things like block changes, immutable environments wherevpeople have this trust relationship. That really hits the ransomeware side of things really in a big way. >> Susan: Yes. >> What else is that hitting? That is, to me, the big news, is that, is the security at the server-level because there's no perimeter anymore in this cloud-like environment, so this is kind of a cool way, explain more, just take minute to talk about the security piece because I think to me that's game-changer. >> It's super fascinating, and you know, I'm quoting somebody else, so I'm blatantly stealing somebody's line, but I was reading an article where somebody said firmware is a cesspool of Trojan horse opportunity for cyber attackers, and that took me aback because I was like, Boy, those are some strong words there, but really with all of the investment that companies have made over the years in data security, application security, network security, no one was focusing on the servers, and frankly, there's a million lines of code, and I'm sure Bob said that, there's a million lines of code booting up to get your servers up and running that no one has protected up until now. And so, we recognize about two years ago, that this was a huge threat, and increasing everyday, and boy, two years later, we're in the nick of time, to give customers really the peaace of mind of that security. >> One of the things that Wikibon just put out in terms of reports on research that I find fascinating that ties into this trend that I want to get your reaction on is, I think they're the only research firm that put this out actually, is they actually size the true private cloud market at about 260 billion, and that's not including the hybrid piece. That means, on-prem, cloud-like capabilities for on-premise data centers, which means, hey, that's not really going away, so it points to that narrative that, oh, data centers are moving to the cloud, so that's kind of probably not going to happen any time soon, but the cloud-like capabilities are there. But one of the interesting stats is that, is billions of dollars in cost-shift from labor, to hire differentiated, higher yield, or differentiated stuff inside the organization. So IT's not getting smaller, it's getting changing. >> Susan: That's right. >> So, how are you guys taking the Gen-10 and other things, and helping customers abstract away those tasks? >> Yeah, exactly. So look, all of our customers are really doing hybrid IT now, and so they're doing some things on-premise, they're doing some things off-premise, and frankly, it makes sense. But there's a tremendous amount of compromises that they have to make on both sides of the coin, and so what we've been talking about, a new compute experience, and that's really what we mean. It's not saying that you should have everything on-premise, or that you should move everything to the cloud. It's really saying, how do we give you the best end to end experience across agility, security, and economic control, so that the trade-offs that you're making, are not trade-offs on the pros or cons of those side loads of IT, but really looking at it from a what kind of business outcomes do I need to drive, and that's how I make my decisions. >> So, if you go back to around 2010, John, we were talking the Cube about a couple of observations. And it sort of coincided with the ascendancy of the public cloud. We said that the hyper-scale guys will spend time, engineering time, to save money, and then automate stuff, but the Enterprise guys, they'll spend money to save time. They don't have all of those engineering resources and we talked about that for a while, and it kind of got old and sort of boring. Fast-forward to 2017, and that's exactly what happened is vendors have put in a lot of effort to create cloud-like capabilities, and to John's point, is you're seeing a shift in staffing away from undifferentiated stuff, so talk about what that means for the data center infrastructure group, sort of how you position and how you talk to customers and message them about your role and how you add value. >> Yeah, absolutely. So look, first of all, we don't talk about just data center infrastructure. I think that's really where it starts because frankly, customers are talking about their data, they're talking about their applications, they're talking about how to bring intelligence to their hybrid IT experience, and so what we're talking to them about, is really how do we bring that together for them? We're talking about software-defined intelligence, how we're leveraging HPE One View to automate the deployment of applications across what could be a complex apology, but doing it absolutely in an automated seamless way. We're talking about how we're taking iLo and building the security in, but we're also doing things like intelligence system tuning where we're partnering with Intel and really figuring out how to take what is the Intel turbo-boost mode, from their processors, and make it even better. And so a lot of applications can't take advantage of the turbo-boost mode because there's a bit of when you hit that high frequency, you get a little bit of jitter, and that jitter creates latency, and so a lot of applications like core banking, video streaming, high frequency trading, they can't use turbo-mode because of that jitter that creates latency. We've been able to figure out, partnering with Intel, how to dampen a little bit of that speed, but still get turbo-mood and eliminate that jitter, so no latency. For the first time, these applications have been able to take advantage of turbo-mode. And what we figured out is even though we dampened it a little bit, they actually perform better with that little bit of dampening than they would've if we had shot them up with full turbo mode, right? So super exciting innovations with that. >> Sounds like Pied Piper. (laughter) >> But this is the kind of innovation that's going on in the systems world, and another observation we've seen on the Cube is, we go to a lot of events, is that systems is back. There's kind of an under current going on in the industry where hardware and operating systems folks are now part of big transformations, whether it's hyper-scale or in-service providor and Enterprise, so how are you guys looking at the compute differently if the notion of a server is shifting, and they're maybe consuming IT differently, where the channel partner might become a provider, and all these things are going on, how do you guys look at this new style of computer, our Meg says the changing landscape of compute. >> The changing landscape. It's all about really understanding our customers, and who they are, and how we can look at their unique needs and then segment our value and our portfolio toward them, so you talked about hyper-scale users, like service providers, cloud service providers, small and medium sized businesses, Enterprise customers, Telco environments, high-performance computing, super computing. What we realize is that one size does not fit all, and that's really what it comes down to, and that's one of the trade-offs of the public cloud environment, there's lots of good things about public cloud, but one of the trade-offs is it's kind of commodity hardware and one size fits all, but if you're trying to do any kind of mission critical applications, like I said, high frequency trading, you need super computing capabilities, you need deep analytics, machine-learning, whatever the case might be, it's not... You really need to specialize the infrastructure, and HPE is right there working with our customers regardless of their needs and their segments, we've got the solutions that will help them do that. >> So one of the things I'm inferring from some of your comments, I want to ask you about marketing. I always struggle with marketing. (laughter) You're shifting the message from product, product, product to business impact. >> Susan: Yes! >> Okay, that's clear. What else is working in marketing these days? It's never one silver bullet, but there's belly to belly, there's events like this, there's obviously old-school email marketing, there's social media. What are you finding as a marketing problem? >> We talk a lot about digital transformation for our customers, but digital transformation has come to marketing, so that's the biggest thing. We have made a huge shift at Hewlett Packard Enterprise in digital marketing. So everything that we're doing, even an event like this, which is physical, but it used to be kind of a one-off. We do all this prep, and then the week would go by, and it would disappear, and that would be the end of it. We're learning to build snackable content assets that have life after life after life, we're really embracing the social media, we've built a whole new digital marketing platform, we've shifted from what I would call traditional demand generation into really reaching our customers through digital marketing in every country globally. Huge, amazing metamorphosis, and frankly, with the announcement of the new HPE compute experience, and the Gen-10 platform, and the world's most secure industry-standard servers, it is the perfect timing of bringing all of this incredible innovation of technology to market at the same time that we're innovating around marketing, so the next 12 months, it's going to be super exciting. >> Eating your own innovations, if it were. >> That's right, that's right. >> Congratulations on the Gen-10 launch, and all the great goodness you guys got going on the security thing, a big deal. >> A big deal. >> Looking forward to following up on that further after the show, to keep it going. Certainly, there's digital aspects here in the Cube will be available on Youtube.com, slash and the name of course, the Cube Gems and highlights, all available. Thanks so much for joining us on the Cube, really appreciate it, more live coverage from HP Discover 2017. After this short break, stay with us. I'm John Furrier with my co-host Dave Vellante. We'll be right back. (techno music)

Published Date : Jun 7 2017

SUMMARY :

Brought to you by Hewlett Packard Enterprise. for the Cube's exclusive three-day coverage, What's going down at the show? We need the ability to get capacity on demand. One of the things that's coming out, we had Bob Moran all the time. The security on the silicane That really hits the ransomeware side of things is the security at the server-level that companies have made over the years and that's not including the hybrid piece. so that the trade-offs that you're making, We said that the hyper-scale guys and building the security in, Sounds like Pied Piper. that's going on in the systems world, and that's one of the trade-offs So one of the things I'm inferring from some but there's belly to belly, so the next 12 months, it's going to be super exciting. and all the great goodness you guys got going on after the show, to keep it going.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Susan BlocherPERSON

0.99+

Jim MorrisonPERSON

0.99+

JohnPERSON

0.99+

SusanPERSON

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

James MorrisonPERSON

0.99+

BobPERSON

0.99+

John FurrierPERSON

0.99+

Bob MoranPERSON

0.99+

2017DATE

0.99+

Las VegasLOCATION

0.99+

three-dayQUANTITY

0.99+

first timeQUANTITY

0.99+

billions of dollarsQUANTITY

0.99+

two years laterDATE

0.99+

WikibonORGANIZATION

0.99+

IntelORGANIZATION

0.98+

both sidesQUANTITY

0.98+

a million linesQUANTITY

0.98+

about 260 billionQUANTITY

0.97+

oneQUANTITY

0.97+

OneQUANTITY

0.96+

three areasQUANTITY

0.96+

one sizeQUANTITY

0.95+

bothQUANTITY

0.95+

one silver bulletQUANTITY

0.95+

CubeCOMMERCIAL_ITEM

0.95+

MegPERSON

0.94+

this yearDATE

0.94+

day twoQUANTITY

0.94+

Gen-10OTHER

0.93+

iLoTITLE

0.93+

HPEORGANIZATION

0.93+

2010DATE

0.83+

Pied PiperPERSON

0.81+

Data Center Infrastructure GroupORGANIZATION

0.79+

about two years agoDATE

0.77+

Packard Enterprise Discover 2017EVENT

0.77+

next 12 monthsDATE

0.75+

HewlettORGANIZATION

0.74+

Number twoQUANTITY

0.73+

Youtube.comORGANIZATION

0.72+

Gen-10QUANTITY

0.71+

TelcoORGANIZATION

0.69+

HPEEVENT

0.68+

VicePERSON

0.65+

One ViewCOMMERCIAL_ITEM

0.62+

HPORGANIZATION

0.62+

HPE Discover 2017EVENT

0.61+

HPETITLE

0.59+

DoorsORGANIZATION

0.57+

coupleQUANTITY

0.5+

Discover 2017EVENT

0.42+