
Search Results for YAML:

Satish Puranam & Rebecca Riss, Ford | KubeCon + CloudNativeCon NA 2022


 

(bright music) (crowd talking indistinctly in the background) >> Hey guys, welcome back to Detroit, Michigan. theCUBE is live at KubeCon + CloudNativeCon 2022. You might notice something really unique here. Lisa Martin with our newest co-host of theCUBE, Savannah Peterson! Savannah, it's great to see you. >> It's so good to be here with you (laughs). >> I know, I know. We have a great segment coming up. I always love talking about a couple of things: cars, one, and two, companies that have been around for a hundred-plus years and how they've actually transformed. >> Oh yeah. >> Ford is here. You have a great story about Ford. >> Ford brought me to Detroit the first time. I was here at the North American International Auto Show. Some of you may be familiar, and the fine folks from Ford brought me out to commentate just like this, as they were announcing the Ford Bronco. >> Satish: Oh wow. >> Which I am still lusting after. >> You don't have one yet? >> For the record, no, I don't. My next car's got to be an EV. Although, ironically, there's a Ford EV right behind us here on set today. >> I know, I know. >> Which we were both just contemplating before we went live. >> It's really shiny. >> We're going to have to go check it out. >> I have to check it out. Yep, we'll do that. Yeah. Well, please welcome our two guests from Ford: Satish Puranam is here, Technical Leader, Cloud, and Rebecca Riss, Principal Architect, Developer Relations. We are so excited to have you guys on the program. >> Clearly. >> Thanks for joining us. (all laugh) >> Thank you for having us. >> I love that you're Ford enthusiasts! Yeah, that's awesome. >> I drive a Ford. >> Oh, awesome! Thank you. >> I can only say that to one car company here. >> That's great. >> Yes, yes. >> Great! Thank you a lot. >> Thank you for your business! >> Absolutely. (all laugh) >> So, Satish, talk to us a little bit about- I mean, I think of Ford as a car company, but it seems like it's a technology company that makes cars. >> Yes. Talk to us about Ford as a cloud-first, technology-driven company, and then we're going to talk about what you're doing with Red Hat and Boston University. >> Yeah. Everything, all these cars that you're seeing, beautiful, right behind us, it's all built on, around, and with technology, right? There's so much code that goes into these cars these days, it's mind-boggling to think that your iPhone probably has less code than these cars. Everything from control systems, everything is code. We don't do any more clay models. Everything is done digital, 3D, virtual reality and all that stuff. So all that takes code, all of that takes technology. And we have been on that journey since 2016, when we started our first mobile app and all that stuff. And of late we have been, like, heavily invested in Google, moving a lot of these experiences, data acquisition systems, AI/ML modeling for all the autonomous cars. It's all technology, and from the day it is conceived, to the day it is marketed, to the day when you show up for servicing, and hopefully soon how you can buy and, you know, provide feedback to us, it is all technology that drives all of this stuff. So it's amazing for us: everything that we go and immerse ourselves in with the technology, there is a real-life thing where we can see what we all do it for, right? So- >> Yes, we're only sorry that our audience can't actually see the car, >> Yep. >> but we'll get some B-roll for you later on. 
Rebecca, talk a little bit about your role. Here we are at KubeCon; Savannah and I and John were talking when we went live this morning that this is huge, that the show floor is massive, a lot bigger than last year. The collaboration and the spirit of the community is not only alive and well, as we heard in the keynote this morning, it's thriving. >> Yeah. >> Talk about developer relations at Ford and what you are helping to drive in your role. >> Yeah, so my team is all about helping developers work faster with different platforms that my team curates and produces, so that our developers don't have to deal with all of the details of setting up their environments to actually code. And we have really great people, kind of the top software developers in the company, as part of my team to produce those products that other people can use and accelerate their development. And we have a great relationship with the developers in the company and outside, with the different vendor relationships that we have, to make sure that we're always producing the next platform with the next tech stack that our developers will want to continue to use to produce the really great products that we are all about making at Ford. >> Let's dig in there a little bit, because I'm curious and I suspect you both had something to do with it. How did you approach your Cloud Native transformation and how do you evaluate new technologies for the team? >> Many a times I would say it's like dogfooding and experimentation. >> Yeah. Isn't anything in innovation a lot of- >> Yeah, a lot of experimentation. We started, as I said, the Cloud Native journey back in 2016 with Cloud Foundry and the technologies around that. We soon realized that there was a lot of buzz around that time: Twelve-Factor was a thing, Stateless was a thing. And then there are all those Stateful needs to drive the Stateless. So where do we do that? And the next logical iteration was Kubernetes, which was bursting upon the scene at that time. So we started doing a lot of experimentation. >> Like the Kool-Aid man, burst on the Kubernetes scene- >> Exactly right. >> Through the wall. >> So the question is, why can't we do it? I think we were crazy enough to say that people are talking about serverless or Twelve-Factor on Kubernetes; we are crazy enough to do Stateful on Kubernetes, and we've been doing it successfully for five years. So it's a lot about experimentation. I think a good chunk of the experiments that we do do not yield the results that we expect, but many a times, some of them are like gangbusters. Other aspects that we've been doing of late are like partnering with Becky and the rest of the organization, right? Because they are the people who are closest to the developers. We are somewhat behind the scenes doing some things, but it is Becky and the rest of the architecture teams who are actually front and center with the customers, right? So it is the collaborative effort that we've been working through the past few years that has been really, really useful, coming around and helping us to make some of these products really beautiful. >> Yeah, well, you make a lot of beautiful products. I think we've all seen them. Something that I think is really interesting, and part of why I was so excited for this interview, and kind of nudged John out, was because you've been- Ford has been investing in technology in a committed way for decades, and I don't think most people are aware of that. 
When I originally came out to Dearborn, I learned that you've had a head of VR who happens to be a female. For what it's worth, Elizabeth, who's been running VR for you for two and a half decades, for 25 years. >> Satish: Yep. >> That is an impressive commitment. What is that like from a culture perspective inside of Ford? What is the attitude around innovation and technology? >> So I've been a long-time Ford employee. I just celebrated my 29th year. >> Oh, wow! >> Congratulations! >> Wow, congrats! That's a huge deal. >> Yeah, it's a huge deal. I'm so proud of my career and all that Ford has brought to me, and it's just a testament. I have many colleagues like me who've been there for their whole career, or have done other things and come to Ford and then spent another 20 years with us, because we foster the culture that makes you want to stay. We have development programs to allow you to upskill and change your role and learn new things and play with the new technologies that people are interested in, and really make an impact on our community of developers at Ford, or the company itself and the results that we're delivering. So we have that, you know, culture, for so many years, where people really love to work. They love to work with the people that they're working with. They love to stay engaged, and they love the fact that you can have many different careers within the same umbrella, which we call the "blue oval". And that's really why I've been there for so long. I think I've probably had 13 very unique and different jobs along the way. It's as if I left and, you know, shopped around my skills elsewhere. But I didn't ever have to leave the company. It's been fabulous. >> The cultural change and adoption of- embracing modern technology- Cloud Native automotive software is impressive, because a lot of storied companies, you guys have been there a long time, have challenges with that, because it's really hard to get an entire, you call it the blue oval, moving, to change and adapt- >> Savannah: I love that. >> and be willing to experiment. So that is impressive. Talk about, you go by Becky, so I'll call you Becky, >> Rebecca/Becky: Yeah. >> The developer culture in terms of the developers really being the center of the nucleus, of influencing the direction in which the company's going. I imagine that they probably are fairly influential. >> Yeah, so one of the unique positions I held was leading culture change for our department, Information Technology, in 2016. >> Satish: Yeah. >> As Satish was involved with moving us to the cloud, I was responsible- >> You are the transformation team! This is beautiful. I love this. We've got the right people on the show. >> Yeah, we do. >> I was responsible for changing the culture to orient our employees to pay attention to: what do we want to create for tomorrow? What are the kinds of skills we need to trust each other to move quickly? And that was completely unique. >> Satish: Yeah. >> Like, I had been in the trenches delivering software before that, and then was plucked out because they wanted someone, you know, who had authentic experience with our development team to be that voice. And it's such a great investment that Ford continues to make, investing in our culture transformation. Because with each step forward that we take, we have to refine what our priorities are. And you do that through culture transformation and culture management. 
And that's been, I think, really the key to our successful pivots that we've made over the last six years: that we've been able to continue to refine and hone where we really want to go through that culture movement. >> Absolutely. I think if I could add another- >> Please. >> spotlight to it, the biggest thing about Ford has been this startup-like culture, right? So the idea is that we encourage people to think outside the box, right? >> Savannah: Or outside the oval? >> Right! (laughs) >> Lisa: Outside the oval, yes! >> Absolutely! Right. >> So the thing is, you can experiment with various things, new technologies, and you will get all the leadership support to go along with it. I think that is very important too, and we can be in the trenches and talk about all of these nice little things, but who the heck would've thought that, you know, Kubernetes was announced in 2015; in late 2016, we had early dev Kubernetes clusters already running; 2017, we are live with workloads on Kubernetes! >> Savannah: Early adopters over here. >> Yeah. >> Yeah. >> All of this doesn't happen without a lot of foresight and support from the leadership, but it's also the grassroots efforts that are encouraged all along, to be on the front end of all of these things and try different things. Some of them may not work >> Savannah: Right. >> But that's okay. But how do we know we are doing something if we're not failing? We have to fail in order to do something, right? >> Lisa: I always say- >> So I think that's been a great thing that is encouraged very often, and otherwise I would not be doing- I've done a whole bunch of stuff at Ford. Without that kind of ability to support it and have an appetite for it, some of those things would not have been here at all. >> I always say failure is not a bad F-word. >> Satish: Yep. >> Savannah: I love that. >> But what you're talking about there is kind of like driving this hot wheel of experimentation. You have to have the right culture and the mindset- >> Satish: Absolutely. >> to do that. Try, fail, move on, learn, iterate, go. >> Satish: Correct. >> You guys have a great partnership with Red Hat and Boston University. You're speaking about that later today. >> Satish: Yes. >> Unpack that for us. From a technical perspective, what are you doing and what's it resulting in? >> Yeah, I think the biggest thing is, as Becky was talking about, during this transformation journey a lot has changed in a very small amount of time. So we have traditionally gone from, "Hey, here's a spreadsheet of things I need you to deliver for me" to "Here is a catalog of things, you can get it today and be successful with it." That is frightening to several of our developers. The goal, one of the things that we've been working on with Kube By Example, Red Hat and all of that, is how can we lower the bar for the developers, right? Kubernetes is great. It's also a wall of YAML. >> It's extremely complex, number one complaint. >> The question is, how can I zero in? If we go back to when we talk about cars, with human-machine interfaces, which parts do I need to know? Here's the steering wheel, here's the gas pedal, or here's the brake. As long as you know these two, three different things, you should be fairly okay to drive those things, right? So the idea with some of the enablement things we are trying to do is to reduce that barrier, right? Lower the bar so that more people can participate in it. 
>> One of the ways that you did that was Kube By Example, right, KBE? >> Satish: Yes, yes. >> Can you tell us a little bit more about that as you finish this answer? >> Yeah, I think the biggest thing with Kube By Example is that it gives you the small, bite-sized things about Kubernetes, right? >> Savannah: Great place to start. >> But what we wanted to do is reinforce that learning by turning it into a real-world living example app. We took podinfo and we said, hey, what does it look like? How do I make sure that it is highly available? How do I make sure that it is secure? Here is an example YAML of it that you can literally verbatim copy and paste into your editor and click run, and then you will get an instant gratification feedback loop >> I was going to say, yeah, then you feel like you're learning too! >> Yes. Right. So the idea would be, instead of giving you just boring prose text to read, we actually drop links to relevant blog posts saying, hey, you can just go there. And that has been inspirational in terms of reinforcing the learning. So that has been where we started working with Boston University, Red Hat and the community around all of that stuff. >> Talk a little bit, Becky, about some of the business outcomes. You mentioned things like upskilling the workforce, which is really nice to hear that there's such a big focus on it. But I imagine too, there's more participation in the community, but also from an end customer perspective. Obviously, everything Ford's doing is to serve the end customers >> Becky: Right. How does this help the end customer have that experience that they really, these days, demand, with patience being something that, I think, is gone because of the pandemic? >> Right? Right. So one of the things that my team does is we create the platforms that help accelerate developers and help them be successful, and it helps educate them more quickly on appropriate use of the platforms, and helps them, by adopting the platforms, to be more secure, which inherently leads to better results for our end customers, because their data is secure, because the products that they have are well created and they're tested thoroughly. So we catch all those things earlier in the cycle by using these platforms that we help curate and produce. And that's really important because, like you had mentioned, there's this steep learning curve associated with Kubernetes, right? >> Savannah: Yeah. >> So my team is able to kind of help with that abstraction so that we solve kind of the higher-complexity problems for them, so that developers can move faster, and then we focus our education on what's important for them. We use things like Kube By Example as a source instead of creating that content ourselves, right? We are able to point them to that. So it's great that there's that community, and we're definitely involved with that. But that's so important to help our developers be successful in moving as quickly as they want, and not having 20,000 people solve the same problems. >> Satish: (chuckles) Yeah. >> Each individually- >> Savannah: you don't need to! >> and sometimes differently. >> Savannah: We're stronger together, you know? >> Exactly. >> The water level rises together, and Ford is definitely a company that illustrates that by example. >> Yeah, I'm like, we can't make a better round wheel, right? >> Yeah! So we have to build upon what has already been built ahead of us. 
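To make the bite-sized, copy-and-paste YAML idea above concrete, here is a minimal sketch of the kind of manifest such a living example might pair with a lesson. It is an illustration only, not Ford's or Kube By Example's actual content: the namespace is a placeholder, the image, port, and readiness endpoint are assumed from the public podinfo demo project, and the replica count and securityContext stand in for the "highly available" and "secure" questions raised above.

```yaml
# Hedged sketch of a bite-sized, copy-and-paste manifest; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: demo
spec:
  replicas: 3                     # "highly available": run more than one copy
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:6.3.0   # assumed public demo image
          ports:
            - containerPort: 9898
          securityContext:        # "secure": drop privileges the app does not need
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          readinessProbe:         # feedback loop: kubectl shows when it is ready
            httpGet:
              path: /readyz
              port: 9898
```

Applying it with kubectl apply -f and watching kubectl get pods gives the instant-gratification feedback loop described above, and each field is a natural place to link out to a deeper blog post.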
And I think a lot of it is also about how can we give back and participate in the community, right? So I think that is paramount for us. Like, here we are in Detroit, so we're trying to recruit and show people that, you know, everything that we do is not just old cars and sheet metal >> Savannah: Combustion. >> and everything, right? There's a lot of tech that goes in, and sometimes it is really, really cool to do that. And the biggest thing for us is, how can we involve our community of developers sooner, earlier, faster, without actually encumbering them and saying, hey, here is a book, go master it, we'll talk two months later. So I think that has been another journey. I think the biggest uphill challenge for us has been, how can we actually democratize all of these things for everybody. >> Yeah. Well, no one better to try than you, I would suspect. >> We can only try and hope everything turns out well, right? >> You know, as long as there's room for the bumpers on the lane for if you fail. >> Exactly. >> It sounds like you're driving the program in the right direction. Closing question for you, what's next? Is electric the future? Is Kubernetes the future? What's Ford all in on right now, looking forward? (crowd murmuring in the background) Data is the king, right? >> Savannah: Oh, okay, yes! >> Data is a new currency. We use that for several things: to improve the cars, improve the quality of autonomous driving. Is Level 5 driving here? Maybe it will be here soon, we'll see. But we are all working towards it, right? So machine learning, AI feedback. How do you actually improve the post-sale experience, for example? So all of these are areas that we are working toward. We may not be getting Kubernetes in a car, but we are putting Kubernetes in plants. Like, you order a Mach-E or you order a Bronco, you see that here: here's where in the assembly line your car is. It's taking pictures. It's actually taking pictures on a Kubernetes platform. >> That's pretty cool. >> And it is tweeting for you on Twitter and the social media platforms. So there's a lot of that. So it is real and we are doing it. We need more help. A lot of the community efforts that we are seeing and a lot of the innovation that is happening on the floor here, it's phenomenal. The question is how we can incorporate those things into our workflows. >> Yeah, well, you have the right audience for that here. You also have the right attitude, >> Exactly. >> the right appetite, and the right foundation. Becky, last question for you. Top three takeaways from your talk today. If you're talking to the developer community you want to inspire, "Come work for us!", what would you say? >> If you're ready to invest in yourself and upskill and be part of something that is pretty remarkable, come work for us! We have many, many different technical career paths that you can follow. We invest in our employees. When you master something, it's time for you to move on; we have career growth for you. It's been a wonderful gift to me and my family, and I encourage everyone to check us out at careers.ford.com or stop by our booth if you happen to be here in person. >> Satish: Absolutely! >> We have our curated job openings that are specific for this community available. >> Satish: Absolutely. >> Love it. Perfect close. Nailed the pitch there. I'm sure you're all going to check out their job page. (all laugh) >> Exactly! 
And what you talked about, the developer experience and the customer experience are inextricably linked, and you guys are really focused on that. Congratulations on all the work that you've done. We've got to go get a selfie with that car, girl. >> Yes, we do. >> Absolutely. >> We've got to show them, we've got to show the audience what it looks like on the inside too. We'll do a little IG video. (Lisa laughs) >> Absolutely. >> We will show you that. For our guests and my cohost, Savannah Peterson, I'm Lisa Martin, here live in Detroit with theCUBE at KubeCon and CloudNativeCon 2022. The one and only John Furrier, who you know gets FOMO, is going to be back with me next. So stick around. (all laugh) (bright music)

Published Date: Oct 27, 2022


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Elizabeth | PERSON | 0.99+
Rebecca | PERSON | 0.99+
2016 | DATE | 0.99+
Satish | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Becky | PERSON | 0.99+
13 | QUANTITY | 0.99+
Ford | ORGANIZATION | 0.99+
Lisa | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Savannah | PERSON | 0.99+
2015 | DATE | 0.99+
Detroit | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Rebecca Risk | PERSON | 0.99+
John | PERSON | 0.99+
Satish Puranam | PERSON | 0.99+
Rebecca Riss | PERSON | 0.99+
Boston University | ORGANIZATION | 0.99+
25 years | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
2017 | DATE | 0.99+
two guests | QUANTITY | 0.99+
iPhones | COMMERCIAL_ITEM | 0.99+
careers.ford.com | OTHER | 0.99+
last year | DATE | 0.99+
29th year | QUANTITY | 0.99+
20,000 people | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
Detroit, Michigan | LOCATION | 0.99+
two | QUANTITY | 0.99+
20 years | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
two months later | DATE | 0.99+
One | QUANTITY | 0.99+
Each | QUANTITY | 0.98+
Cloud | ORGANIZATION | 0.98+
late 2016 | DATE | 0.98+
Kubernetes | TITLE | 0.98+

Winning Cloud Models - De facto Standards or Open Clouds | Supercloud22


 

(bright upbeat music) >> Welcome back, everyone, to the "Supercloud 22." I'm John Furrier, host of "The Cube." This is the Cloud-erati panel, the distinguished experts who have been there from day one, watching the cloud grow, from building clouds, and all open source stuff as well. Just great stuff. Good friends of "The Cube," and great to introduce back on "The Cube," Adrian Cockcroft, formerly with Netflix, formerly AWS, retired, now commentating here in "The Cube," as well as other events. Great to see you back out there, Adrian. Lori MacVittie, Cloud Evangelist with F5, also wrote a great blog post on supercloud, as well as Dave Vellante as well, setting up the supercloud conversation, which we're going to get into, and Chris Hoff, who's the CTO and CSO of LastPass, who's been building clouds, and we know him from "The Cube" before with security and cloud commentary. Welcome, all, back to "The Cube" and supercloud. >> Thanks, John. >> Hi. >> All right, Lori, we'll start with you to get things going. I want to try to sit back, as you guys are awesome experts, and involved from building, and in the trenches, on the front lines, and Adrian's coming out of retirement, but Lori, you wrote the post setting the table on supercloud. Let's start with you. What is supercloud? What is it evolving into? What is the north star, from your perspective? >> Well, I don't think there's a north star yet. I think that's one of the reasons I wrote it, because I had a clear picture of this in my mind, but over the past, I don't know, three, four years, I keep seeing, in research, my own and others', complexity, multi-cloud. "We can't manage it. They're all different. We have trouble. What's going on? We can't do anything right." And so digging into it, you start looking into, "Well, what do you mean by complexity?" Well, security. Migration, visibility, performance. The same old problems we've always had. And so, supercloud is a concept that is supposed to overlay all of the clouds and normalize it. That's really what we're talking about: yet another abstraction layer that would provide some consistency, that would allow you to do the same security and monitor things correctly. Cornell University actually put out a definition way back in 2016. And they said, "It's an architecture that enables migration across different zones or providers," and I think that's important, "and provides interfaces to everything, makes it consistent, and normalizes the network," basically brings it all together, but it also extends to private clouds. Sometimes we forget about that piece of it, and I think that's important in this, so that all your clouds look the same. So supercloud, big layer on top, makes everything wonderful. It's unicorns again. >> It's interesting. We had multiple perspectives. (mumbles) was like Snowflake, who built on top of AWS. Jerry Chen, who we heard from earlier today, Greylock Partners' "Castles in the Cloud," saying, "Hey, you can have a moat, you can build an advantage and have differentiation," so startups are starting to build on clouds, that's the native cloud view, and then, of course, they get success and they go to all the other clouds 'cause they've got customers in the ecosystem, but it seems that all the cloud players, Chris, you commented before we came on today, is that they're all fighting for the customer's workloads on their infrastructure. "Come bring your stuff over to here, and we'll make it run better," and all your developers are going to be good. Is there a problem? 
I mean, or is this something else happening here? Is there a real problem? >> Well, I think the north star's over there, by the way, Lori. (laughing) >> Oh, there it is. >> Right there. The supercloud north star. So indeed I think there are opportunities. Whether you call them problems or not, John, I think is to be determined. Most companies, especially if they're a large enterprise, whether or not they've got an investment in private cloud or not, have spent time really trying to optimize their engineering and workload placement on a single cloud. And that, regardless of your choice, as we take the big three, whether it's Amazon, Google, or Microsoft, each of them have their pros and cons for various types of workloads. And so you'll see a lot of folks optimizing for a particular cloud, and it takes a huge effort up and down the stack to just get a single cloud right. That doesn't take into consideration integrations with software as a service, instantiated, oftentimes, on top of infrastructure as a service, that you need to supplement where the abstraction layer ends in infrastructure as a service. You've seen most IaaS players starting to now move up-chain, as we predicted years ago, to platform as a service, but platforms of various types. So I definitely see it as an opportunity. Previous employers have had multiple clouds, but they were very specifically optimized for the types of workloads, for example, in, let's say, AWS versus GCP, based on the need for different types of optimized compute platforms that each of those providers ran. We never, in that particular case, thought about necessarily running the same workloads across both clouds, because they had different pricing models, different security models, et cetera. And so the challenge is really coming down to the fact that, what is the cost benefit analysis of thinking about multi-cloud when you can potentially engineer the resiliency or redundancy, all the "-ilities" that you might need to factor into your deployments, on a single cloud, if they are investing at the pace in which they are? So I think it's an opportunity, and it's one that continues to evolve, but this just reminds me, your comments remind me, of when we were talking about OpenStack versus AWS. "Oh, if there were only APIs that existed that everybody could use," and you saw how that went. So I think that the challenge there is, what is the impetus for a singular cloud provider, any of the big three, deciding that they're going to abstract to a single abstraction layer and not be able to differentiate from the competitors? >> Yeah, and that differentiation's going to be big. I mean, assume that the clouds aren't going to stay still; AWS is just not going to stop innovating. We see the devs are doing great, Adrian; open source is bigger and better than ever, but now that's been commercialized into enterprise. It's an ops problem. So to Chris's point, the cost benefit analysis is interesting, because do companies have to spin up multiple operations teams, each with specialized training and tooling for the clouds that they're using, and does that open up a can of worms, or is that a good thing? I mean, can you design for this? I mean, is there an architecture or taxonomy that makes it work, or is it just the cart before the horse, the solution before the problem? >> Yeah, well, I think that if you look at any large vendor... Sorry, large customer, they've got a bit of everything already. 
If you're big enough, you've bought something from everybody at some point. So then you're trying to rationalize that, and trying to make it make sense. And I think there's two ways of looking at multi-cloud or supercloud, and one is that the... And practically, people go best of breed. They say, "Okay, I'm going to get my email from Google or Microsoft. I'm going to run my applications on AWS. Maybe I'm going to do some AI machine learning on Google, 'cause those are the strengths of the platforms." So people tend to go where the strength is. So that's multi-cloud, 'cause you're using multiple clouds, and you still have to move data and make sure they're all working together. But then what Lori's talking about is trying to make them all look the same, and trying to get all the security architectures to be the same, and put this magical layer, this unicorn magical layer that, "Let's make them all look the same." And this is something that the CIOs have wanted for years, and they keep trying to buy it, and you can sell it, but the trouble is it's really hard to deliver. And I think, when I go back to some old friends of ours at Enstratius who had... And back in the early days of cloud, said, "Well, we'll just do an API that abstracts all the cloud APIs into one layer." Enstratius ended up being sold to Dell a few years ago, and the problem they had was that... They didn't have any problem selling it. The problem they had was, a year later, when it came up for renewal, the developers had all done end runs around it and were ignoring it, and the CIOs weren't seeing usage. So you can sell it, but can you actually implement it and make it work well enough that it actually becomes part of your core architecture without, from an operations point of view, without having the developers going directly to their favorite APIs around them? And I'm not sure that you can really lock an organization down enough to get them onto a layer like that. So that's the way I see it. >> You just defined- >> You just defined shadow shadow IT. (laughing) That's pretty- (crosstalk) >> Shadow shadow IT, yeah. >> Yeah, shadow shadow IT. >> Yeah. >> Yeah. >> I mean, this brings up the question, I mean, is there really a problem? I mean, I guess we'll just jump to it. What is supercloud? If you can have the magic outcome, what is it? Enstratius rendered in with automation? The security issues? Kubernetes is hot. What is the supercloud dream? I guess that's the question. >> I think it's got easier than it was five, 10 years ago. Kubernetes gives you a bunch of APIs that are common across lots of different areas. Things like Snowflake or MongoDB Atlas, there are SaaS-based services which are across multiple clouds, from vendors that you've picked. So it's easier to build things which are more portable, but I still don't think it's easy to build this magic API that makes them all look the same. And I think that you're going to have leaky abstractions, and security being... Getting the security right's going to be really much more complex than people think. >> What about specialty superclouds, Chris? What's your view on that? 
>> Yeah, I think what Adrian is alluding to, those leaky abstractions, are interesting, especially from the security perspective, 'cause I think what you see is, if you were to happen to be able to thin-slice across a set of specific types of workloads, there is a high probability given today that, at least on two of the three major clouds, you could get SaaS providers that sit on those same infrastructure as a service clouds for you, string them together, and have a service that technically is abstracted enough from the things you care about to work on one, two, or three, maybe not all of them, but most SaaS providers in the security space, or identity space, data space, for example, coexist on at least Microsoft and AWS, if not all three, with Google. And so you could technically abstract a service to the point that you let that level of abstract... Like Lori said, no computer science problem could not be... So, no computer science problem can't be solved with more layers of abstraction or misdirection... Or redirection. And in that particular case, if you happen to pick the right vendors that run on all three clouds, you could possibly get close. But then what that really talks about is, if you built your seven-layer dip model, then you really have specialty superclouds spanning across infrastructure as a service clouds. One for your identity apps, one for data and data layers, to normalize that, one for security, but at what cost? Because you're going to be charged not for that service as a whole, but based on compute resources, based on how these vendors charge across each cloud. So again, that cost-benefit ratio might start being something that is rather imposing from a budgetary perspective. >> Lori, weigh in on this, because the enterprise people love to solve complexity with more complexity. Here, we need to go the other way. It's a commodity. So there has to be a better way. >> I think I'm hearing two fundamental assumptions. One, that a supercloud would force the existing big three to implement some sort of equal API. Don't agree with that. There's no business case for that. There's no reason that could compel them to do that. Otherwise, we would've convinced them to do that, what, 10, 15 years ago when we said we need to be interoperable. So it's not going to happen there. They don't have a good reason to do that. There's no business justification for that. The other presumption, I think, is that we would... That it's more about the services, the differentiated services, that are offered by all of these particular providers, as opposed to treating the core IaaS as the commodity it is. It's compute, it's some storage, it's some networking. Look at that piece. Now, pull those together by... And it's not OpenStack. That's not the answer, it wasn't the answer, it's not the answer now, but something that can actually pull those together and abstract it at a different layer. So cloud providers don't have to change, 'cause they're not going to change, but if someone else were to build that architecture to say, "All right, I'm going to treat all of this compute so you can run your workloads," as Chris pointed out, "in the best place possible. And we'll help you do that by being able to provide those cost benefit analyses, 'What's the best performance, what are you doing,' and then provide that as a layer." 
So I think that's really where supercloud is going, 'cause I think that's what a lot of the market actually wants in terms of where they want to run their workloads, because we're seeing that they want to run workloads at the edge, "a lot closer to me," which is yet another factor that we have to consider. And how are you going to be moving individual workloads around? That's the holy grail. Let's move individual workloads to where they're the best performance, the security, cost optimized, and then one layer up. >> Yeah, I think so- >> John Considine, who ultimately ran CloudSwitch, that sold to Verizon, as well as Tom Gillis, who built Bracket, are both rolling in their graves, 'cause what you just described was exactly that. (Lori laughing) Well, they're not even dead yet, so I can't say they're rolling in their graves. Sorry, Tom. Sorry, John. >> Well, how do hyperscalers keep their advantage with all this? I mean, to that point. >> Native services and managed services on top of it. Look how many flavors of managed Kubernetes you have. So you have a choice. Roll your own, or go with a managed service, and then differentiate based on the ability to take away and simplify some of that complexity. Doesn't mean it's more secure necessarily, but I do think we're seeing opportunities where those guys are fighting tooth and nail to keep you on a singular cloud, even though, to Lori's point, I agree, I don't think it's about standardized APIs, 'cause I think that's never going to happen. I do think, though, that SaaS-y supercloud model that we were talking about, layering SaaS that happens to span all three infrastructure as a service clouds, is probably more in line with what Lori was talking about. But I do think that portability of workload is given to you today in lots of ways. But again, how much do you manage, and how much performance do you give up, by running additional abstraction layers? And how much security do you give up by having to roll your own and manage that? Because the whole point was, in many cases... Cloud is using other people's computers, so in many cases, I want to manage as little of it as I possibly can. >> I like this whole SaaS angle, because in the old days, you're on Amazon Web Services, hey, if you build a SaaS application that runs on Amazon, you're all great, you're born in the cloud, just like that generation of startups. Great. Now when you have this super PaaS layer, as Dave Vellante was riffing on in his analysis, and Lori, you were getting into this PaaS layer that's kind of like SaaS-y, what's the SaaS equation look like? Because that, to me, sounds like a supercloud version of saying, "I have a workload that runs on all the clouds equally." I just don't think that's ever going to happen. I agree with you, Chris, on that one. But I do see that you can have an abstraction that says, "Hey, I don't really want to get in the weeds. I don't want to spend a lot of ops time on this. I just want it to run effectively, and magic happens," or, as you said, some layer there. How does that work? How do you see this super PaaS layer, if anything, enabling a different SaaS game? >> I think you hit on it there. The last, like, 10 or so years, we've been all focused on developers and developer productivity, and it's all about the developer experience, and it's got to be good for them, 'cause they're the kings. And I think the next 10 years are going to be very focused on operations, because once you start scaling out, it's not about developers. 
They can deliver fast or slow, it doesn't matter, but if you can't scale it out, then you've got a real problem. So I think that's an important part of it: really, what is the ops experience, and what is the best way to get those costs down? And this would serve that purpose if it was done right, which, we can argue about whether that's possible or not, but I don't have to implement it, so I can say it's possible. >> Well, are we going to be getting into infrastructure as code moving into "everything is code," security, data, (laughs) applications as code? I mean, "blank" is code, fill in the blank. (Lori laughing) >> Yeah, we're seeing more of that with things like CDK and Pulumi, where you are actually coding up using a real language rather than the death by YAML or whatever. How much YAML can you take? But actually having a real language, so you're not trying to do things in parsing languages. So I think that's an interesting trend. You're getting some interesting templates, and I like what... I mean, the counterexample is that if you just go deep on one vendor, then maybe you can go faster and it is simpler. And one of my favorite vendor... Favorite customers right now that I've been talking to is Liberty Mutual. They went very deep and serverless-first on AWS. They're just doing everything there, and they're using CDK Patterns to do it, and they're going extremely fast. There's a book coming out called "The Value Flywheel" by Dave Anderson, it's coming out in a few months, to just detail what they're doing, but that's the counterargument. If you could pick one vendor, you can go faster, you can get that vendor to do more for you, and maybe get a bigger discount so you're not splitting your discounts across vendors. So that's one aspect of it. But I think, fundamentally, you're going to find the CIOs and the ops people generally don't like sitting on one vendor. And if that single vendor is a horizontal platform that's trying to make all the clouds look the same, now you're locked into whatever that platform was. You've still got a platform there. There's still something. So I think that's always going to be something that the CIOs want, but the developers are always going to just pick whatever the best tool for building the thing is. And an analogy here is that the developers are dating and getting married, and then the operations people are running the family and getting divorced. And all the bad parts of that cycle are in the divorce end of it. You're trying to get out of a vendor, there's lawyers, it's just a big mess. >> Who's the lawyer in this example? (crosstalk) >> Well... (laughing) >> Great example. (crosstalk) >> That's why ops people don't like lock-in, because they're the ones trying to unlock. They aren't the ones doing the lock-in. They're the ones unlocking, when developers, if you separate the two, are the ones who are going, picking, having the fun part of it, going, trying a new thing. So they're chasing a shiny object, and then the ops people are trying to untangle themselves from the remains of that shiny object a few years later. So- >> Aren't we- >> One way of fixing that is to push it all together and make it more DevOps-y. >> Yeah, that's right. >> But that's trying to put all the responsibilities in one place, like more continuous improvement, but... >> Chris, what's your reaction to that? Because you're- >> No, that's exactly what I was going to bring up, yeah, John. 
And 'cause we keep saying "devs," "dev," and "ops," and I've heard somewhere you can glue those two things together. Heck, you could even include "sec" in the middle of it, and "DevSecOps." So what's interesting about what Adrian's saying, though, too, is I think this has a lot to do with how you structure your engineering teams and how you think about development versus operations and security. So I'm building out a team now that very much makes use of, thanks to my brilliant VP of Engineering, a "Team Topologies" approach, which is a very streamlined and product-oriented way of thinking about, for example, in engineering, if you think about team structures, you might have people that build the front end, build the middle tier, and the back end, and then you have a product that needs to make use of all three components in some form. So just for getting stuff done, their ability then has to tie to three different groups, versus building a team that's streamlined, that ends up having front end, middleware, and backend folks that understand and share standards but are able to uncork the velocity that's required to do that. So if you think about that, and not just from an engineering development perspective, but then you couple in operations as a foundational layer that services them with embedded capabilities, we're putting engineers and operations teams embedded in those streamlined teams so that they can run at the velocity that they need to, they can do continuous integration, they can do continuous deployment. And then we added CS, which is continuously secure, continuous security. So instead of having giant, centralized teams, we're thinking there's a core team, for example, a foundational team, that services platform, makes sure all the trains are running on time, that we're doing what we need to do foundationally to make the environments fully dev and operator and security people functional. But then ultimately, we don't have these big, monolithic teams that get into turf wars. So, to Adrian's point about, the operators don't like to be penned in, well, they actually have a say, ultimately, in how they architect, deploy, manage, plan, build, and operate those systems. But at the same point in time, we're all looking at that problem across those teams and go... Like, if one streamlined team says, "I really want to go run on Azure, because I like their services better," the reality is the foundational team has a larger vote versus opinion on whether or not, functionally, we can satisfy all of the requirements of the other team. Now, they may make a fantastic business case and we play rock, paper, scissors, and we do that. Right now, that hasn't really happened. We look at the balance of AWS, we are picking SaaS-y, supercloud vendors that will, by the way, happen to run on three platforms, if we so choose to expand there. So we have a similar interface, similar capability, similar processes, but we've made the choice at LastPass to go all in on AWS currently, with respect to how we deliver our products, for all the reasons we just talked about. But I do think that operations model and how you build your teams is extremely important. >> Yeah, and to that point- >> And has the- (crosstalk) >> The vendors themselves need optionality to the customer, what you're saying. So, "I'm going to go fast, but I need to have that optionality." I guess the question I have for you guys is, what is today's trade-off? So if the decision point today is... First of all, I love the go-fast model on one cloud. 
I think that's my favorite when I look at all this, and then with the option, knowing that I'm going to have the option to go to multiple clouds. But everybody wants lock-in on the vendor side. Is that scale, is that data advantage? I mean, so the lock-in's a good question, and then also the trade-offs. What do people have to do today to go on a supercloud journey, to have an ideal architecture and taxonomy, and what are the right trade-offs today? >> I think that the- Sorry, just to put a comment in and then let Lori get a word in, but there's a lot of... A lot of the market here is, you're building a product, and that product is a SaaS product, and it needs to run somewhere. And the customers that you're going to... To get the full market, you need to go across multiple suppliers, most people doing AWS and Azure, and then with Google occasionally for some people. But that, I think, has become the pattern for most of the large SaaS platforms that you'd want to build out of, 'cause that's the fast way of getting something that's going to be stable at scale, it's got functionality, you'd have to go invest in building it and running it. Those platforms are just multi-cloud platforms, they're running across them. So Snowflake, for example, has to figure out how to make their stuff work on more than one cloud. I mean, they started on one, but they're going across clouds. And I think that that is just the way it's going to be, because you're not going to get a broad enough view into the market, because there isn't a single... AWS doesn't have 100% of the market. It's maybe a bit more than them, but Azure has got a pretty solid set of markets where it is strong, and it's market by market. So in some areas, different people in some places in the world, and different vertical markets, you'll find different preferences. And if you want to be across all of them with your data product, or whatever your SaaS product is, you're just going to have to figure this out. So in some sense, the supercloud story plays best with those SaaS providers like the Snowflakes of this world, I think. >> Lori? >> Yeah, I think the SaaS product... Identity, whatever, you're going to have specialized SaaS superclouds. We already see that emerging. Identity is becoming like this big SaaS play that crosses all clouds. It's not just for one. So you get an evolution going on where, yes, I mean, every vendor who provides some kind of specific functionality is going to have to build out and be multi-cloud, as it were. It's got to work equally across them. And the challenge, then, for them is to make it simple for both operators and, if required, dev. And maybe that's the other lesson moving forward. You can build something that is heaven for ops, but if the developers won't use it, well, then you're not going to get it adopted. But if you make it heaven for the developers, the ops team may not be able to keep it secure, keep everything. So maybe we have to start focusing on both, make it friendly for both, at least. Maybe it won't be the perfect experience, but gee, at least make it usable for both sides of the equation, so that everyone can actually work in concert, like Chris was saying. A more comprehensive, cohesive approach to delivery and deployment. >> All right, well, wrapping up here, I want to just get one final comment from you guys, if you don't mind. What does supercloud look like in five years? What's the Nirvana, what's the steady state of supercloud in five to 10 years? Or say 10 years, make it easier. 
(crosstalk) Five to 10 years. Chris, we'll start with you. >> Wow. >> Supercloud, what's it look like? >> Geez. A magic pane, a single pane of glass. (laughs) >> Yeah, I think- >> Single glass of pain. >> Yeah, a single glass of pain. Thank you. You stole my line. Well, not mine, but that's the one I was going to use. Yeah, I think what is really fascinating is, ultimately, to answer that question, I would reflect on market consolidation and market dynamics that happen even in the SaaS space. So we will see SaaS companies combining in focal areas to be able to leverage the positions, let's say, in the identity space that somebody has built, to provide a set of compelling services that help abstract that identity problem or that security problem or that instrumentation and observability problem. So take your favorite vendors today. I think what we'll end up seeing is more consolidation in SaaS offerings that run on top of infrastructure as a service offerings, to where a supercloud might look like something I described before. You have the combination of your favorite interoperable identity, observability, security, orchestration platforms run across them. They're sold as a stack, whether it be co-branded by an enterprise vendor that sells all of that and manages it for you or not. But I do think that... You talked about, I think you said, "Is this an innovator's dilemma?" No, I think it's an integrator's dilemma, as it has always ultimately been. As soon as you get from Genesis to Bespoke Build to product to then commoditization, the cycle starts anew. And I think we've gotten past commoditization, and we're looking at niche areas. So I see just the evolution, not necessarily a revolution, of what we're dealing with today, as we see more consolidation in the marketplace. >> Lori, what's your take? Five years, 10 years, what does supercloud look like? >> Part of me wants to take the pie-in-the-sky unicorn approach. "No, it will be beautiful. One button, and things will happen," but I've seen this cycle many times before, and that's not going to happen. And I think Chris has got it pretty close to what I see already evolving. Those different kinds of super services, basically. And that's really what we're talking about. We call them SaaS, but they're... X as a service. Everything as a service, and it's really a supercloud that can run anywhere, but it presents a different interface, because, well, it's easier. And I think that's where we're going to go, and that's just going to get more refined. And yes, a lot of consolidation, especially on the observability side, but that's also starting to consume the security side, which is really interesting to watch. So that could be a little different supercloud coming on there that's really focused on specific types of security, at least, that we'll layer across, and then we'll just hook them all together. It's an API-first world, and it seems like that's going to be our standard for the next while of how we integrate everything. So superclouds or APIs. >> Awesome. Adrian... Adrian, take us home. >> Yeah, sure. >> What's your- >> I think, just picking up on Lori's point that these are web services, meaning that you can just call them from anywhere, they don't have to run everything in one place, they can stitch it together, and that's really meant... It's somewhat composable. So in practice, people are going to compose. Can they compose their applications on multiple platforms? 
But I think the interesting thing here is what the vendors do, and what I'm seeing is vendors running software on other vendors. So you have Google building platforms that, then, they will support on AWS and Azure and vice versa. You've got AWS's distro of Kubernetes, which they now give you as a distro so you can run it on another platform. So I think that trend's going to continue, and it's going to be, possibly, you pick, say, an AWS or a Google software stack, but you don't run it all on AWS, you run it in multiple places. Yeah, and then the other thing is the third tier, second, third tier vendors, like, I mean, what's IBM doing? I think in five years time, IBM is going to be a SaaS vendor running on the other clouds. I mean, they're already halfway there. To be a bit more controversial, I guess it's always fun to... Like I don't work for a corporate entity now. No one tells me what I can say. >> Bring it on. >> How long can Google keep losing a billion dollars a quarter? They've either got to figure out how to make money out of this thing, or they'll end up basically being a software stack on another cloud platform as their, likely, actual way they can make money on it. Because you've got to... And maybe Oracle, is that a viable cloud platform that... You've got to get to some level of viability. And I think the second, third tier of vendors in five, 10 years are going to be running on the primary platform. And I think, just the other final thing that's really driving this right now. If you try and place an order right now for a piece of equipment for your data center, key pieces of equipment are a year out. It's like trying to buy a new fridge from like Sub-Zero or something like that. And it's like, it's a year. You got to wait for these things. Any high quality piece of equipment. So you go to deploy in your data center, and it's like, "I can't get stuff in my data center. Like, the key pieces I need, I can't deploy a whole system. We didn't get bits and pieces of it." So people are going to be cobbling together, or they're going, "No, this is going to cloud, because the cloud vendors have a much stronger supply chain to just be able to give you the system you need. They've got the capacity." So I think we're going to see some pandemic and supply chain induced forced cloud migrations, just because you can't build stuff anymore outside the- 
We'll call this the "outtakes," the longer version. But really appreciate your time, thank you. >> Thank you. >> Thanks so much. >> Okay, we'll be back with more "Supercloud 22" right after this. (bright upbeat music)

Published Date : Aug 7 2022

SUMMARY :

Closing panel from Supercloud 22: Chris Hoff, Lori MacVittie, and Adrian Cockcroft look five to ten years out. They see consolidation of SaaS and "everything as a service" offerings into interoperable identity, observability, and security stacks sold and managed as superclouds, an API-first model for stitching them together, vendors increasingly running their software stacks on one another's clouds, and pandemic and supply-chain pressures pushing workloads that can no longer be built on premises into the cloud.


Kirsten Newcomer & Connor Gorman, Red Hat | KubeCon + CloudNativeCon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with my co-host Enrico Signoretti, senior IT analyst at GigaOm. We are talking to amazing people — the creators and the people contributing to all these open source projects. Speaking of open source, Enrico, talk to me about the flavor of this show versus a traditional vendor show, with all these open source projects and open source based companies. >> Well, first of all, I think the real difference is that this is a real conference. Real people talking about projects, about the open source stuff. The experiences are on stage, and there are not really too many product pitches. It's about the people, it's about the projects, it's about the challenges they had and how they overcame some of them. That's the main difference. It's very educative, informative, and the kind of people is different: developers, SREs, hands-on people — people that really do stuff. That's a real difference. Quite challenging discussing with them, really, because they're really opinionated. >> So we're going to talk to a company that has had boots on the ground doing open source since almost the start: Kirsten Newcomer, director of hybrid platform security at Red Hat, and Connor Gorman, senior principal software engineer at Red Hat. So Kirsten, we're going to start with you. Security and Kubernetes — Kubernetes is a race car. If I wanted security, I'd drive a minivan. <laugh> >> That's a great frame. Though if we stick with your car analogy, we have seen safety in cars evolve over the years to the point where you have airbags even in souped-up cars that somebody's driving on the street. Race cars have safety built in; they do their best to protect those drivers. So while Kubernetes started as something that was largely used by Google in their environment and had some perimeter-based security, as Kubernetes has become adopted throughout enterprises — and we've seen the adoption accelerate during the pandemic; the move to both public cloud and private cloud has really accelerated — security becomes even more important. You can't use Kubernetes in banking without security. You can't use it in automotive without security, or telco — and telcos are deploying 5G on Kubernetes, on OpenShift. So the security capabilities have evolved over time to meet the customers and the adopters. Red Hat, because of our enterprise customer base, has been investing in security capabilities, and we make those contributions upstream. We've been doing that really from the beginning of our adoption of Kubernetes, Kubernetes 1.0, and we continue to expand the security capabilities that we provide — which is one of the reasons the acquisition of StackRox was so important to us. >> And actually, we are talking about security at different levels, and different locations.
So you are securing an edge location differently than a data center, or maybe the cloud. There is application-level security. There are so many angles to take this. >> Yeah, and you're right. There are the layers of the stack, which can start at the hardware level, then the operating system, the Kubernetes orchestration, all the services you need to have a complete Kubernetes solution and application platform, and then the services themselves. And you're absolutely right that an edge deployment is different than a deployment on, say, AWS or in a private data center. And yet, if you're leveraging the heart of Kubernetes — the declarative nature of Kubernetes — you can do Kubernetes security in a way that is consistent across these environments, with the need for some additions at the edge. Physical security is more important at the edge, hardware-based encryption for example, whereas in a cloud provider your encryption might be at the cloud provider's storage layer rather than in hardware. >> So how do you orchestrate — because we are talking about orchestration all day — how do you orchestrate all this security? >> Yep. One of the evolutions we've seen in our customer base in the last few years is that we used to have a small number of large clusters that our customers deployed and used in a multi-tenant fashion, with multiple teams from within the organization. We're now starting to see a larger number of smaller clusters, and those clusters are in different locations. Customers are deploying both in public cloud and in private, on premises, and in edge deployments, as you mentioned. And so we've invested in multi-cluster management — sort of that orchestration for orchestrators. Because, again, of the declarative nature of Kubernetes, we offer Red Hat Advanced Cluster Management, which we open sourced as the multicluster engine, MCE, so that component is now also freely available open source; we do that with everything. If you need a way to ensure that you have managed the configuration appropriately across all of these clusters in a declarative fashion — it's still YAML, it's written in YAML — use ACM, use MCE in combination with a GitOps approach to manage that and ensure that you've got a consistent environment. And then, but then you have to monitor, right? You have to — >> I'm wondering where all of this, StackRox, fits in. >> I mean, yeah, sure. And so we took a Kubernetes-native approach to securing all of this. There are, we have to say, three major life cycles. You have the build life cycle: you're building these immutable images to go be deployed to production, that should never change, that are locked at a point in time. And so you can do vulnerability scanning and compliance checks at that point, in the build phase. But then you put those in a registry, and then those go and get deployed on top of Kubernetes. And you have the configuration of your application, including any vulnerabilities that may exist in those images; you have the RBAC permissions — how much access does it have to the cluster? Is it exposed on the internet? What can you do there?
>> And then finally you have the runtime perspective: is my pod, is my container, actually doing what I think it's supposed to do? Is it accessing all the right things? Is it running all the right processes? And then even taking that runtime information and influencing the configuration through things like network policies — we have a feature called process baselining where you can say exactly what processes are supposed to run in this pod — and then influencing the configuration that way: yes, this is what it's doing, let's go stamp this declaratively, so that when you deploy it the next time you already have security built in at the Kubernetes level. >> So we've talked about a couple of different topics: the abstraction layers, security around DevOps. I have multi-tenancy to deal with, I have to think about how I'm going to secure the Kubernetes infrastructure itself, and then I have what you've been talking about here, Connor, which is DevSecOps and the practice of securing the application through policy. Are customers really getting what's under the hood of DevSecOps? >> Do you want to start, or — >> Yeah. I mean, I think yes and no. Some organizations are definitely getting it right, and they have teams that are helping build things like network policies, which provide network segmentation. I think this is huge for compliance and multi-tenancy. Just like containers — one of the main benefits of containers is the isolation they provide between your applications — and everyone's familiar with the network firewall, which provides network segmentation, but now you can create that segmentation between your applications inside Kubernetes. And so we have some folks who are super far along that path and creating those, and we have some folks who have no network policies except the ones that get installed with our products. And then we say, okay, how can we help you start leveraging these things, creating maybe just basic namespace isolation or things like that, and then pushing that back into the more declarative approach. >> Some of what I think we hear from what Connor just teed up is that real DevSecOps requires breaking down silos between developers, operations, and security, including network security teams. And the Kubernetes paradigm requires — in some ways it forces — involvement of developers in things like network policy for the SDN layer. The application developer knows what kinds of communication his or her app needs to function, so they need to define, they need to figure out, those network policies. Now, some network security teams are not familiar with YAML, and they're not necessarily familiar with software-defined networking, so there's this whole question of how we do network security in collaboration with the engineering team. And one of the things I worry about — DevSecOps is technology, but it's people and process too. People are very comfortable adopting vulnerability scanning early on, but they haven't yet started to think about the network security angle.
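The namespace-scoped segmentation Connor and Kirsten are describing comes down to a small Kubernetes NetworkPolicy object. A minimal sketch, assuming an invented namespace, label, and port — none of these come from the interview or from Red Hat ACS:

```yaml
# Hypothetical policy: only pods in the same namespace may reach the
# "payments-api" pods, and only on TCP 8443. Once a policy selects these
# pods, all other inbound traffic to them is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-api-allow-same-namespace
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # any pod in this same namespace
      ports:
        - protocol: TCP
          port: 8443
```

The usual pattern is a default-deny policy per namespace plus narrowly scoped allow rules like this one.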
This is one area where not only do we have the ability in ACS, StackRox, today to recommend a network policy based on a running deployment and then make it easy to deploy it, but we're also working to shift that left, so that you can analyze app deployment data prior to it being deployed, generate a network policy, test it out in staging, and go from the beginning. But again, people do vulnerability analysis shift-left and tend to stop there; you need to add app config analysis and network communication analysis, and then we need appropriate security gates at deployment time. We need the right automation that helps inform the developers. Not all developers have security expertise, and not all security people understand a CI/CD pipeline. So we need the right set of information going to the right people, in the place they're used to working, in order to really do that infinity loop. >> Do you see this as a natural progression for developers? Do they really hit a wall before finding out that they need to progress in this methodology? >> Yeah. I think initially there's a period of transition, where sometimes the opinion is, "I ship my application, that's what I get paid for, that's what I do." <laugh> But Kubernetes has basically increased the velocity of developers on top of the platform in order to just deploy their own code — for some people, every commit on the repo goes to production — and so security is even more at the forefront there. So I think initially you hit a little bit of a wall: security scans in CI, you could get some failures and some pushback. But as long as those are very informative and actionable, developers always want to do the right thing; we all want to ship secure code. And if you can inform them — hey, this is why we do this, or here's the information about it — that's really important, because then when I'm sending my next commits I'm thinking, okay, these are some constraints to keep in mind. It's a mindset shift, but I think the tooling that we know and love and use on top of Kubernetes is the best way to convey that information from what are, honestly, significantly smaller security teams than the number of developers who are pushing all of this code. >> So let's scale out. Talk to me about the larger landscape — projects like KubeLinter and OPA, different areas of investment in security. Talk to me about where customers are making investments. >> Do you want to start with KubeLinter? >> Sure. KubeLinter was an open source project from when we were still a private company, and it was really about taking some of the functionality in our product and making it available to everyone to check configuration, bridging both DevOps and SecOps. There are some things around privileged containers — you usually don't want to deploy those into your environment unless you really need to — but there are other things around, okay, do I have anti-affinity rules? You can run 10 replicas of a pod on the same node, and now your failure domain is a single node; you want them on different nodes.
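A hedged sketch of the kind of manifest KubeLinter flags for exactly those two reasons — a privileged container, and ten replicas with no anti-affinity so they can all land on one node. The Deployment name and image are invented; the invocation in the comment, kube-linter lint, is the tool's basic usage.

```yaml
# Hypothetical Deployment that trips the checks described above.
# Scan it with:  kube-linter lint deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 10
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      # No podAntiAffinity here, so all 10 replicas can share one node --
      # a single failure domain.
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          securityContext:
            privileged: true   # flagged: privileged containers are rarely needed
```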
And so you can do a bunch of checks just around configuration and DevOps best practices. We've actually seen quite a bit of adoption — I think we have almost 2,000 stars — and we're super happy to see people really adopt it and integrate it into their pipelines. It's a single binary, so it's been super easy for people to take it into their CI/CD, start running things through it, and get valuable insights into what configurations they should change. >> And if you were asking about things like OPA — Open Policy Agent — and OPA Gatekeeper: OPA has been around for a while, and the community added OPA Gatekeeper as an admission controller for Kube. There's also Kyverno, another open source project doing admission, as the Kubernetes community has decided to deprecate pod security policies, which had a level of complexity but were one of the key security capabilities and gates built into Kubernetes itself. OpenShift is going to continue to have security context constraints, which are very similar: by default on an OpenShift cluster, a regular user cannot deploy a privileged pod or a pod that has access to the host network, and there's SELinux configuration on by default that also protects against container escapes to the file system, or mitigates them. So pod security policies were one way to ensure that kind of constraint on what the developer did. Developers might not have had awareness of what was important in terms of the level of security, so again, KubeLinter and tools like that can help inform the developer in the tools they use, and then a solution like OPA Gatekeeper or SCCs is something that runs on the cluster. If something got through the pipeline, or somebody isn't using one of those tools, those gates can be leveraged to ensure that the security posture of the deployment is what the organization wants — and with OPA Gatekeeper you can do very complex policies. >> Lastly, talk to me about Falco and Clair. What about Falco? >> Falco — yep, absolutely. Falco, great runtime analysis, has been something that StackRox leveraged early on. >> Yeah, so we leveraged some libraries from Falco. We use either an eBPF probe or a kernel module to detect runtime events, and we primarily focus on network and process activity as the angles there. And then for Clair — it's now within Red Hat again <laugh> through the acquisition of CoreOS — we forked it and added a bunch of things around language vulnerabilities and other aspects we wanted. The code bases have diverged a little bit: Clair is on v4, we were based off v2, but I think we've both added a ton of really great features. So I'm really looking forward to combining all of those features. We have two best-of-breed scanners right now, and I'm asking, okay, what can we do when we put them together? That's something I'm really excited about. >> So in your roadmap you are aiming at putting everything together — orchestrated, well integrated — to also get a simplified experience, because that could be the — >> Point, yeah. And as you mentioned, it's that orchestration of orchestrators: leveraging the Kubernetes operator principle to deliver an opinionated Kubernetes platform has been one of the key things we've done, and we're doing that as well for security — out-of-the-box security policies and principles based on best practices with StackRox that can be leveraged in the community or with Red Hat Advanced Cluster Security, combining our two scanners into one Clair-based scanner, contributing back to Falco, all of these things. >> Well, that speaks to the complexity of open source projects — there's a lot of overlap, and reconciling that is a very difficult thing. Kirsten, Connor, thank you for joining theCUBE. Connor, you're now a CUBE alum — welcome to the elite group. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti, and you're watching theCUBE, the leader in high tech coverage.

Published Date : May 19 2022

SUMMARY :

Keith Townsend and Enrico Signoretti talk with Kirsten Newcomer and Connor Gorman of Red Hat at KubeCon + CloudNativeCon Europe 2022 about securing Kubernetes across the stack: consistent multi-cluster security with Advanced Cluster Management and a GitOps approach, the build, deploy, and runtime life cycles covered by Advanced Cluster Security (StackRox), bringing developers into DevSecOps through network policies, and open source projects including KubeLinter, OPA Gatekeeper, Falco, and Clair.


Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | KubeCon + CloudNativeCon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >> Welcome to theCUBE's coverage of KubeCon 2022 EU. I'm here with my co-host, Paul Gillin. >> Pleasure to work with you, Keith. >> Nice to work with you, Paul. And we have our first two guests — theCUBE is hot, I'm telling you, we are having interviews before the show floor even opens. We have to start with the customer first: enterprise architect Adnan Khan. Welcome to the show. >> Thank you so much. >> CUBE time, CUBE time — first time, and now you're a CUBE alumni. <laugh> And Haseeb Budhani, CEO of Rafay. Welcome back. >> Nice to talk to you again today. >> So we're talking all things Kubernetes, and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Adnan: talk to us about what your pre-Kubernetes landscape looked like. >> Yeah, certainly, Keith. We had a traditional mix of legacy applications and modern applications. A few years ago we made the decision to move to a microservices architecture, and this was all happening while we were still on prem, with your traditional VMs. We started with 20, 30 microservices, but with the microservices pattern you quickly expand to hundreds of microservices, and we got to the stage where managing them without an orchestration platform, just as traditional VMs, was getting to be really challenging, especially from a day-two operational perspective. You can manage 10, 15 microservices, but when you start having 50 and more, all those concerns around high availability and operational performance come in. So we started looking at some open source projects like Spring Cloud — we are predominantly a Java shop, so we looked at the Spring Cloud projects. They give you a number of initiatives for doing some of that management, but what we realized, again, was that managing those components without a platform was really challenging. That's what led us to Kubernetes: along with our journey to the cloud, it was the platform that could help us with a lot of those management and operational concerns. >> So as you talk about some of those challenges pre-Kubernetes, what were some of the operational issues that you folks experienced? >> Yeah. Certain things like auto scaling — that's number one, a fundamental concept of cloud native. How do you auto scale VMs? You can put in some old methods and tooling, but it was really hard to do automatically. Kubernetes with the HPA gives you that out of the box, provided you set the right policies: you can have auto scaling where it scales up and scales back. We were doing that manually. Before, at MoneyGram — holiday season, people are sending more money, Mother's Day — our ops team would go in and manually scale VMs. We'd go from four instances to maybe eight instances, but that entailed outages, and planning around doing that manually and then scaling back was a lot of administration overhead. So we wanted something that could help us do that automatically, in an efficient, unintrusive way.
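The manual four-to-eight instance scaling Adnan describes maps onto a HorizontalPodAutoscaler once the workload runs on Kubernetes. A minimal sketch, assuming an invented deployment name and CPU target rather than MoneyGram's actual settings:

```yaml
# Hypothetical HPA: scale a claims service between 4 and 8 replicas based
# on average CPU utilization, instead of ops scaling VMs by hand.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: claims-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: claims-service
  minReplicas: 4
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With a policy like this, the scale-up and scale-back happens automatically, with no outage window to plan.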
So that was one of the things. Monitoring, management, and operations — just visibility into how those applications were doing, what the status of your workloads was — was also a challenge. >> So I've got to ask the question: if someone came to me with that problem, I'd just say, you know what, go to the public cloud. How does your group help solve some of these challenges? What do you guys do? >> Yeah, what do we do? Here's my perspective on the market as it's playing out. I see a bifurcation happening in the Kubernetes space. There's the Kubernetes runtime — Amazon has EKS, Azure has AKS, and there are enough of these available. These are managed services, and they're actually really good, frankly. In fact, I tell customers: if you're on Amazon, why would you spin up your own? Just use EKS, it's awesome. But then there's an operational layer that is needed to run Kubernetes. My perspective is that 50,000 enterprises are adopting Kubernetes over the next five to 10 years, and they're all going to go through the same exact journey, and they're all going to end up potentially making the same mistake, which is assuming that Kubernetes is easy. <laugh> They're going to say, well, this is not hard, I got this up and running on my laptop.
So now it's better to take these really, really sharp engineers and have them work on things that make the company money, writing operations for Kubernetes. This is a commodity. Now >>How confident are you that the cloud providers won't get in and do what you do and put you out of business? >>Yeah, I mean, absolutely. I think, I mean, in fact, I, I had a conversation with somebody from HBS this morning and I was telling them, I don't think you have a choice. You have to do this right. Competition is not a bad thing. Right? This, the, >>If we are the only company in a space, this is not a space, right. The bet we are making is that every enterprise has, you know, they have an on-prem strategy. They have at least a handful of, everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud and then they have some other cloud that they're also thinking about, um, for them to only rely on one cloud's tools to solve for on-prem plus that second cloud, they potentially, they may have, that's a tough thing to do. Um, and at the same time we as a vendor, I mean the only real reason why startups survive is because you have technology that is truly differentiated, right. Otherwise, right. I mean, you gotta build something that is materially. Interesting. Right. We seem to have, sorry, go ahead. >>No, I was gonna ask you, you actually had me thinking about something, a non yes. MoneyGram big, well known company, a startup, adding, working in a space with Google, VMware, all the biggest names. What brought you to Rafi to solve this operational challenge? >>Yeah. Good question. So when we started out sort of in our Kubernetes, um, you know, we had heard about EKS, uh, and, and we are an AWS shop. So, uh, that was the most natural path. And, and we looked at, um, EKS and, and used that to, you know, create our clusters. Um, but then we realized very quickly that yes, toe's point AWS manages the control plane for you. It gives you the high availability. So you're not managing those components, which is some really heavy lifting. Right. Uh, but then what about all the other things like, you know, centralized dashboard, what about, we need to provision, uh, Kubernetes clusters on multi-cloud right. We have other clouds that we use, uh, or also on prem. Right. Um, how do you do some of that stuff? Right. Um, we, we also, at that time were looking at, uh, other, uh, tools also. >>And I had, I remember come up with an MVP list that we needed to have in place for day one or day two, uh, operations, right. To before we even launch any single applications into production. Um, and my ops team looked at that list. Um, and literally there was only one or two items that they could check, check off with S you know, they they've got the control plane, they've got the cluster provision, but what about all those other components? Uh, and some of that kind of led us down the path of, uh, you know, looking at, Hey, what's out there in this space. And, and we realized pretty quickly that there weren't too many, there were some large providers and capabilities like Antos, but we felt that it was, uh, a little too much for what we were trying to do. You know, at that point in time, we wanted to scale slowly. We wanted to minimize our footprint. Um, and, and Rafa seemed to sort of, uh, was, was a nice mix, uh, you know, uh, from all those different angles, how >>Was, how was the situation affecting your developer experience? >>So, um, so that's a really good question also. 
So operations was one aspect of, to it, right? The other part is the application development, right? We've got, uh, you know, Moneygrams when a lot of organizations have a plethora of technologies, right? From, from Java to.net to no GS, what have you, right. Um, now as you start saying, okay, now we're going cloud native, and we're gonna start deploying to Kubernetes. Um, there's a fair amount of overhead because a tech stack, all of a sudden goes from, you know, just being Java or just being.net to things like Docker, right? All these container orchestration and deployment concerns, Kubernetes, uh, deployment artifacts, right. I gotta write all this YAML, uh, as my developer say, YAML, hell right. <laugh>, uh, I gotta learn Docker files. I need to figure out, um, a package manager like helm, uh, on top of learning all the Kubernetes artifacts. >>Right. So, um, initially we went with sort of, okay, you know, we can just train our developers. Right. Um, and that was wrong. Right. I mean, you can't assume that everyone is gonna sort of learn all these deployment concerns, uh, and we'll adopt them. Right. Um, uh, there's a lot of stuff that's outside of their sort of core dev domain, uh, that you're putting all this burden on them. Right. So, um, we could not rely on them and to be sort of cube cuddle experts, right. That that's a fair amount, overhead learning curve there. Um, so Rafa again, from their dashboard perspective, right? So the managed cube cuddle gives you that easy access for devs, right. Where they can go and monitor the status of their workloads. Um, they can, they don't have to figure out, you know, configuring all these tools locally just to get it to work. >>Uh, we did some things from a DevOps perspective to basically streamline and automate that process. But then also office order came in and helped us out, uh, on kind of that providing that dashboard. They don't have to worry. They can basically get on through single sign on and have visibility into the status of their deployment. Uh, they can do troubleshooting diagnostics all through a single pane of glass. Right. Which was a key key item. Uh, initially before Rafa, we were doing that command line. Right. And again, just getting some of the tools configured was, was huge. Right. Took us days just to get that. And then the learning curve for development teams, right? Oh, now you gotta, you got the tools now you gotta figure out how to use it. Right. Um, so >>See, talk to me about the, the cloud native infrastructure. When I look at that entire landscaping number, I'm just overwhelmed by it. As a customer, I look at it, I'm like, I, I don't know where to start I'm sure. Or not, you, you folks looked at it and said, wow, there's so many solutions. How do you engage with the ecosystem? You have to be at some level opinionated, but flexible enough to, uh, meet every customer's needs. How, how do you approach that? >>Yeah. So it's a, it's a really tough problem to solve because, so, so the thing about abstraction layers, you know, we all know how that plays out, right? So abstraction layers are fundamentally never the right answer because they will never catch up. Right. Because you're trying to write and layer on top. So then we had to solve the problem, which was, well, we can't be an abstraction layer, but then at the same time, we need to provide some sort of, sort of like centralization standardization. Right. 
So, so we sort of have this, the following dissonance in our platform, which is actually really important to solve the problem. So we think of a, of a stack as sort of four things. There's the, there's the Kubernetes layer infrastructure layer, um, and EKS is different from ES and it's okay. Mm-hmm <affirmative>, if we try to now bring them all together and make them behave as one, our customers are gonna suffer because there are features in ESS that I really want. >>But then if you write an AB obsession layer, I'm not gonna get 'em so not. Okay. So treat them as individual things. And we logic that we now curate. So every time S for example, goes from 1 22 to 1 23, rewrite a new product, just so my customer can press a button and upgrade these clusters. Similarly, we do this fors, we do this for GK. We it's a really, really hard job, but that's the job. We gotta do it on top of that, you have these things called. Add-ons like my network policy, my access management policy, my et cetera. Right. These things are all actually the same. So whether I'm Anek or a Ks, I want the same access for Keith versus a none. Right. So then those components are sort of the same across doesn't matter how many clusters does money clouds on top of that? You have applications. And when it comes to the developer, in fact, I do the following demo a lot of times because people ask the question, right? Mean, I, I, I, people say things like, I wanna run the same Kubernetes distribution everywhere, because this is like Linux, actually, it's not. So I, I do a demo where I spin up a access to an OpenShift cluster and an EKS cluster and an AKs cluster. And I say, log in, show me which one is, which they're all the same. >>So Anan get, put, make that real for me, I'm sure after this amount of time, developers groups have come to you with things that are snowflakes and you, and as a enterprise architect, you have to make it work within your framework. How has working with RAI made that possible? >>Yeah. So, um, you know, I think one of the very common concerns is right. The whole deployment, right. Uh, toe's point, right. Is you are from an, from a deployment perspective. Uh, it's still using helm. It's still using some of the same tooling, um, right. But, um, how do you Rafa gives us, uh, some tools, you know, they have a, a command line, art cuddle API that essentially we use. Um, we wanted parody, um, across all our different environments, different clusters, you know, it doesn't matter where you're running. Um, so that gives us basically a consistent API for deployment. Um, we've also had, um, challenges, uh, with just some of the tooling in general, that we worked with RA actually to actually extend their, our cuddle API for us, so that we have a better deployment experience for our developers. So, >>Uh Huie how long does this opportunity exist for you? At some point, do the cloud providers figure this out or does the open source community figure out how to do what you've done and, and this opportunity is gone. >>So, so I think back to a platform that I, I think very highly of, which is a highly off, which has been around a long time and continues to live vCenter, I think vCenter is awesome. And it's, it's beautiful. VMware did an incredible job. Uh, what is the job? Its job is to manage VMs, right? But then it's for access. It's also storage. It's also networking and a sex, right? 
All these things got done because to solve a real problem, you have to think about all the things that come together to solve, help you solve that problem from an operations perspective. Right? My view is that this market needs essentially a vCenter, but for Kubernetes, right. Um, and that is a very broad problem, right. And it's gonna spend, it's not about a cloud, right? I mean, every cloud should build this. I mean, why would they not? It makes sense, Anto success, right. Everybody should have one. But then, you know, the clarity in thinking that the Rafa team seems to have exhibited till date seems to merit an independent company. In my opinion, I think like, I mean, from a technical perspective, this products awesome. Right? I mean, you know, we seem to have, you know, no real competition when it comes to this broad breadth of capabilities, will it last, we'll see, right. I mean, I keep doing Q shows, right? So every year you can ask me that question again. Well, you're >>You make a good point though. I mean, you're up against VMware, you're up against Google. They're both trying to do sort of the same thing you're doing. What's why are you succeeding? >>Maybe it's focus. Maybe it's because of the right experience. I think startups only in hindsight, can one tell why a startup was successful? In all honesty. I, I, I've been in a one or two service in the past. Um, and there's a lot of luck to this. There's a lot of timing to this. I think this timing for a com product like this is perfect. Like three, four years ago, nobody would've cared. Like honestly, nobody would've cared. This is the right time to have a product like this in the market because so many enterprises are now thinking of modernization. And because everybody's doing this, this is like the boots storm problem in HCI. Everybody's doing it. But there's only so many people in the industry who actually understand this problem. So they can't even hire the people. And the CTO said, I gotta go. I don't have the people. I can't fill the, the seats. And then they look for solutions and we are that solution that we're gonna get embedded. And when you have infrastructure software like this embedded in your solution, we're gonna be around with the assuming, obviously we don't score up, right. We're gonna be around with these companies for some time. We're gonna have strong partners for the long term. >>Well, vCenter for Kubernetes, I love to end on that note, intriguing conversation. We could go on forever on this topic, cuz there's a lot of work to do. I think, uh, I don't think this will over be a solve problem for the Kubernetes of cloud native solution. So I think there's a lot of opportunity in that space. Hi, thank you for rejoining the cube. I non con welcome becoming a cube alum. <laugh> I awesome. Thank you. Get your much your profile on the, on the Ken's. Website's really cool from Valencia Spain. I'm Keith Townsend, along with my whole Paul Gillon and you're watching the cube, the leader in high tech coverage.

Published Date : May 18 2022

SUMMARY :

Keith Townsend and Paul Gillin talk with Haseeb Budhani, CEO of Rafay, and Adnan Khan, enterprise architect at MoneyGram, at KubeCon + CloudNativeCon Europe 2022. They cover MoneyGram's move from manually scaled VMs and Spring Cloud microservices to EKS, why the operational layer around Kubernetes — access, policy, visibility, multi-cluster and multi-cloud management — is where the real work lies, how Rafay's dashboard and rctl tooling improved the developer experience, and Haseeb's case that the market needs a "vCenter for Kubernetes."


Matt Coulter, Liberty Mutual | AWS re:Invent 2021


 

(upbeat music) >> Good afternoon and welcome back to Las Vegas. You're watching theCUBE's coverage of AWS 2021. My name is Dave Vellante. theCUBE goes out to the events. We extract the signal from the noise. Very few physical events this year doing a lot of hybrid stuff. It's great to be back in hybrid event... Physical event land, 25,000 people here. Probably a little few more registered than that. And then on the periphery, got to be another at least 10,000 people that came in, flew in and out, see what's happening. A bunch of VCs, checking things out, a few parties last night and so forth. A lot of action here. It's like re:Invent is back. Matt Coulter is here. He's a technical architect at Liberty Mutual. Matt, thanks for flying in from Belfast. Good to see ya. >> Dave, and thanks for having me today. >> Pleasure. So what's your role as a technical architect? Maybe describe that, we'll get into a little bit. >> Yeah so I am here to empower and enable our developers across the globe to rapidly deliver business value and solve problems for our customers in a well-architected way that doesn't introduce problems or risks, you know, later down the line. So instead of thinking of me as someone who directly every day, build software, I try to create the environment where other people can rapidly build software. >> That's, you know, it's interesting. because you're a developer, right? You can use like, "Hey I code." That's what normally you would say but you're actually creating frameworks and business model so that others can learn, teach them how to fish, so we speak. >> Yeah because I can only scale, there's a certain amount. Whereas if I can teach, there's 5,000 people in Liberty Mutual's tech organization. So if I can teach the 5,000 to be 5% better, it's way more than me even if I 10Xed >> When did you first touch the Cloud? >> Personally, it would have been four/five years ago. That's when I started in the Cloud. >> What was that experience like for you? >> Oh, it was hard. It was very different to anything that we'd done in the past. So it's because you... Traditionally, you would have just written your small piece of code. You would have had a big application that was out there, it had been out there maybe 20 years, it was deployed, and you were just adding a couple of lines. Whereas when you start putting stuff into the Cloud, it's out there. It's on the internet for anyone there to try and hack or try to get into. It was a bit overwhelming the amount that you needed to learn. So it was- >> Was it worth it? >> Oh yeah. Completely. (laughing) So that's the thing, that I would never go back to the way we did things before. And that's why I'm so passionate, enthusiastic about the stuff I've been doing. Because to me, the amount of benefits you can get, like now we can deliver thing. We have teams going out there and doing discovery and framing with the business. And they're pushing well-architected products three days later into production. That was unheard of before, you know, this year. >> Yeah. So you were part of Werner's keynote this morning. Of course that's always one of the keynotes that's most anticipated at re:Invent. It's on the sort of last day. He's awesome. This is you know, 10th year of re:Invent. He sort of did a look back. He started out (chuckles) he's just a cool guy and very passionate. But talk about what your role was in the keynote. >> Yeah so I had a section towards the end of the keynote, and I was to talk about Liberty Mutual's serverless first journey. 
I actually went through from 2014 through to the current day of all the major Cloud milestones that we've hit. And I talked through some of the impact it's had on our business and the impact it's had on our developers. And yeah it's just been this incredible journey where as I said, it was hard at the start. So we had to spark this culture within our company that we were going to empower and enable our developers and we were going to get them excited about doing this. And that's why we needed to make it safe. So there was a lot of work went down at the start to make the Cloud safe for our developers to experiment. And then the past two years have been known that it's safe, okay? Let's see what it can do. Let's go. >> Yeah so Liberty Mutual has been around many many years, Boston-based, you know, East Coast-based, my home city. I don't live in Boston but I consider it my city. And so talk about your business a little bit because you're an established company. I don't know, probably a hundred years old, right? Any all other newbies nipping at your business, right? Coming in with low-cost products. Maybe not bringing as much protection as you dig into it. But regardless, you've got to compete with them technically. So what are some of the drivers in your business and how are you using the Cloud to sort of defend your turf and grow? >> Yeah so first of all, we're 109 years old. (laughing) Yeah. So absolutely, there's an entire insurtech market of people here gunning for the big Liberty Mutual because we've been here for so long. And our whole thing is we're focused on our customers. So we want to be there for people in their time of need. Because at a point in time whenever you need insurance, typically something is going wrong. And that's why we're building innovative solutions like a serverless call center we built, that after natural disaster, it can automatically process claims in less than four minutes. So instead of having to wait on hold for maybe an hour, you can just text or pick up the phone, and four minutes later your claims are through. And that's we're using technology always focused on the customer. >> That's unbelievable. Think about that experience, to me. I mean I've filed claims before and it's, it's kind of time consuming. And you're saying you've compressed that to minutes? Days, weeks, you know, and now you've compressed that to minutes? >> Yeah. >> Tell us more about how you did that. >> And that's because it's a fully serverless solution that was built. So it doesn't require like people to scale. It can scale to whatever number of our customers need to make a claim at that point because that would typically be the bottleneck if there's some kind of natural disaster. So that means that if something happens we can just switch it on. And customers can choose not to use it. You can always choose to say I want to speak to a person. But now with this technology, we can just make it easy and just go. Everything, all the information we know in the back end, we just use it and actually make things better for you. >> You're talking about the impact that it had on your business and developers. So how do you quantify that? Maybe start with the business. Maybe share some ways in which you look at that measure. >> Yeah, so I mean, in terms of how we measure the impact of the Cloud on our business, we're always looking at our profitability and we're always looking, as I say, at our customers. 
And ideally, I want our Cloud bill to go down as our number of customers goes up, because that's why we're using the serverless-first mindset, as we call it. We don't want to build anything we don't have to build. We want to take the best that's out there, piece it together, and produce these products for our customers. So yeah, that's having an impact on our business, because now developers aren't spending weeks, months, years doing all this configuration. They can actually sit down with the business and understand how we write insurance. So now we can start being innovative with our products and talking about the real business instead of everything else. >> When you say you want your Cloud bill to go down, it reminds me of the old days of IT budgeting, right? It was always slash, do more with less, cut, cut, cut, and it kind of went in cycles. But with the Cloud, a lot of customers I talk to say it might be going down as a percentage of revenue, but actually it might be going up as you launch more projects, because those projects are driving revenue. There's a tighter tie between revenue and Cloud bill. How do you look at that? >> Yeah. So with every project, you have to look at whether the development effort is worth it and whether or not it's going to hold its own in the market. And the key thing is, with the serverless products being released now, they cost pennies at low scale. So you can actually launch a new product into the market and it maybe only costs you $20 to see if that thing would fit in the market. By the time you're getting into the big bills, you know whether or not you've got market fit, and you can decide whether you want to pivot. >> Oh wow. So you've compressed — that's another business metric — you've compressed the time to get certainty around product market fit, right? Which is huge, because you really can't go to market until you have product market fit. (laughing) >> Exactly. You have to thoroughly understand if it's going to work. >> Right, because if you go to market and you've got 50% churn, (laughing) well, you don't want to be worried about the go-to-market. You've got to get back to the product so you can test that and iterate. >> So that's why, as I said, we have developers who can go out and do discovery and framing on a potential product and deliver it three days later. (chuckles) >> How has the Cloud affected developer satisfaction or passion? I guess it's... I mean, we're in AWS Cloud. Our developers, if we told them, "Okay, you've got to go back on-prem," they would say, "I quit." (laughing) How has it affected their lives? >> Yeah, it's completely changed for them; it's way better. Now we have way more ownership over everything we do, so it feels like you're truly a part of Liberty Mutual and you're solving Liberty's problems. Because it's not a case of, "Okay, let's put in a request to stand up a server, it's going to take six months, and then let's do some big long acquisition." It's a case of, "Let's actually get down into the nitty gritty of what we're going to build." And that's- >> How do you use the Cloud Development Kit? Maybe you could talk about that. I mean, explain what it is — it's a framework — but explain it from your perspective. >> Yeah, so the Cloud, typically, started off with a lot of the work being done by Cloud infrastructure engineers who created these big YAML files. That's how they defined all the stuff that was going to be deployed.
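For a flavor of those big YAML files — the CloudFormation templates that CDK generates on a developer's behalf — here is a trimmed, hand-written sketch for a single versioned, encrypted S3 bucket. The resource name and settings are invented for illustration; real templates grow to hundreds or thousands of lines once IAM, logging, and networking are wired in.

```yaml
# Hypothetical hand-written CloudFormation fragment for one S3 bucket.
# A CDK construct in TypeScript, Java, .NET, or Python can generate this
# (and the surrounding wiring) from a few lines of code.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ClaimsDocumentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
      VersioningConfiguration:
        Status: Enabled
```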
But that's not typically the development language that most developers use. The CDK is in Java, TypeScript, .NET, Python, the languages developers already know and love. And it means that they can use everything they already know from all of their previous development experience and bring it to the Cloud. And you see some real benefits. Like I talked about this morning, a 1,500-line YAML file was reduced to 14 lines of TypeScript. And that's what we're talking about with the cognitive difference for a developer using CDK versus anything else. >> Cognitive abstraction, >> Right? >> Yeah. And so it just simplifies your life and you spend more time doing cool stuff. >> Yeah, we can write an abstraction for our specific needs once, and then everybody can use that abstraction. And if we want to make a change and make it better, everyone benefits instead of everybody doing the same thing all the time. >> So for people who are unfamiliar, what do you need? You need an AWS account, obviously. You've got to get a command-line interface, I would imagine, maybe some Node.js up and running, or is it- >> Yeah, so that's it. You need an AWS account, and then you need to install CDK, which is from Node Package Manager. And then from there, it depends on which way you want to start. You could use my project, CDK Patterns, which has a whole array of working patterns that you can clone. You just have to type, like, one command and you've got a pattern, and then CDK deploy, and you'll have something working. >> Okay, so what do you do day-to-day? You sort of, you evangelize folks to come in and get trained? Is there just like a backlog of people that want your time? How do you manage that? >> So I try to be the place that I'm needed the most based on impact on the business. And that's why I try to go in. Liberty is split up into different areas, and I try to go into those areas, understand where they are versus where they need to be. And then if I can do that across everywhere, you can see the common thesis, and then I can see where I can have the most impact across the board instead of focusing on one micro place. So there's a variety of tools and techniques that I would use, you know, to go through that, but that's the crux of it. >> So you look at your business across the portfolio, so you have a portfolio view. And then you do a gap analysis essentially, say, "Okay, where can I approach this framework and technology from a developer standpoint and add value?" >> Yeah, like I could go into every single team with every single project, draw it all out in what we call a Wardley map, and then you can draw a line and say, "Everything blue in this line is undifferentiated heavy lifting. I want you to migrate that, and here's how you're going to do it; I've already built the tools for that." And that's how we can drive those conversations. >> So, you know, it's funny, I spent a lot of time in the insurance business, not in the business but consulting with heads of application development and looking at portfolios. And you know, they did their thing. But a lot of people sort of question, "Can developers in an insurance company actually become cool Cloud native developers?" You're doing it, right? So that's going to be an amazing transformation for your colleagues and your industry. And it's happening as we look around here (indistinct) >> And that's the thing, in Liberty I'm not the only one. So there's Tommy Gloklin, he's an AWS hero, and there's Diali Mikan, who's an AWS hero.
And Diali is in Workgrid but we're still all the same family. >> So what does it mean to be an AWS hero? >> Yeah so this is something that AWS has to offer you to join. So basically, it's about impacting the community. It's not... There's not like a checklist of items you can go through and you're hero. It's you have to be nominated internally through AWS, and then you have to have the right intentions. And yeah, just follow through. >> Dave: That's awesome. Yeah so our producer, Lynette, is looking for an Irish limerick. You know, every, say I'm half Irish is through my marriage. Dad, you didn't know that, did you? And every year we have a St Patrick's Day party and my daughter comes up with limericks. So I don't know, if you have one that you want to share. If you don't, that's fine. >> I have no limericks for now. I'm so sorry. (laughing) >> There once was a producer from, where are you from? (laughing) So where do you want to take this, Matt? What's your future look like with this program? >> So right now, today, I actually launched a book called the CDK book. >> Dave: Really? Awesome. >> Yeah So me and three other heroes got together and put everything we know about CDK and distilled it into one book. But the... I mean there's two sides, there's inside Liberty. The goal as I've mentioned is to get our developers to the point that they're talking about real insurance problems rather than tech. And then outside Liberty in the community the goal is things like CDK Day, which is a global conference that I created and run. And I want to just grow those farther and farther throughout the world so that eventually we can start learning you know, cross business, cross market, cross the main instead of just internally one company. >> It's impressive how tuned in you are to the business. Do you feel like the Cloud almost forces that alignment? >> It does. It definitely does. Because when you move quickly, you need to understand what you're doing. You can't bluff almost, you know. Like everything you're building you're demonstrating that every two weeks or faster. So you need to know the business to do it. >> Well, Matt, congratulations on all the great work that you've done and the keynote this morning. You know, true tech hero. We really appreciate your time coming in theCUBE. >> Thank you, Dave, for having me. >> Our pleasure. And thank you for watching. This is Dave Vellante for theCUBE at AWS re:Invent. We are the leader global tech coverage. We'll be right back. (light upbeat music)
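To make the CDK conversation above concrete, here is a minimal sketch of the kind of stack Matt is describing: a serverless endpoint defined in a handful of lines of TypeScript instead of hundreds of lines of hand-written YAML. This is an illustrative example, not Liberty Mutual's actual code; it assumes a recent aws-cdk-lib (v2), and the construct names, asset path, and runtime are placeholders.

```typescript
// Minimal CDK v2 sketch: an HTTP endpoint backed by a Lambda function.
// All names, the asset path, and the runtime are illustrative placeholders.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';

class ClaimsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The serverless handler; its code is loaded from a local "lambda" directory.
    const handler = new lambda.Function(this, 'ClaimsHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // Put an API Gateway REST endpoint in front of the function.
    new apigw.LambdaRestApi(this, 'ClaimsApi', { handler });
  }
}

const app = new App();
new ClaimsStack(app, 'ClaimsStack');
app.synth();
```

The workflow Matt outlines maps onto this directly: install the CDK CLI from npm (npm install -g aws-cdk) and run cdk deploy, and the CLI synthesizes and deploys the underlying CloudFormation template, so nobody hand-writes the YAML. The reusable abstractions he mentions are just constructs layered on top of stacks like this one.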

Published Date : Dec 3 2021


Bob Wise, AWS & Peder Ulander, AWS | Red Hat Summit 2021 Virtual Experience


 

(smart gentle music) >> Hey, welcome back everyone to theCUBE's coverage of Red Hat Summit 2021 virtual. I'm John Furrier, host of theCUBE, got two great guests here from AWS, Bob Wise, General Manager of Kubernetes for Amazon Web Services and Peder Ulander, Head of product marketing for the enterprise developer and open-source at AWS. Gentlemen, you guys are the core leaders in the AWS open-source initiatives. Thanks for joining us on theCUBE here for Red Hat Summit. >> Thanks for having us, John. >> Good to be here. >> So the innovation that's come from people building on top of the cloud has just been amazing. You guys, props to Amazon Web Services for constantly adding more and raising the bar on more services every year. You guys do that, and now public cloud has become so popular, and so important that now Hybrid has pushed the Edge. You got outpost with Amazon you see everyone following suit. It's pretty much clear vote of confidence from the customers that, Hybrid is the operating model of the future. And that really is about the Edge. So I want to chat with you about the open-source intersection there, so let's get into it. So we're here at Red Hat Summit. So Red Hat's an open-source company and timing is great for them. Now, part of IBM you guys have had a relationship with Red Hat for some time. Can you tell us about the partnership and how it's working together? >> Yeah, absolutely. Why don't I take that one? AWS and Red Hat have been strategic partners since, shoot, I think it's 2008 or so in the early days of AWS, when engaging with customers, we wanted to ensure that AWS was the best place for enterprises to run their Red Hat workloads. And this is super important when you think about, what Red Hat has accomplished with RHEL in the enterprise, it's running SAP, it's running Oracle's, it's running all different types of core business applications, as well as a lot of the new things that customers are innovating. And so having that relationship to ensure that not only did it work on AWS, but it actually scaled we had integration of services, we had the performance, the price all of the things that were so critical to customers was critical from day one. And we continue to evolve this relationship over time. As you see us coming into Red Hat Summit this year. >> Well, again, to the hard news here also the new service Red Hat OpenShift servers on AWS known as ROSA, the A for Amazon Red Hat OpenShift, A for Amazon Web Services, a clever acronym but really it's on AWS. What exactly is this service? What does it do? And who is it designed for? >> Well, I'll let me jump in on this one. Maybe let's start with the why? Why ROSA? Customers love using OpenShift, but they also want to use AWS. They want the best of both. So they want their peanut butter and their chocolate together in a single confection. A lot of those customers have deployed AWS, have deployed OpenShift on AWS. They want managed service simplified supply chain. We want to be able to streamline moving on premises, OpenShift workloads to AWS, naturally want good integration with AWS services. So as to the, what? Our new service jointly operated is supported by Red Hat and AWS to provide a fully managed to OpenShifts on AWS. So again, like lot of customers have been running OpenShift on AWS before this time, but of course they were managing it themselves typically. And so now they get a fully managed option with also simplified supply chain. Single support channels, single billing. 
>> You know, were talking before we came on camera about the acronym on AWS and people build on the clouds kind of like it's no big deal to say that, but I know it means something. I want to explain, you guys to explain this on because I know I've been scolded saying things on theCUBE that were kind of misspoken because it's easy to say, Oh yeah, I built that app. We built all this stuff on theCUBE was on AWS, but it's not on AWS. It means something from a designation standpoint what does on AWS mean? 'Cause this is OpenShift servers on AWS, we see this other companies have their products on AWS. This is specific designation. Can you share, please. >> John, when you see the branding of something like Red Hat on AWS, what that basically signals to our customers is that this is joint engineering work. This is the top of the strategic partners where we actually do a lot of joint engineering and work to make sure that we're driving the right integrations and the right experience, make sure that these things are accessible and discoverable in our console. They're treated effectively as a first-class service inside of the AWS ecosystem. So it's, there's not many of the on's, if you will. You think about SAP on VMware cloud, on AWS, and now Red Hat OpenShift on AWS, it really is that signal that helps give customers the confidence of tested, tried, trued, supported and validated service on top of AWS. And we think that's significantly better than anything else. It's easy to run an image on a VM and stuffed it into a cloud service to make it available, but customers want better, customer want tighter experiences. They want to be able to take advantage of all the great things that we have from a scale availability and performance perspective. And that's really what we're pushing towards. >> Yeah. I've seen examples specifically where when partners work with Amazon at that level of joint engineering, deeper partnerships. The results were pretty significant on the business side. So congratulations to you guys working with OpenShift and Red Hat, that's real testament to their product. But I got to ask you guys, pull the Amazon playbook out and challenge you guys, or just, create a new some commentary around the process of working backwards. Every time I talked to Andy Jassy, he always says, we work backwards from the customer and we get the requirements, and we're listening to customers. Okay, great. He loves that, he loves to say that it's true. I know that I've seen that. What is the customer work backwards document look like here? What is the, what was the need and what made this become such an important part of AWS? What was the, and then what are they saying now, now that the products out there? >> Well, OpenShift has a very wide footprint as does AWS. Some working backwards documents kind of write themselves, because now the customer demand is so strong that there's just no avoiding it. Now, it really just becomes about making sure you have a good plan so it becomes much more operational at that point. ROSA's definitely one of those services. We had so much demand and as a result, no surprise that we're getting a lot of enthusiasm for customers because so many of them asked us for it. (crosstalk) >> What's been the reaction in asking demand. That's kind of got the sense of that, but okay. So there's demand now, what's the what's the use cases? What are customers saying? What's the reaction been? 
>> Lot of the use cases are these Hybrid kind of use cases where a customer has a big OpenShift footprint. What we see from a lot of these customers is a strong demand for consistency in order to reduce IT sprawl. What they really want to do is have the smallest number of simplest environments they can. And so when customers that standardized on OpenShift really wants to be able to standardize OpenShifts, both in their on premises environment and on AWS and get managed service options just to remove the undifferentiated heavy lifting. >> Hey, what's your take on the product marketing side of this, where you got open-source becoming very enterprise specific, Red Hat's been there for a very long time. I've been user of Red Hat since the beginning and following them, and Linux, obviously is Linux where that's come from. But what features specifically jump out in this offering that customers are resonating around? What's the vibe here? >> John, you kind of alluded to it early on, which is I don't know that I'd necessarily call it Hybrid but the reality is our customers have environments that are on premises in the cloud and all the way out to the Edge. Today, when you think of a lot of solutions and services, it's a fractured experience that they have between those three locations. And one of our biggest commitments to our customers, just to make things super simple, remove the complexity do all of the hard work, which means, customers are looking for a consistent experience environment and tooling that spans data center to cloud, to Edge. And that's probably the biggest kind of core asset here for customers who might have standardized on OpenShift in the data centers. They come to the cloud, they want to continue to leverage those skills. I think probably one of the, an interesting one is we headed down in this path, we all know Delta Airlines. Delta is a great example of a customer who, joint customer, who have been doing stuff inside of AWS for a long time. They've been standardizing on Red Hat for a long time and bringing this together just gave them that simple extension to take their investment in Red Hat OpenShift and leverage their experience. And again, the scale and performance of what AWS brings them. >> Next question, what's next for a Red Hat OpenShift on AWS in your work with Red Hat. Where does this go next? What's the big to-do item, what do you guys see as the vision? >> I'm glad you mentioned open-source collaboration at the start there. We're taking to point out is that AWS works on the Kubernetes project upstream as does the Red Hat teams. So one of the ways that we collaborate with the Red Hat team is in open-source. One of those projects is on a new project called ACK. It was on controllers for Kubernetes and this is a kind of Kubernetes friendly way for my customers to use an API to manage AWS services. So that's one of the things that we're looking forward to as that goes GA wobbling out into both ROSA and onto our other services. >> Awesome. I got to ask you guys this while you're here, because it's very rare to get two luminaries within AWS on the open-source side. This has been a huge build-out over the many, many years for AWS, and some people really kind of don't understand kind of the position. So take a minute to clarify the position of AWS on open-source. You guys are very active in a lot of projects. You mentioned upstream with Kubernetes in other areas. I've had many countries with Adrian Cockcroft on this, as well as others within AWS. 
Huge proponents web services, I mean, you go back to the original Amazon. I mean, Jeff Barr was saying 15 years ago some of those API's are still in play here. API's back in 15 years ago, that was kind of not main stream at that time. So you had open standards, really made Amazon web services successful and you guys are continuing it but as the modern era is very enterprise, like and you see a lot of legacy, you seeing a lot more operations that they're going to be driven by open technologies that you guys are investing in. I'll take a minute to explain what AWS is doing and what you guys care about and your mission? >> Yeah. Well, why don't I start? And then we'll kick it over to Bob 'cause I think Bob can also talk about some of the key contribution sides, but the best way to think about it is kind of in three different pillars. So let's start with the first one, which is, around the fact of ensuring that our customer's favorite open-source projects run best on AWS. Since 2006, we've been helping our customers operationalize their open-source investments and really kind of achieve that scale and focus more on how they use and innovate on the products versus how they set up and run. And for myself being an open-source since the late 90s, the biggest opportunity, yet challenge was the access to the technology, but it still required you as a customer to learn how to set up, configure, operationalized support and sustain. AWS removes that heavy lifting and, again, back to that earlier point from the beginning of AWS, we helped customers scale and implement their Apache services, their database services, all of these different types of open-source projects to make them really work exceptionally well on AWS. And back to that point, make sure that AWS was the best place for their open-source projects. I think the second thing that we do, and you're seeing that today with what we're doing with ROSA and Red Hat is we partner with open-source leaders from Red Hat to Redis and Confluent to a number of different players out there, Grafana, and Prometheus, to even foundations like the LF and the CNCF. We partner with these leaders to ensure that we're working together to grow grow the overall experience and the overall the overall pie, if you will. And this kind of gets into that point you were making John in that, the old world legacy proprietary stuff, there's a huge chance for refresh and new opportunity and rethinking or modernization if you will, as you come into the cloud having the expertise and the partnerships with these key players is as enterprises move in, is so crucial. And then the third piece I'd like to talk about that's important to our open-source strategies is really around contribution. We have a number of projects that we've delivered ourselves. I think the two most recent ones that really come top of mind for me is, what we did with Babel Fish, as well as with OpenSearch. So contributing and driving a true open-source project that helps our customers, take advantage of things like an SQL, a proprietary to open-source SQL conversion tool, or what we're doing to make Elasticsearch, the opportune or the primary open platform for our customers. But it's not just about those services, it's also collaborating with key industry initiatives. Bob's at the forefront of that with what we're doing with the CNCF around things, like Kubernetes and Prometheus et cetera, Bob you want to jump in on some of that? 
>> Sure, I think the one thing I would add here is that customers love using those open-source projects. The one of the challenges with them frequently is security. And this is job zero to AWS. So a lot of the collaboration work we do, a lot of the work that we do on upstream projects is go specifically around kind of security oriented things because that is what customers expect when they come to get a managed service at AWS. Some of those efforts are somewhat unsung because you generally do more work and less talk, in security oriented things. But projects across AWS, that's always a key contribution focus for us. >> Good way to call out security too. I think that's being built-in to the everything now, that's an operating model. People call it shift-left day two operations. Whatever you want to look at it. You got this nice formation going between under the hood kind of programmability of the infrastructure at scale. And then you have the modern application development which is just beginning, programmable DevSecOps. It's funny, Bob, I'd love to get your take on this because I remember in the 80s and during the Unix generation I used to peddle software under the table. Like, here's a copy of, you just don't tell anyone, people in the younger generation don't get the fact that it wasn't always open. And so now you have open and you have this idea of an enterprise that's going to be a system management system view. So you got engineering and you got computer science kind of coming together, this SRE middle layer. You're hearing that as a, kind of a new discipline. So DevOps kind of has won. I mean, we kind of knew this for many, many years. I said this in 2013 on theCUBE actually at re-inventing. I just recently shared that clip. But okay, now you've got SecOps, DevSecOps. So now you have an era where it's a system thinking and open-source is driving all of that. So can you share your perspective because this is kind of where the puck is going. It's an open to open world. That's going to have to be open and scalable. How does open-source and you guys take it to the next level to give that same scale and reliability? What's your vision? >> The key here is really around automation and what we're seeing you could look at Kubernetes. Kubernetes, is essentially a robot. It was like the early design of it was built around robotics principles. So it's a giant software robot and the world has changed. If you just look at the influx of all kinds of automation to not just the DevOps world but to all industries, you see a similar kind of trend. And so the world of IT operations person is changing from doing the work that the robot did and replacing it with the robot to managing large numbers of robots. And in this case, the robots are like a little early and a little hard to talk to. And so, you end up using languages like YAML and other things, but it turns out robots still just do what you tell them to do. And so one of the things you have to do is be really, really careful because robots will go and do whatever it is you ask them to do. On the other hand, they're really, really good at doing that. So in the security area, they take the research points to the largest single source of security issues, being people making manual mistakes. And a lot of people are still a little bit terrified if human beings aren't touching things on the way to production. In AWS, we're terrified if humans aren't touching it. 
And that is a super hard chasm to cross and open-source projects have really, are really playing a big role in what's really a IT wide migration to a whole new set of, not just tools, but organizational approaches. >> What's your reaction to that? Because we're talking that essentially software concepts, because if you write bad code, the code will execute what you did. So assuming it compiles left in the old days. Now, if you're going to scale a large scale operations that has dynamic capabilities, services being initiated in terminating tear down up started, you need the automation, but if you really don't design it right, you could be screwed. This is a huge deal. >> This is one reason why we've put so much effort into getops that you can think of it as a more narrowly defined subset of the DevOps world with a specific set of principles around using kind of simplified declarative approaches, along with robots that converge the desired state, converge the system to the desired state. And when you get into large distributed systems, you end up needing to take those kinds of approaches to get it to work at scale. Otherwise you have problems. >> Yeah, just adding to that. And it's funny, you said DevOps has won. I actually think DevOps has won, but DevOps hasn't changed (indistinct) Bob, you were right, the reality is it was founded back what quite a while ago, it was more around CICD in the enterprise and the closed data center. And it was one of those where automation and runbooks took addressed the fact that, every pair of hands between service requests and service delivery recreated or created an issue. So that growth and that mental model of moving from a waterfall, agile to DevOps, you built it, you run it, type of a model, I think is really, really important. But as it comes out into the cloud, you no longer have those controls of the data center and you actually have infinite scale. So back to your point of you got to get this right. You have to architect correctly you have to make sure that your code is good, you have to make sure that you have full visibility. This is where it gets really interesting at AWS. And some of the things that we're tying in. So whether we're talking about getops like what Bob just went through, or what you brought up with DevSecOps, you also have things like, AIOps. And so looking at how we take our machine learning tools to really implement the appropriate types of code reviews to assessing your infrastructure or your choices against well-architected principles and providing automated remediation is key, adding to that is observability, developers, especially in a highly distributed environment need to have better understanding, fidelity and touchpoints of what's going on with our application as it runs in production. And so what we do with regards to the work we have in observability around Grafana and Prometheus projects only accelerate that co-whole concept of continuous monitoring and continuous observability, and then kind of really, adding to that, I think it was last month, we introduce our fault injection simulator, a chaos engineering tool that, again takes advantage of all of this automation and machine learning to really help our developers, our customers operate at scale. And make sure that when they are releasing code, they're releasing code that is not just great in a small sense, it works on my laptop, but it works great in a highly distributed massively scaled environment around the globe. 
>> You know, this is one of the things that impresses me about Red Hat this year. And I've said this before all the covers events I've covered with them is that they get the cloud scale piece and I think their relationship with you guys shows that I think, DevOps has won, but it's the gift that keeps giving in open-source because what you have here is no longer a conversation about the cloud moving to the cloud. It's the cloud has become the operating model. So the conversation shifts to much more complicated enterprise or, and or intelligent Edge, and whether it's industrial or human or whatever, you got a data problem. So that's about a programmability issue at scale. So what's interesting is that Red Hat is on those bandwagon. It's an operating system. I mean, basically it's a distributed computing paradigm, essentially ala AWS concept as a cloud. Now it goes to the Edge, it's just distributed services via an open-source. So what's your reaction to that? >> Yeah, it's back to the original point, John where I said, any CIO is thinking about their IT environment from data center to cloud, to Edge and the more consistency automation and, kind of tools that they're at their disposal to enable them to create that kind of, I think you started to talk about an infrastructure the whole as code infrastructure's code, it's now, almost everything is code. And that starts with the operating system, obviously. And that's why this is so critical that we're partnering with companies like Red Hat on our vision and their vision, because they aligned to where our customers were ultimately going. Bob, you want to, you want to add to that? >> Bob: No, I think you said it. >> John: You guys are crushing it. Bob, one quick question for you, while I got you here. You mentioned getops, I've heard this before, I kind of understand it. Can you just quickly define from your perspective. What is getops? >> Sure, well, getops is really taking the, I said before it's a kind of narrowed version of DevOps. Sure, it's infrastructure is code. Sure, you're doing things incrementally but the getops principle, it's back to like, what are the good, what are the best practices we are managing large numbers, large numbers of robots. And in this case, it's around this idea of declarative intent. So instead of having systems that reach into production and change things, what you do is you set up the defined declared state of the system that you want and then leave the robots to constantly work to converge the state there. That seems kind of nebulous. Let me give you like a really concrete example from Kubernetes, by the way the entire Kubernetes system design is based on this. You say, I want five pods running in production and that's running my application. So what Kubernetes does is it sits there and it constantly checks, Oh, I'm supposed to have five pods. Do I have five? Well, what happens if the machine running one of those pods goes away. Now, suddenly it goes and checks and says, Oh, I'm supposed to have five pods, but there's four pods. What action do I take to now try to get the system back to the state. So you don't have a system running, reaching out and checking externally to Kubernetes, you let Kubernetes do the heavy lifting there. And so it goes through, goes through a loop of, Oh, I need to start a new pod and then it converges the system state back to running five pods. So it's really taking that kind of declarative intent combined with constant convergence loops to fully production at scale. >> That's awesome. 
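Bob's five-pods example maps directly onto a standard Kubernetes Deployment manifest. The sketch below is illustrative (the image and names are placeholders), but it captures the declarative intent he describes: you state the desired replica count, apply it, and the controllers keep converging the cluster toward that state.

```yaml
# Declarative desired state: ask for five replicas and let Kubernetes converge.
# The image and names below are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 5                  # "I want five pods running in production"
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0
```

Applying it with kubectl apply -f deployment.yaml records the intent. If a node disappears and takes a pod with it, the controller sees four replicas where five were declared and schedules a replacement, which is exactly the convergence loop Bob describes. (Strictly, a Deployment manages pods through a ReplicaSet rather than directly, but the reconciliation behavior is the same.)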
Well, we do a whole segment on state and stateless future, but we don't have time. I do want to summarize real quick. We're here at the Red Hat Summit 2021. You got Red Hat OpenShift on AWS. The big news, Bob and Peder tell us quickly in summary, why AWS? Why Red Hat? Why better together? Give the quick overview, Bob, we'll start with you. >> Bob, you want to kick us off? >> I'm going to repeat peanut butter and chocolate. Customers love OpenShift, they love managed services. They want a simplified operations, simplified supply chain. So you get the best of both worlds. You get the OpenShift that you want fully managed on AWS, where you get all of the security and scale. Yeah, I can't add much to that. Other than saying, Red Hat is powerhouse obviously on data centers it is the operating system of the data center. Bringing together the best in the cloud, with the best in the data center is such a huge benefit to our customers. Because back to your point, John, our customers are thinking about what are they doing from data center to cloud, to Edge and bringing the best of those pieces together in a seamless solution is so, so critical. And that that's why AW. (indistinct) >> Thanks for coming on, I really appreciate it. I just want to give you guys a plug for you and being humble, but you've worked in the CNCF and standards bodies has been well, well known and I'm getting the word out. Congratulations for the commitment to open-source. Really appreciate the community. Thanks you, thank you for your time. >> Thanks, John. >> Okay, Cube coverage here, covering Red Hat Summit 2021. I'm John Ferry, host of theCUBE. Thanks for watching. (smart gentle music)
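The ACK project Bob mentions earlier (AWS Controllers for Kubernetes) extends that same declarative model to AWS services themselves. As a hedged illustration only, the manifest below follows the shape the ACK S3 controller has used in its alpha releases; the exact API group, version, and fields may differ in current builds, and the bucket name is a placeholder.

```yaml
# Hypothetical ACK manifest: declare an S3 bucket as a Kubernetes custom resource.
# Requires the ACK S3 controller to be installed; group/version vary by release.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: demo-bucket
spec:
  name: demo-bucket-example-1234   # placeholder; S3 bucket names must be globally unique
```

The controller watches resources like this and calls the AWS APIs to converge on the declared state, so teams can manage cloud resources with the same kubectl apply workflow they already use for their applications.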

Published Date : Apr 27 2021


Another test of transitions


 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, (upbeat music) >> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence, you knew what was inside it, where it was built, how it was built, when it was built, you probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. So think of it, kind of like self documenting, Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically what is the Schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store it in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about a, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SPN, who cares? Where are the source files that built, where's the Docker file that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person. 
How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Docker file on your own? Having a label specifically dedicated on how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These kind of all, not only talk about continuous integration, CI but also start to talk about security. Specifically what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example of, in my demo of a policy enforcement. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was a modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of like a standardization. So obviously, Author's, an email address, so now the image is attributable to a person, that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Docker file and all the assets? How it was built, build number, build server the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthZendpoint. Now obviously, the health check directive should be in the Docker file. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's simple declarative And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes? Well, actually, you can and I have a demo to show you how to kind of take advantage of that. So how do we create labels? And really creating labels as a function of build time okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Docker file, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to more a dynamic with build arguments. And I can show you, I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build--label key equals value. I'm not a big fan of the third one, I love the first one and obviously the second one. Being dynamic we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, you can obviously, it's going to output as JSON. So you going to use something like JQ to crack it open and look at the individual labels. 
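As a concrete sketch of what Andy describes (the keys follow the org.opencontainers.image convention he references; every value here is a placeholder, not his actual demo), a Dockerfile can accept build-time arguments and turn them into labels:

```dockerfile
FROM alpine:3.19

# Build-time arguments, typically injected by CI. Values are placeholders.
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT

# OCI-style labels following the org.opencontainers.image convention.
LABEL org.opencontainers.image.authors="you@example.com" \
      org.opencontainers.image.source="https://example.com/your/repo" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.version="${BUILD_NUMBER}"
```

Passing the values in at build time and reading them back afterward looks roughly like this (a CI job would supply the arguments instead of the shell):

```shell
docker build \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --build-arg BUILD_NUMBER=42 \
  --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
  -t registry.example.com/demo:42 .

# Inspect the image and pull out just the labels with jq.
docker inspect registry.example.com/demo:42 | jq '.[0].Config.Labels'
```

The same mechanism is what makes the Compose and Kubernetes trick possible: a manifest can be base64-encoded on the build host, passed in as another build argument, stored in a label, and decoded later by whatever tool queries the image.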
Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all this extra levels of abstraction inherently, if you use it as a label with a kubectl apply, It's just built in. It's kind of like the kiss approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Docker file. I've got my compose and my Kubernetes YAML. So let's take a look at the Docker file, right? So it's a simple Alpine image. The org statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and get commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have built. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub so such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries and non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. 
And it says image required, required image label alert, right? Request that the maintainer, add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in. So if we go back to our repo, we didn't pass in the the build time argument, we just passed in the word. So let's fix that real quick. That's the Docker file. Let's go ahead and put our dollar sign in their. First day with the fingers you going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date and the date gets derived on the command line. With the build arguments, there's the base64 encoded of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom layer exists and successful. So here's where we can see no system policy violations profound marking stack regimes security plugin, build step as successful, okay? So we're actually able to do policy enforcement that that image exists, that that label sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there's it is in my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, author's build, commit number, look at the commit number. It was built today build number 12. We saw that right? Delete, build 12. So that's kind of cool dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdockerketers label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard end. So now we've actually used that label. From the image that we've queried with skopeo, from a remote registry to deploy locally to our Kubernetes cluster. So let's go ahead and look everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.doctor.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer 'cause the background image in the text. 
We can go ahead and refresh it a couple times we've got Redis storing a hit counter. We can see that our server name is roundrobing. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 Vms. I'm using K3s instead of full Kubernetes either full Rancher, full Open Shift or Docker Enterprise. I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels is none of this demo stack is required. You can be in any cloud, you can be in CentOs, you can be in any Kubernetes. You can even be in swarm, if you wanted to, or Docker compose. Any Ingress, any CI system, Jenkins, circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable, with any comparative product in that category. So I'd like to, again, point you guys to the andyc.infodc20, that's take you right to the GitHub repo. You can reach out to me at any of the socials @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas It's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with it's ecosystem partners. >> Okay, welcome back everyone theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent. It's their 8th year of the conference. I want to just shout out to Intel for their sponsorship for these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier. Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy. Tech athlete in and of himself three hour Keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your Keynote was phenomenal. You get right into it, you giddy up right into it as you say, three hours, thirty announcements. You guys do a lot, but what I liked, the new addition, the last year and this year is the band; house band. They're pretty good. >> Andy: They're good right? >> They hit the queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show. >> Yes they are. >> Good plug Andy. >> Headlining. >> Very clever >> Always getting a good plug in there. >> My very favorite band. 
Well congratulations on the Intersect you got a lot going on. Intersect is a music festival, I'll get to that in a second But, I think the big news for me is two things, obviously we had a one-on-one exclusive interview and you laid out, essentially what looks like was going to be your Keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I like to appreciate the comments on Jedi on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of NFL on stage with you talking about a strategic partnership. That is as top down, aggressive goal as you could get to have Rodger Goodell fly to a tech conference to sit with you and then bring his team talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts and one of the things I really like about Roger is that he's very curious and very interested in technology and the first couple times I spoke with him he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences and he's always said if you have a creative idea or something you think that could change the world for us, just call me he said or text me or email me and I'll call you back within 24 hours. And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience both for fans, as well as their players and the Player Health and Safety Initiative, it's so important in sports and particularly important with the NFL given the nature of the sport and they've always had a focus on it, but what you can do with computer vision and machine learning algorithms and then building a digital athlete which is really like a digital twin of each athlete so you understand, what does it look like when they're healthy and compare that when it looks like they may not be healthy and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem can be really transformational so we're super excited about it. >> Did you guys come up with the idea or was it a collaboration between them? >> It was really a collaboration. I mean they, look, they are very focused on players safety and health and it's a big deal for their- you know, they have two main constituents the players and fans and they care deeply about the players and it's a-it's a hard problem in a sport like Football, I mean, you watch it. >> Yeah, and I got to say it does point out the use cases of what you guys are promoting heavily at the show here of the SageMaker Studio, which was a big part of your Keynote, where they have all this data. >> Andy: Right. >> And they're data hoarders, they hoard data but the manual process of going through the data was a killer problem. This is consistent with a lot of the enterprises that are out there, they have more data than they even know. So this seems to be a big part of the strategy. How do you get the customers to actually wake up to the fact that they got all this data and how do you tie that together? >> I think in almost every company they know they have a lot of data. 
And there are always pockets of people who want to do something with it. But, when you're going to make these really big leaps forward; these transformations, the things like Volkswagen is doing where they're reinventing their factories and their manufacturing process or the NFL where they're going to radically transform how they do players uh, health and safety. It starts top down and if the senior leader isn't convicted about wanting to take that leap forward and trying something different and organizing the data differently and organizing the team differently and using machine learning and getting help from us and building algorithms and building some muscle inside the company it just doesn't happen because it's not in the normal machinery of what most companies do. And so it always, almost always, starts top down. Sometimes it can be the Commissioner or CEO sometimes it can be the CIO but it has to be senior level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For NFL, they know concussions, hurting their youth pipe-lining, this is a huge issue for them. This is their business model. >> They lose even more players to lower extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL but Andy, you know- >> No more NFL talk? >> Off camera how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriot's fan now. (Laughter) >> It's fascinating to watch uh, you and your three hour uh, Keynote, uh Werner in his you know, architectural discussion, really showed how AWS is really extending its reach, you know, it's not just a place. For a few years people have been talking about you know, Cloud is an operational model its not a destination or a location but, I felt it really was laid out is you talked about Breadth and Depth and Werner really talked about you know, Architectural differentiation. People talk about Cloud, but there are very-there are a lot of differences between the vision for where things are going. Help us understand why, I mean, Amazon's vision is still a bit different from what other people talk about where this whole Cloud expansion, journey, put ever what tag or label you want on it but you know, the control plane and the technology that you're building and where you see that going. >> Well I think that, we've talked about this a couple times we have two macro types of customers. We have those that really want to get at the low level building blocks and stitch them together creatively however they see fit to create whatever's in their-in their heads. And then we have the second segment of customers that say look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster. In an abstraction that's different from those low level building blocks. And both segments of builders we want to serve and serve well and so we've built very significant offerings in both areas. 
I think when you look at microservices um, you know, some of it has to do with the fact that we have this very strongly held belief born out of several years of Amazon where you know, the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly and when we wanted to move quickly where you had to impact multiple internal development teams it was so long because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of impetus behind conceiving AWS where it was these low level, very flexible building blocks that6 don't try and make all the decisions for customers they get to make them themselves. And some of the microservices that you saw Werner talking about just, you know, for instance, what we-what we did with Nitro or even what we did with Firecracker those are very much about us relentlessly working to continue to uh, tease apart the different components. And even things that look like low level building blocks over time, you build more and more features and all of the sudden you realize they have a lot of things that are combined together that you wished weren't that slow you down and so, Nitro was a completely re imagining of our Hypervisor and Virtualization layer to allow us, both to let customers have better performance but also to let us move faster and have a better security story for our customers. >> I got to ask you the question around transformation because I think that all points, all the data points, you got all the references, Goldman Sachs on stage at the Keynote, Cerner, I mean healthcare just is an amazing example because I mean, that's demonstrating real value there there's no excuse. I talked to someone who wouldn't be named last night, in and around the area said, the CIA has a cost bar like this a cost-a budget like this but the demand for mission based apps is going up exponentially, so there's need for the Cloud. And so, you see more and more of that. What is your top down, aggressive goals to fill that solution base because you're also a very transformational thinker; what is your-what is your aggressive top down goals for your organization because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What's your goals? >> You know I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience in part is that their operational performance and their security are outstanding, but also that it allows them to build, uh, build projects and initiatives that change their customer experience and allow them to be a sustainable successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. 
And we're realistic, we know that you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term and it won't only be us, but we have that goal of wanting to serve every application and that requires not just the security operational premise but also a lot of functionality and a lot of capability. We have by far the most amount of capability out there and yet I would tell you, we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago, about how serverless is impacting things I've heard that that's actually, in many ways, glue behind the two pizza teams to work between organizations. Talk about how the internal transformations are happening. How that impacts your discussions with customers that are going through that transformation. >> Well, I mean, there's a lot of- a lot of the technology we build comes from things that we're doing ourselves you know? And that we're learning ourselves. It's kind of how we started thinking about microservices, serverless too, we saw the need, you know, we would have we would build all these functions that when some kind of object came into an object store we would spin up, compute, all those tasks would take like, 3 or 4 hundred milliseconds then we'd spin it back down and yet, we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance and it was- we just said this is wasteful and, that's part of how we came up with Lambda and you know, when we were thinking about Lambda people understandably said, well if we build Lambda and we build this serverless adventure in computing a lot of people were keeping clusters of instances aren't going to use them anymore it's going to lead to less absolute revenue for us. But we, we have learned this lesson over the last 20 years at Amazon which is, if it's something that's good for customers you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something. And I think if you look at the history of technology you always build things and people say well, that's going to cannibalize this and people are going to spend less money, what really ends up happening is they spend less money per unit of compute but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time. Customers, what they say, we encompass the roadmap, I got that you guys have that playbook down, that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next traunch of importance for customers? What's on top of mind now, as you, look at- >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill from if it's machine learning today- >> There's so much I mean, (weird background noise) >> It's not a soup question (Laughter) And I think we're still in the very early days of machine learning it's not like most companies have mastered it yet even though they're using it much more then they did in the past. 
But, you know, I think machine learning for sure I think the Edge for sure, I think that um, we're optimistic about Quantum Computing even though I think it'll be a few years before it's really broadly useful. We're very um, enthusiastic about robotics. I think the amount of functions that are going to be done by these- >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs, they're just going to work on things that are more value added. We're believers in augmented virtual reality, we're big believers in what's going to happen with Voice. And I'm also uh, I think sometimes people get bored you know, I think you're even bored with machine learning already >> Not yet. >> People get bored with the things you've heard about but, I think just what we've done with the Chips you know, in terms of giving people 40% better price performance in the latest generation of X86 processors. It's pretty unbelievable in the difference in what people are going to be able to do. Or just look at big data I mean, big data, we haven't gotten through big data where people have totally solved it. The amount of data that companies want to store, process, analyze, is exponentially larger than it was a few years ago and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well I think we're not bored with machine learning we're excited to get started because we have all this data from the video and you guys got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock- >> You guys are very sophisticated with what you do with technology and machine learning and there's so much I mean, we're just kind of, again, in such early innings. And I think that, it was so- before SageMaker, it was so hard for everyday developers and data scientists to build models but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker studio, giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud so we really appreciate that. >> Andy: Yeah. >> And appreciate your support- >> Andy: Yeah, of course. >> John: With Amazon and get that machine learning going a little faster for us, that would be better. >> If you have requests I'm interested, yeah. >> So Andy, you talked about that you've got the customers that are builders and the customers that need simplification. Traditionally when you get into the, you know, the heart of the majority of adoption of something you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. how'l I normally would've said enterprise want to pay for solutions because they don't have the skill set but, if they're going to succeed in this new economy they need to go through that transformation >> Andy: Yeah. >> That you talk to, so, I mean, are we in just a total new era when we look back will this be different than some of these previous waves? >> It's a really good question Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think wish that they could just skip the low level building blocks and only operate at that higher level abstraction. 
That's why people were so excited by things like, SageMaker, or CodeGuru, or Kendra, or Contact Lens, these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in house and they are wanting to operate more and more like startups. I think that they admire what companies like AirBnB and Pintrest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done and so when, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so, we see more and more enterprises saying we need to take back a lot of that development in house. And as they do that, and as they add more developers those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stich them together. >> Yeah, on that point, I want to just quickly ask you on Amazon versus other Clouds because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around and the most quoted line of my story was, the compression algorithm- there's no compression algorithm for experience. Which to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. And so I think this is a really interesting point, just add some color commentary because I think this is a fundamental difference between AWS and others because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do, now you got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience and I think the first thing to know about AWS which is different is, we just come from a different heritage and a different background. We ran a business for a long time that was our sole business that was a consumer retail business that was very low margin. And so, we had to operate at very large scale given how many people were using us but also, we had to run infrastructure services deep in the stack, compute storage and database, and reliable scalable data centers at very low cost and margins. And so, when you look at our business it actually, today, I mean its, its a higher margin business in our retail business, its a lower margin business in software companies but at real scale, it's a high volume, relatively low margin business. And the way that you have to operate to be successful with those businesses and the things you have to think about and that DNA come from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that. So, you know, the way that we think about costs, the way we think about innovation in the data center, um, and I also think the way that we operate services and how long we've been operating services as a company its a very different mindset than operating package software. Then you look at when uh, you think about some of the uh, issues in very large scale Cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. 
And so what I was telling you is, its really different to run your own platform for your own users where you get to tell them exactly how its going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns and we don't get to tell them what to do. And so operating a Cloud like that, at a scale that's several times larger than the next few providers combined is a very different endeavor and a very different operating rigor. >> Well you got to keep raising the bar you guys do a great job, really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I got to ask you about the musical festival because, I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay, >> Yes. >> John: Which is the concert tonight. Its a whole new thing, big music act, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think that what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, its like a 35-40 thousand person concert now. Which means you have to have a location that has multiple stages and, you know, we thought about it last year and when we were watching it and we said, we're kind of throwing, like, a 4 hour music festival right now. There's multiple stages, and its quite expensive to set up that set for a party and we said well, maybe we don't have to spend all that money for 4 hours and then rip it apart because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages and so we thought we would try it this year. We're very passionate about music as a business and I think we-I think our customers feel like we've thrown a pretty good music party the last few years and we thought we would try it at a larger scale as an experiment. And if you look at the economics- >> At the headliners real quick. >> The Foo Fighters are headlining on Saturday night, Anderson Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, um, Willy Porter, its a good set. Friday night its Beck and Kacey Musgraves so it's a really great set of um, about thirty artists and we're hopeful that if we can build a great experience that people will want to attend that we can do it at scale and it might be something that both pays for itself and maybe, helps pay for re:Invent too overtime and you know, I think that we're also thinking about it as not just a music concert and festival the reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology all there together and this will be the first year we try it, its an experiment and we're really excited about it. >> Well I'm gone, congratulations on all your success and I want to thank you we've been 7 years here at re:Invent we've been documenting the history. You got two sets now, one set upstairs. So appreciate you. >> theCUBE is part of re:Invent, you know, you guys really are apart of the event and we really appreciate your coming here and I know people appreciate the content you create as well. 
>> And we just launched CUBE365 on Amazon Marketplace built on AWS so thanks for letting us- >> Very cool >> John: Build on the platform. appreciate it. >> Thanks for having me guys, I appreciate it. >> Andy Jassy the CEO of AWS here inside theCUBE, it's our 7th year covering and documenting the thunderous innovation that Amazon's doing they're really doing amazing work building out the new technologies here in the Cloud computing world. I'm John Furrier, Stu Miniman, be right back with more after this short break. (Outro music)

Published Date : Sep 29 2020

Andy


 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence, you knew what was inside it, where it was built, how it was built, when it was built, you probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. So think of it, kind of like self documenting, Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically what is the Schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store it in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about a, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SPN, who cares? Where are the source files that built, where's the Docker file that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Docker file on your own? Having a label specifically dedicated on how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These kind of all, not only talk about continuous integration, CI but also start to talk about security. Specifically what server built it. 
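(Editor's note: to make the key-value idea concrete before the talk goes further, here is a rough command-line sketch. It is not taken from the demo itself; the image name and values are placeholders, and the keys are the standard org.opencontainers.image annotation keys the talk builds on.)

```bash
# Rough sketch only -- image name and label values are made up.
# Run from a directory that contains a Dockerfile.
docker build \
  --label org.opencontainers.image.authors="jane@example.com" \
  --label org.opencontainers.image.version="1.0.3" \
  --label org.opencontainers.image.created="2020-09-01T12:00:00Z" \
  --label org.opencontainers.image.source="https://github.com/example/flask-demo" \
  --label org.opencontainers.image.revision="a1b2c3d" \
  -t example/flask-demo:1.0.3 .

# Read the labels back off the local image.
docker inspect example/flask-demo:1.0.3 --format '{{ json .Config.Labels }}' | jq .
```

As the talk explains next, the declarative Dockerfile form is usually preferable to ad hoc --label flags; the CLI version above is shown only to illustrate the schema.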
The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example of, in my demo of a policy enforcement. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was a modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of like a standardization. So obviously, Author's, an email address, so now the image is attributable to a person, that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Docker file and all the assets? How it was built, build number, build server the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthZendpoint. Now obviously, the health check directive should be in the Docker file. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's simple declarative And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes? Well, actually, you can and I have a demo to show you how to kind of take advantage of that. So how do we create labels? And really creating labels as a function of build time okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Docker file, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to more a dynamic with build arguments. And I can show you, I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build--label key equals value. I'm not a big fan of the third one, I love the first one and obviously the second one. Being dynamic we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, you can obviously, it's going to output as JSON. So you going to use something like JQ to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. 
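(Editor's note: a minimal sketch of the Dockerfile-plus-build-argument pattern and the two ways of viewing labels described above. The application, image name, and label values are placeholders; in a real pipeline the date, commit, and build number would come from the CI environment rather than the local shell.)

```bash
# Hypothetical Dockerfile: static labels plus ARGs filled in at build time.
cat > Dockerfile <<'EOF'
FROM alpine:3.12
ARG BUILD_DATE
ARG GIT_COMMIT
ARG BUILD_NUMBER
LABEL org.opencontainers.image.authors="jane@example.com" \
      org.opencontainers.image.title="flask-demo" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.example.build.number="${BUILD_NUMBER}"
EOF

# Build, passing in the dynamic values.
docker build \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
  --build-arg BUILD_NUMBER="${BUILD_NUMBER:-local}" \
  -t example/flask-demo:dev .

# Way 1: inspect the local image and crack the JSON open with jq.
docker inspect example/flask-demo:dev | jq '.[0].Config.Labels'

# Way 2: once the image is pushed, query the registry directly with skopeo -- no pull needed.
skopeo inspect docker://docker.io/example/flask-demo:dev | jq '.Labels'
```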
One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all this extra levels of abstraction inherently, if you use it as a label with a kubectl apply, It's just built in. It's kind of like the kiss approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Docker file. I've got my compose and my Kubernetes YAML. So let's take a look at the Docker file, right? So it's a simple Alpine image. The org statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and get commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have built. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub so such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries and non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says image required, required image label alert, right? Request that the maintainer, add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in. 
So if we go back to our repo, we didn't pass in the the build time argument, we just passed in the word. So let's fix that real quick. That's the Docker file. Let's go ahead and put our dollar sign in their. First day with the fingers you going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date and the date gets derived on the command line. With the build arguments, there's the base64 encoded of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom layer exists and successful. So here's where we can see no system policy violations profound marking stack regimes security plugin, build step as successful, okay? So we're actually able to do policy enforcement that that image exists, that that label sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there's it is in my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, author's build, commit number, look at the commit number. It was built today build number 12. We saw that right? Delete, build 12. So that's kind of cool dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdockerketers label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard end. So now we've actually used that label. From the image that we've queried with skopeo, from a remote registry to deploy locally to our Kubernetes cluster. So let's go ahead and look everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.doctor.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer 'cause the background image in the text. We can go ahead and refresh it a couple times we've got Redis storing a hit counter. We can see that our server name is roundrobing. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 Vms. I'm using K3s instead of full Kubernetes either full Rancher, full Open Shift or Docker Enterprise. 
I think K3s has some really interesting advantages on the development side, and it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels, is that none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system, Jenkins, CircleCI, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category. So I'd like to, again, point you guys to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music)
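(Editor's note: for reference, the encode-at-build-time, decode-at-deploy-time trick at the heart of this demo boils down to something like the sketch below. The label key org.example.kubernetes and the file names are assumptions; the Dockerfile is assumed to declare a matching ARG and LABEL for KUBE_YAML; and base64 -w0 / base64 -d assume GNU coreutils, as on the Ubuntu hosts used in the demo.)

```bash
# At build time: base64-encode the manifest and bake it into a label.
# (The Dockerfile needs: ARG KUBE_YAML  and  LABEL org.example.kubernetes="${KUBE_YAML}")
docker build \
  --build-arg KUBE_YAML="$(base64 -w0 kubernetes.yml)" \
  -t example/flask-demo:dev .
docker push example/flask-demo:dev

# Later, from any workstation: read the label straight off the registry and apply it.
skopeo inspect docker://docker.io/example/flask-demo:dev \
  | jq -r '.Labels["org.example.kubernetes"]' \
  | base64 -d \
  | kubectl apply -f -
```

The appeal of this approach, as the talk argues, is that the deployment manifest travels with the image itself: no extra chart repository or templating layer is needed to get from a registry query to a running workload.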

Published Date : Sep 28 2020


On Demand: Mirantis OpenStack on K8s


 

>> Hi, I'm Adrienne Davis, Customer Success Manager on the CFO-side of the house at Mirantis. With me today is Artem Andreev, Product Manager and expert, who's going to enlighten us today. >> Hello everyone. It's great to hear all of you listening to our discussion today. So my name is Artem Andreev. I'm a Product Manager for Mirantis OpenStack line of products. That includes the current product line that we have in the the next generation product line that we're about to launch quite soon. And actually this is going to be the topic of our presentation today. So the new product that we are very, very, very excited about, and that is going to be launched in a matter of several weeks, is called Mirantis OpenStack on Kubernetes. For those of you who have been in Mirantis quite a while already, Mirantis OpenStack on Kubernetes is essentially a reincarnation of our Miranti Cloud Platform version one, as we call it these days. So, and the theme has reincarnated into something more advanced, more robust, and altogether modern, that provides the same, if not more, value to our customers, but packaged in a different shape. And well, we're very excited about this new launch, and we would like to share this excitement with you Of course. As you might know, recently a few months ago, Mirantis acquired Docker Enterprise together with the advanced Kubernetes technology that Docker Enterprise provides. And we made this technology the piece and parcel of our product suite, and this naturally includes OpenStack Mirantis, OpenStack on Kubernetes as well, since this is a part of our product suite. And well, the Kubernetes technology in question, we call Docker Enterprise Container Cloud these days, I'm going to refer to this name a lot over the course of the presentation. So I would like to split today's discussions to several major parts. So for those of you who do not know what OpenStack is in general, a quick recap might be helpful to understand the value that it provides. I will discuss why someone still needs OpenStack in 2020. We will talk about what a modern OpenStack distribution is supposed to do to the expectation that is there. And of course, we will go into a bit of details of how exactly Mirantis OpenStack on Kubernetes works, how it helps to deploy and manage OpenStack clouds. >> So set the stage for me here. What's the base environment we were trying to get to? >> So what is OpenStack? One can think of OpenStack as a free and open source alternative to VMware, and it's a fair comparison. So OpenStack, just as VMware, operates primarily on Virtual Machines. So it gives you as a user, a clean and crispy interface to launch a virtual VM, to configure the virtual networking to plug this VM into it to configure and provision virtual storage, to attach to your VM, and do a lot of other things that actually a modern application requires to run. So the idea behind OpenStack is that you have a clean and crispy API exposed to you as a user, and alters little details and nuances of the physical infrastructure configuration provision that need to happen just for the virtual application to work are hidden, and spread across multiple components that comprise OpenStack per se. So as compared again, to a VMware, the functionality is pretty much similar, but actually OpenStack can do much more than just Vms, and it does that, at frankly speaking much less price, if we do the comparison. So what OpenStack has to offer. 
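(Editor's note: to make that "clean and crispy API" concrete, a typical session against an OpenStack cloud with the unified CLI might look roughly like this. The network, image, and flavor names are placeholders for whatever a given cloud actually offers, and credentials are assumed to be sourced already, e.g. from an openrc file.)

```bash
# Virtual networking
openstack network create demo-net
openstack subnet create demo-subnet --network demo-net --subnet-range 192.168.10.0/24

# A virtual machine plugged into that network
openstack server create --image ubuntu-20.04 --flavor m1.small --network demo-net demo-vm

# Virtual block storage, attached to the VM
openstack volume create --size 10 demo-data
openstack server add volume demo-vm demo-data
```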
Naturally, the virtualization, networking, storage systems out there, it's just the basic entry level functionality. But of course, what comes with it is the identity and access management features, or practical user interface together with the CLI and command line tools to manage the cloud, orchestration functionality, to deploy your application in the form of templates, ability to manage bare metal machines, and of course, some nice and fancy extras like DNSaaS service, Metering, Secret Management, and Load Balancing. And frankly speaking, OpenStack can actually do even more, depending on the needs that you have. >> We hear so much about containers today. Do applications even need VMs anymore? Can't Kubernetes provide all these services? And even if IaaS is still needed, why would one bother with building their own private platform, if there's a wide choice of public solutions for virtualization, like Amazon web services, Microsoft Azure, and Google cloud platform? >> Well, that's a very fair question. And you're absolutely correct. So the whole trend (audio blurs) as the States. Everybody's talking about containers, everybody's doing containers, but to be realistic, yes, the market still needs VMs. There are certain use cases in the modern world. And actually these use cases are quite new, like 5G, where you require high performance in the networking for example. You might need high performance computing as well. So when this takes quite special hardware and configuration to be provided within your infrastructure, that is much more easily solved with the Vms, and not containers. Of course not to mention that, there are still legacy applications that you need to deal with, and that well, they have just switched from the server-based provision into VM-based provision, and they need to run somewhere. So they're not just ready for containers. And well, if we think about, okay, VMs are still needed, but why don't I just go to a public infrastructure as a service provider and run my workloads there? Now if you can do that, but well, you have to be prepared to pay a lot of money, once you start running your workloads at scale. So public IaaSes, they actually tend to hit your pockets heavily. And of course, if you're working in a highly regulated area, like enterprises cover (audio blurs) et cetera, so you have to comply with a lot of security regulations and data placement regulations. And well, public IaaSes, let's be frank, they're not good at providing you with this transparency. So you need to have full control over your whole stack, starting from the hardware to the very, very top. And this is why private infrastructure as a service is still a theme these days. And I believe that it's going to be a theme for at least five years more, if not more. >> So if private IaaSes are useful and demanded, why does Mirantis just stick to the OpenStack that we already have? Why did we decide to build a new product, rather than keep selling the current one? >> Well, to answer this question, first, we need to see what actually our customers believe more in infrastructure as a service platform should be able to provide. And we've compiled this list into like five criteria. 
Naturally, private IaaS needs to be reliable and robust, meaning that whatever happens on the underneath the API, that should not be impacting the business generated workloads, this is a must, or impacting them as little as possible, the platform needs to be secure and transparent, going back to the idea of working in the highly regulated areas. And this is again, a table stake to enter the enterprise market. The platform needs to be simple to deploy (audio blurs) 'cause well, you as an operator, you should not be thinking about the internals, but try to focus in on enabling your users with the best possible experience. Updates, updates are very important. So the platform needs to keep up with the latest software patches, bug fixes, and of course, features, and upgrading to a new version must not take weeks or months, and has as little impact on the running workloads as possible. And of course, to be able to run modern application, the platform needs to provide the comparable set of services, just as a public cloud so that you can move your application across your terms in the private or public cloud without having to change it severally, so-called the feature parity, it needs to be there. And if we look at the architecture of OpenStack, and we know OpenStack is powerful, it can do a lot. We've just discussed that, right? But the architecture of OpenStack is known to be complex. And well, tell me, how would you enable the robustness and robustness and reliability in this complex system? It's not easy, right? So, and actually this diagrams shelves, just like probably a third part of the modern update OpenStack cloud. So it's just a little illustration. It's not the whole picture. So imagine how hard it is to make a very solid platform out of this architecture. And well, naturally this also imposes some challenges to provide the transparency and security, 'cause well, the more complex the system is, the harder it is to manage, and well the harder it is to see what's on the inside, and well upgrades, yeah. One of the biggest challenges that we learned from our past previous history, well that many of our customers prefer to stay on the older version of OpenStack, just because, well, they were afraid of upgraded, cause they saw upgrades as time-consuming and risky and divorce. And well, instead of just switching to the latest and greatest software, they preferred reliability by sticking to the old stuff. Well, why? Well, 'cause potentially that meant implied certain impact on their workloads and well an upgrade required thorough planning and execution, just to be as as riskless as possible. And we are solving all of these challenges, of managing a system as complex as OpenStack is with Kubernetes. >> So how does Kubernetes solve these problems? >> Well, we look at OpenStack as a typical microservice architecture application, that is organized into multiple little moving parts, demons that are connected to each other and that talk to each other through the standard API. And altogether, that feels as very good feet to run on top of a Kubernetes cluster, because many of the modern applications, they fall exactly on the same pattern. >> How exactly did you put OpenStack on Kubernetes? >> Well, that's not easy. I'm going to be frank with you. And if you look at the architectural diagram, so this is a stack of Miranda's products represented with a focus of course, on the Mirantis OpenStack, as a central part. So what you see in the middle shelving pink, is Mirantis OpenStack on Kubernetes itself. 
And of course around that are supporting components that are needed to be there, to run OpenStack on Kubernetes successfully. So on the very bottom, there is hardware, networking, storage, computing, hardware that somebody needs to configure provision and manage, to be able to deploy the operating system on top of it. And this is just another layer of complexity that abstracts the Mirantis OpenStack on Kubernetes just from the under lake. So once we have operating system there, there needs to be a Kubernetes cluster, deployed and managed. And as I mentioned previously, we are using the capabilities that this Kuberenetes cluster provides to run OpenStack itself, the control plane that way, because everything in Mirantis OpenStack on Kuberentes is a container, or whatever you can think of. Of course naturally, it doesn't sound like an easy task to manage this multi-layered pie. And this is where Docker Enterprise Container Cloud comes into play, 'cause this is our single pane of glass into day one and day two operations for the hardware itself, for the operating system, and for Docker Enterprise Kubernetes. So it solves the need to have this underlay ready and prepared. And once the underlay is there, you go ahead, and deploy Mirantis OpenStack on Kubernetes, just as another Kubernetes application, application following the same practices and tools as you use with any other applications. So naturally of course, once you have OpenStack up and running, you can use it to create your own... To give your users ability to create their own private little Kubernetes clusters inside OpenStack projects. And this is one of the measure just cases for OpenStack these days, again, being an underlay for containers. So if you look at the operator experience, how does it look like for a human operator who is responsible for deployment the management of the cloud to deal with Mirantis OpenStack on Kubernetes? So first, you deploy Docker Enterprise Container Cloud, and you use the built-in capabilities that it provides to provision your physical infrastructure, that you discover the hardware nodes, you deploy operating system there, you do configuration of the network interfaces in storage devices there, and then you deploy Kubernetes cluster on top of that. This Kubernetes cluster is going to be dedicated to Mirantis OpenStack on Kuberenetes itself. So it's a special (indistinct) general purpose thing, that well is dedicated to OpenStack. And that means that inside of this cluster, there are a bunch of life cycle management modules, running as Kubernetes operators. So OpenStack itself has its own LCM module or operator. There is a dedicated operator for Ceph, cause Ceph is our major storage solution these days, that we integrate with. Naturally, there is a dedicated lifecycle management module for Stack Light. Stack Light is our operator, logging monitoring alerting solution for OpenStack on Kubernetes, that we bundle toegether with the whole product suite. So Kubernetes operators, directly through, it keeps the TL command or through the practical records that are provided by Docker Enterprise Container Cloud, as a part of it, to deploy the OpenStack staff and Stack Light clusters one by one, and connect them together. So instead of dealing with hundreds of YAML files, while it's five definitions, five specifications, that you're supposed to provide these days and that's safe. And although data management is performed through these APIs, just as the deployment as easily. 
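(Editor's note: as a rough illustration of that operator-driven workflow, and only an illustration: the resource kind, API group, and field names below are assumptions for the sketch, not the product's documented schema. The point is that the day-one and day-two experience comes down to applying a handful of small custom resources and letting the lifecycle-management operators reconcile everything else, including spreading the control plane onto newly labeled workers.)

```bash
# See what the lifecycle-management operators expose, and where the control plane runs.
kubectl get crds | grep -i openstack
kubectl -n openstack get pods

# One small custom resource stands in for hundreds of hand-written YAML files.
cat <<'EOF' | kubectl apply -f -
apiVersion: lcm.example.com/v1alpha1   # hypothetical group/version
kind: OpenStackDeployment              # hypothetical kind
metadata:
  name: osh-dev
  namespace: openstack
spec:
  openstack_version: ussuri            # assumed field names
  size: tiny
EOF

# Scaling the control plane out is ordinary Kubernetes scheduling:
# label a new worker (label key assumed) and the operators redistribute services.
kubectl label node worker-3 openstack-control-plane=enabled
kubectl -n openstack get pods -o wide
```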
>> All of this assumes that OpenStack has containers. Now, Mirantis was containerizing back long before Kubernetes even came along. Why did we think this would be important? >> That is true. Well, we've been containerizing OpenStack for quite a while already, it's not a new thing at all. However, it is the way that we deploy OpenStack as a Kubernetes application that matters, 'cause Kubernetes solves a whole bunch of challenges that we used to deal with, with MCP1, when deploying OpenStack on top of bare operating systems as packages. So, naturally, Kubernetes allows us to achieve reliability through the self-healing and auto-scaling mechanisms. You define a bunch of policies that describe the behavior of the OpenStack control plane, and Kubernetes follows these policies when things happen, without any need for human interaction. Isolation of the dependencies of OpenStack services within Docker images is a good thing, 'cause previously we had to deal with packages and conflicts between the versions of different libraries. Now we just ship everything together as a Docker image. And rolling updates are an advanced feature that Kubernetes provides natively, so updating OpenStack has never been as easy as with Kubernetes. Kubernetes also provides some fancy building blocks for networking, like load balancing and, of course, tunnels and service meshes. They're also quite helpful when dealing with such a complex application as OpenStack, where things need to talk to each other without any problem in the configuration. Helm also plays a great role here; it's effectively our tooling for Kubernetes. We're using the Helm bundles that are provided for OpenStack upstream as our low-level layer of logic to deploy OpenStack services and connect them to each other. And then, naturally, automatic scale-up of the control plane: adding a node is easy, you just add a new Kubernetes worker with a bunch of labels there and, well, it handles the distribution of the necessary services automatically. Naturally, there are certain drawbacks. These fancy features come at a cost. Human operators need to understand Kubernetes and how it works. But this is also a good thing, because everything is moving towards Kubernetes these days, so you would have to learn at some point anyway. So you can use this as a chance to bring yourself to the next level of knowledge. OpenStack is not a 100% cloud-native application by itself. Unfortunately, there are certain components that are stateful, like databases, or Nova compute services, or Open vSwitch daemons, and those have to be dealt with very carefully when doing upgrades, updates, and the whole deployment. So there's extra life cycle management logic built in that handles these components carefully for you. So, a bit of complexity we had to have. And naturally, Kubernetes requires resources of its own to run. So you need to have these resources available and dedicated to the Kubernetes control plane, to be able to control your application, that is, all of OpenStack. So a bit of investment is required. >> Can anybody just containerize OpenStack services and get these benefits? >> Well, yes, the idea is not new, there's a bunch of upstream community projects doing pretty much the same thing. So we are not inventing a rocket here, let's be fair.
However, it's the way that you cook OpenStack with Kubernetes that gives you the robustness and reliability that enterprise and, like, big customers actually need. And we're doing a great deal of a job automating all the possible day-two workflows and all the caveats and complexities of OpenStack management inside our products. Okay, at this point, I believe we shall wrap this discussion up a bit. So let me conclude for you. OpenStack is an open source infrastructure-as-a-service platform that still has its niche in the 2020s, and it's going to have its niche for at least five years. OpenStack is a powerful but very complex tool. And the complexities of OpenStack and OpenStack lifecycle management are successfully solved by Mirantis through the capabilities of our Kubernetes distribution, which provides us with all the necessary primitives to run OpenStack just as another containerized application these days.
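As a companion to the points above about rolling updates and adding labeled worker nodes, here is a minimal sketch of how one containerized OpenStack control-plane service could be run as a standard Kubernetes Deployment. The image reference and label names are illustrative assumptions; the point is simply that the update strategy and placement by node label come from Kubernetes primitives rather than custom tooling.

    # A minimal sketch, not the vendor's actual manifests: one control-plane
    # service (a hypothetical Keystone API image) managed by a Deployment.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keystone-api
      namespace: openstack
    spec:
      replicas: 3
      selector:
        matchLabels:
          application: keystone
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1             # update one API pod at a time
      template:
        metadata:
          labels:
            application: keystone
        spec:
          nodeSelector:
            openstack-control-plane: enabled   # label added to new worker nodes
          containers:
          - name: keystone-api
            image: registry.example.com/openstack/keystone:ussuri   # illustrative image
            ports:
            - containerPort: 5000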

Published Date : Sep 14 2020



Tobi Knaup, D2iQ | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back, I'm Stu Miniman and my Co-host is John Troyer. And you're watching theCUBE here in day two of our coverage of KubeCon and CloudNativeCon. And joining me is Tobi Knaup who is the co-founder and CTO of D2iQ. See what I did there, Tobi? >> That's right, I love it. >> Alright. So Tobi, first of all, KubeCon, of course D2iQ, last year when we were here it was Mesosphere, so give us a little bit, you've been to lots of customer meetings, 12,000 people in attendance, tell us a little bit about the energy and how your team's finding the show so far. >> Yeah, obviously biggest KubeCon so far and it's just amazing how far this community has come, how it's grown. How many projects are part of it now, how many vendors here, too. You know two expo halls with different booths and you know, I think it just shows how important this community, this ecosystem is. When customers come to us and say they want to work with Kubernetes the community's why they're really doing it. >> Yeah, it is a great community, great vibe for people that aren't already in it. It's easy to get started, but one of the big themes we're hearing here is simplicity, how to make it easier to get going and once they get going, what happens after day one? That's some of the rebranded pieces. So for our audience, explain a little bit, why the rebrand focus of the company, Day 2 operations, absolutely something that I hear a lot of discussion on and why is your team specifically well positioned for that environment. >> No absolutely, so the rebrand we did because obviously our old company named Mesosphere has Mesos in it. That's the open source product we started with. But we've been doing a lot more than that actually for many years, right? We help customers run Apache Kafka and Spark and Cassandra. We've been doing a lot with Kubernetes also for some time now and even more so now. So having one particular technology in the company name was holding us back, right. People just put us in that box but we're doing so much more. So that was the reason for the rebrand and so, we wanted a name that doesn't have a particular technology in it and so we're looking for what is really expressed, what we do, what we help our customers with? And we've always been focused on Day 2 operations, so everything that happens after the initial install. How do you monitor things properly, upgrade them and so on? So that's why we loved that Day 2 concept. And then the IQ really stands for a couple of things. First of all we try to put a lot of automation into our products, so make those products smart to help our customers. But more importantly too, when we look at the ecosystem as a whole, where are most customers at, where are most companies at. Well, they're still early in their cloud-native journey and they need to get up to speed, they need to get smart about cloud-native and about Day 2 operations and so that's the IQ piece. We want to help our customers become smart about this space, get educated and then learn to do cloud-native. >> So Tobi, one of the things that fascinates me about the Kubernetes ecosystem is that people bring stuff to the table. Kubernetes is here, that's evolving. Other companies, entities, projects are coming to the table with other open source concepts and solving problems that they have in the field. 
At D2iQ, when you were Mesosphere, you have years of experience dealing with production issues, scaling management, all these sorts of really, really fascinating cloud-native problems, so you bring a lot of experience to the table. So one of the projects that you are now working on, and working with your customers and partners and the bigger ecosystem on, is a way of approaching operators. The concept of bringing this kind of lifecycle automation to applications and helping with all these Day 2 problems. Can you talk a little about that? So KUDO is the name of the framework, I guess. Can you talk a little bit about that and how you're bringing that here to sit at the table and what some people's experiences with that are and what they are using it for? >> Absolutely, yeah, so these data services, these stateful workloads like Kafka, Cassandra and Spark, that's been in our DNA for a very long time. In fact, a little known fact, Apache Spark was originally a demo application for Apache Mesos. That's how it started originally. Obviously, it took off. So, we've been doing that since even before we were a company. And we've been helping our customers on top of Mesos with running these complex data stacks, and there's some equivalent of operators on top of Mesos called frameworks. So we've been building these frameworks and we realized it's a little too hard to build these things. We typically had to write thousands of lines of code, 10, 20,000 sometimes, and it took too long. So what we actually did on Mesos many years ago is we extracted the common patterns from those frameworks and built them into a library, and made it so you can actually build a framework with just configuration, with just YAML, so it's a language that allows you to essentially sequence your operations into phases and steps, kind of like you would write a run book that a human operator takes and then goes through, right? So when we looked at the Kubernetes Operator space, we saw some of those same challenges that we had faced years ago. Building a Kubernetes Operator requires you to write a lot of code. Not every company has Go programmers, people that are skilled enough in Kubernetes that they can write an operator. And more importantly too, once you write those 10,000 lines of code or more, you also have to maintain them. You have to keep up with API changes, and so a lot of folks we talked to at KubeCon last year, and customers, said it's just too hard to build operators. The other side of that too, is folks said it's a little too hard to use those operators, because very common use cases, you build a data pipeline. That means you'll be using multiple different operators, say Kafka, Cassandra and Spark. So if those all have different APIs, that's pretty hard to manage. So we wanted to simplify that. We wanted to create an alternative way for building operators that doesn't require you to learn Go, doesn't require you to write code; it works with just this orchestration language that KUDO offers, and then for the KUDO users, the API is the same across these different operators. It has a plugin for kubectl, so you can interface with all the different operators through that. So yeah, simplicity and a great developer experience are the keys here. >> Tobi, I was wondering maybe you bring us inside the personas you target with this type of solution. As we've seen the maturation of this space, the first couple of years I came, it felt very infrastructure heavy. The last year or two, there's more of the AppDev discussion there.
They don't always speak the same languages. Looks like you've got some tooling here to help simplify that environment and make it easier because of course your application developers don't want to worry about that stuff. That's the promise of things like serverless, or just we're going to take care of that and stats and whatnot, so where specifically do you target and what are you hearing from customers as to how they're sorting through these organizational changes? >> Yeah, so I think ultimately, everybody kind of wants a platform as a service in some way, right? If you're building an app for your business, you don't want to think about, how do I provision this database, how to do that? And obviously, I can go to a public cloud and I can use all those public cloud services but what a lot of folks are doing now is they're running on various different types of infrastructure. They're running on multiple public clouds. They're running on the Edge. We work with a lot of customers that have a need to deploy these data services, these operators in Edge locations, on the manufacturing floor in a factory, for instance. Or on a cruise ship, that's one company we're working with. So, how do you bring this API-driven deployment of these services to all these different types of locations? And so that's what we try to achieve with KUDO for the data services and then with our other products too, like Kommander, which is a multi-cluster control plane. It's about when organizations have all these different clusters. And very typically they get into the dozens or even hundreds of clusters fast. How do you then manage that? How do you apply configuration consistently across these clusters? Manage your secrets and RBAC rules and things like that? So those are all the Day 2 things that we try to help customers with. There's a little bit of a tension there sometimes, right? Because the great thing about Kubernetes is it's great for developers. It has a nice API, people love the API. People are very quick to adopt it, right? They try it out on their laptop, they setup their first cluster. That typically goes very fast and they very quickly have their first app running. So it happens organically, right? But every large organization also has a need to put the right governance in place, right? How I keep those clusters secure? How do I meet my regulatory requirements? How do I make sure I can upgrade those clusters fast, if I need to fix a security issue and so on? So there's that tension between the governance, the central IT and what the developers want to do. We try to strike a balance there with our products to give developers the agility that cloud-native promises but at the same time, give the IT folks the right controls so they can meet their requirements. >> Tobi, here at the show this year, obviously bigger and a lot more folks at different parts of their cloud-native journey. Again, with the experience you all have, as you talk to folks this year, obviously people are clearly in production. You talk about some of the governance issues, is there anything you can say about either what you think is going to make for a successful partnership with you and a successful customer? What qualities do you need to have by the time you're growing up in production and then also as they're making choices here, what should the end users be looking at? >> Right, so one of the things we realized over the years is actually cloud-native is a journey. Every organization is somewhere else on that journey. 
And you said partnership, I think that's the key word here. We want to partner with our customers because we realize that this stuff is complicated, right? And it's actually, for us as a company, our journey has been kind of interesting because we started at this large-scale spot, right? Before we were even a company, we were running these clusters with tens of thousands of nodes. These large online services at Twitter and other companies, that's where we started and that's where our first product kind of landed. That large scale is what we're known for, but most organizations out there are much earlier in their journey to cloud-native. And so, what we realized is that we really need to partner with folks even at the very first steps, where they're just getting educated about this space, right? What are containers? How are they different from VMs? What is this cluster management thing, right? How does this all fit together? So we try to hold our customers' hands, catch them where they are. Besides all of the software that we're building, we also offer trainings, for example. And so we just try to have the conversation with the customer. Figure out what their needs are, whether that's training, whether that's services or different products. And the different products that come together in our Kubernetes product line, they're really designed to meet the customer at these different stages. There's Konvoy, that's our Kubernetes distribution, get your first project up and running. Then once you get a little bit more sophisticated, you probably want to do CI/CD. So we have an upcoming product for that, it's called Dispatch. Pretty excited about it. The data services with KUDO. Folks typically add that next, and then very quickly you have these dozens or hundreds of clusters. Now, you need Kommander, right? So we try to fit that all together. Meet the customer where they are, and I think education is a big piece of that. >> All right, Tobi, we want to give you the final word. You talked about some of the things coming out here, so just give us your viewpoint of the broader ecosystem as to what next things need to be done to help even further the journey that we're all on? >> Yeah, I think in terms of next things, there's a lot of interest around operators. Well, operators as the implementation, but really what's happening is, people are running more and more different workloads on top of Kubernetes, right? And I think that's where a lot of the work is going to happen over the next year. There's some discussions in the CNCF now even. What is an operator? How do we define that? Is it something fairly broad? Is it something fairly specific? But Kubernetes is definitely the de facto standard for doing cloud-native, and people are putting it in a lot of different environments. They're putting it in Edge locations. So I think we need to figure out how do you have a sane sort of development workflow for these types of deployments? How do you define an application that might actually run on multiple different clusters? So I think there's going to be a lot of talk. Operators obviously, but also on the developer side, in a layer above Kubernetes, right? How can I just define my application in a way where I say maybe just run this thing in a highly available way on two different cloud providers, instead of saying specifically it needs to go here, it needs to go there? Or deploy this thing in a follow-the-sun model or whatever that is.
So I think that's where a lot of the conversations are going to happen, is that level above. >> All right, well Tobi, appreciate the updates. Congratulations on the progress and definitely look forward to catching more from you and the D2iQ team in the near future. >> Thank you very much for having me. >> All right, for John Troyer, I'm Stu Miniman, lots more to come. Thanks for watching theCUBE. (light music)
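To ground the KUDO discussion above, the "phases and steps" orchestration Tobi describes is expressed as YAML in a KUDO operator package. The sketch below shows the general shape as best it can be reconstructed here; the names are examples, and the exact fields should be checked against the KUDO documentation rather than taken from this transcript.

    # Rough sketch of a KUDO operator definition (operator.yaml). Field names
    # follow the general KUDO package format; treat them as an approximation.
    apiVersion: kudo.dev/v1beta1
    name: my-data-service          # illustrative operator name
    operatorVersion: "0.1.0"
    tasks:
      - name: deploy-app
        kind: Apply
        spec:
          resources:
            - statefulset.yaml     # plain Kubernetes templates shipped with the operator
    plans:
      deploy:
        strategy: serial           # run phases one after another
        phases:
          - name: main
            strategy: parallel
            steps:
              - name: everything
                tasks:
                  - deploy-app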

Published Date : Nov 20 2019



Parag Dave, Red Hat | AnsibleFest 2019


 

>> Narrator: Live from Atlanta, Georgia, it's theCUBE, covering Ansible Fest 2019. Brought to you by Red Hat. >> Welcome back, this is theCUBE's live coverage of Ansible Fest 2019, here in Atlanta, Georgia. I'm Stu Miniman, my co-host is John Furrier and we're going to dig in and talk a bit about developers. Our guest on the program, Parag Dave, who is senior principal product manager with Red Hat. Thank you so much for joining us. >> Glad to be here, thanks for having me. >> Alright, so configuration management, really maturing into an entire automation journey for customers today, let's get into it. Tell us a little bit about your role and what brings you to the event. >> Yeah, so I actually have a very deep background in automation. I started by doing workflow automation, which is basically about how to help businesses do their processing. So, from processing an invoice, how do I create the flows to do that? And we saw the same thing, like automation was just kind of like an operational thing and was brought on just to fulfill the business, make it faster, and next thing you know it grew like, I don't know, like wildfire. I mean it was amazing and we saw the growth, and people saw the value, people saw how easy it was to use. Now, I think that combination is kicking in. So, now I'm focusing more on developers and the dev tools we use at Red Hat, and it's the same thing. >> You know, Parag, when you look in IT, you know automation is not a new term. It's like we've been talking about this for decades. Talk to us a little bit about how it's different today and, you know, you talked about some of the roles that are involved here, how does Ansible end up being a developer tool? >> Yeah, you know, it's very interesting, because Ansible was never really targeted at developers, right? And in fact, automation was always considered like an operational thing. Well, now what has happened is, the entire landscape of IT in a company is available to be executed programmatically. Before, interfaces were only available for a few programs. Everything else you had to kind of write your own programs to do, but now with the advent of APIs, you know, with really rich CLIs, it's very easy to interact with anything, and not just, like, in software: you can interact with the network devices, with your infrastructure, with your storage devices. So, all of the sudden when everything became available, developers who were trying to create applications and needed environments to test, to integrate, saw that automation is a great way to create something that can be replicated and be consistent every time you run it. So, the need for consistency and replication drove developers to adopt Ansible. And, you know, with Ansible we never marketed to developers, and then we see that wow, they are really pulling it down, it's great. The whole infrastructure as code, which is one of the key pillars for DevOps, has become one of the key drivers for it, because now what you are seeing is the ability for developers to say that I can now, when I'm done with my coding and my application is ready for say a test environment or a staging environment, I can now provision everything I need, right from configuring my network devices, getting the infrastructure ready for it, run my tests, bring it down, and I can do all of that through code, right? So, that really drives the adoption for Ansible.
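Since the conversation keeps coming back to how approachable YAML playbooks are for developers, here is a minimal sketch of the kind of playbook a developer might run to stand up and configure a disposable test environment. The host group, package, and file names are illustrative assumptions, not anything prescribed in the interview.

    # A minimal sketch of a developer-facing playbook; names are examples.
    ---
    - name: Prepare a test environment for the application
      hosts: test_servers
      become: true
      tasks:
        - name: Install the packages the app needs
          ansible.builtin.package:
            name:
              - nginx
              - python3
            state: present

        - name: Render the application configuration from a template
          ansible.builtin.template:
            src: app.conf.j2
            dest: /etc/myapp/app.conf
          notify: Restart the app

      handlers:
        - name: Restart the app
          ansible.builtin.service:
            name: myapp
            state: restarted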
>> And cloud scale has shown customers that scale, whether it's on-premises or cloud or Edge, is really going to be a big factor in their architecture. The other thing that's interesting, and Stu and I were talking about this on our opening yesterday, is that you have the networking at the bottom of that stack moving up the stack and you have the applications kind of wanting to move down the stack. So, they're kind of meeting in the middle in this programmability in between them. You know, Containers, Kubernetes, Microservices, it's developing as a nice middle layer between those two worlds. So, the networks have to telegraph up data and also be programmable, and this is causing a lot of disruption and innovation. >> Parag: Absolutely. >> Your thoughts on this, 'cause it's DevSecOps meets DevOps, that's DevOps. This is now all coming together. >> Exactly, and what's happening is, what we are seeing with developers is that there's a lot more empowerment going on. You know, before there were like a lot of silos, there were like a lot of checks and balances in place that kind of made it hard to do things. It was, okay, developers, you write code, we will worry about all this. And now, this whole blending has happened and developers are being empowered to do it. And now, the empowerment is great, and with great power comes great responsibility. So, can you please make sure that, you know, what you're using is enterprise grade, that you're not just doing things that will break environments. So, once everybody becomes comfortable that yes, by merging these things together, we're actually not breaking things, you're actually increasing speed, 'cause what's the number one driver right now for organizations? It's speed with security, right? Can I achieve that business agility, so that from the time I need a feature developed to the time I need that feature delivered in production, I can close that gap? I cannot have a long gap between that. So, we are seeing a lot of that happening. >> People love automation, they love AI. These are two areas that, it's a no-brainer. When you have automation, you talk AI, yeah bring it on, right? What does that mean? So, when you think about automation, the infrastructure that's in the hands of the operators, but also they want to enable applications to do it themselves as well, hence the DevOps. Where is the automation focus? Because that's the number one question. How do I land, get the adoption, and then expand out across? This seems to be the formula that Ansible's kind of cracked the code on. The organic growth has been there, but now as a large enterprise comes in, I got to get the developers using it and it's got to be operator friendly. This seems to be the key, >> The balance has to be there >> the key to the kingdom. >> Yeah, no, you're absolutely right. And so, when you look at it, like what do developers want? Something that is frictionless to use, very quick, very easy, so that I don't have to spend a lot of time learning it and doing it, right? And so we saw that with Ansible. It's like the fact that it's so easy to use, and most of everything is in YAML, which is very needed for developers, right? So, we see that from their perspective, they're very eager now, and they've been adopting it; if you look at the download stats it tells you. Like there's a lot of volume happening in terms of developers adopting it.
What companies are now noticing is that, wait, that's great, but now we have a lot of developers doing their own thing. So, there is now a need for a way of bringing all this together, right? So, it's like if I have 20 teams in one line of business and each team tries to do things their own way, what I'm going to end up with is a lot of work that gets repeated, I'd say it's duplicated. So, that's what we are seeing with collections, for example. What Ansible is trying to bring to the table is, okay, how do I help you kind of bring things under one umbrella? And how can I help you as a developer decide that, wow, I got like 100-plus NGINX roles I can use in Ansible, well, which one do I pick? And you pick one, somebody else picks something else, somebody creates a playbook with like one separate, you know, one different thing in it versus yours. How do we get our hands around it? And I think that's where we are seeing that happen. >> Right, from an open source standpoint, I see Red Hat, Ansible doing great stuff, and for the folks in the ivory tower, the executive CXOs, they hear Ansible, glue layer, integration layer, and they go, wait a minute, isn't that Kubernetes? Isn't Kubernetes supposed to provide all this stuff? So, talk about where Ansible fits in the wave that's coming with Kubernetes. Pat Gelsinger at VMware thinks Kubernetes is going to be the dial-tone, it's going to be like the TCP/IP-like protocol, to use his words, but there's a relationship that Ansible has with those microservices that are coming. Can you explain that fit? >> You hit the nail on the head. Like, Kubernetes is, we call it the new operating system. It's like that's what everything runs on now, right? And it's very easy for us, you know, from a development perspective to say, great, I have my containers, I have my applications built, I can bring them up on demand, I don't have to worry about, you know, having the whole stack of an operating system delivered every time. So, Kubernetes has become like the de facto standard upon which things run. So, one of the concepts that has really caught a lot of momentum is the operator framework, right? Which was introduced with Kubernetes, the later release of 3.x. So with that operator framework, it's very easy now for application teams. I mean, there's now a great uptake from software vendors themselves. How do I give you my product, that you can very easily deliver on Kubernetes as a container, but I'll give you enough configuration options, you can make it work the way you want to. So, we saw a lot of software vendors creating and delivering their products as operators. Now we are seeing that a lot of software application developers themselves, for their own applications, want to create operators. It's a very easy way of actually getting your application deployed onto Kubernetes. So, Ansible operators are one of the easiest ways of creating an operator. Now, there are other options. You can do a Golang operator, you can do Helm, but Ansible operators have become extremely easy to get going with. It doesn't require additional tools on top of it. Just with the Operator SDK, you know, you're going to use playbooks, which you're used to already, and you're going to use playbooks to execute your application workflows. So, we feel that developers are really going to use Ansible operators as a way to create their own operators, get them out there, and this is true for any Kubernetes world.
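To ground the Ansible operator point just made: with the Operator SDK's Ansible support, the mapping from a custom resource to a playbook is itself a small piece of YAML, typically a watches.yaml file. The group, kind, and paths below are illustrative assumptions, not values from a specific product.

    # Sketch of a watches.yaml for an Ansible-based operator; names are examples.
    - version: v1alpha1
      group: apps.example.com
      kind: MyApp
      playbook: playbooks/myapp.yml   # the playbook the operator runs to reconcile MyApp
      reconcilePeriod: 1m             # re-run the playbook periodically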
So, there's nothing different about, you know, an Ansible operator versus any other operator. >> With no changes to Kubernetes, but Kubernetes obviously has the concept of microservices, which is literally non-user intervention. The apps take care of all provisioning of services. This is an automation requirement, this feeds into the automation theme, right? >> Exactly, and what this does for you is it helps you, like if you look at the operator framework, it goes all the way from basic deployment, which everybody's used to, like okay, I want instantaneous deployment, it automatically just does it, automatically recognizes changes that I give you in reconfiguration and goes and redeploys a new instance the way it should. So, how do I automate that? Like how do I ensure that my operator that is actually running my application can set up its own private environment in Kubernetes, and then it can actually do it automatically when I say, okay, now go make one change to it. The Ansible operator allows you to do that, and it goes all the way into the lifecycle, the full five phases of the lifecycle that we have in the operator framework, where the last one is about autopilot. So, autoscale, auto-remediate itself. Your application now on Kubernetes, through Ansible, can do all that and you don't have to worry about coding at all. It's all provided to you because of the Ansible operator. >> Parag, in the demo this morning, I think the audience really, it resonated with the audience, it talked about some of the roles and how they worked together, and it was kind of, okay, the developers on this side, and the developers' expectation is, oh, the infrastructure's not going to be ready, I'm not going to have what I need. Leave me alone, I'm going to play my video games until I can actually do my work and then okay, I'll get it done and do my magic. Speak a little bit to how Ansible is helping to break through those silos and having developers be able to fully collaborate and communicate with all their other team members, not just be off on their own. >> Oh yeah, that's a good point, you know. And what is happening is, what Ansible is bringing to the table is giving you a very prescriptive set of rules that you can actually incorporate into your developer flows. So, what developers are now saying is that I can't create an infrastructure configuration without actually having discussions with the infrastructure folks, and the network team will have to share with me what is the ideal configuration I should be using. So, the empowerment that Ansible brings to the table has enabled cross-team communications to happen. So, there is a prescriptive way of doing things and you can create this all into an automation and then just set it up so that it gets triggered every time a developer makes a change to it. So, internally they do that. Now other teams come and say, hey, how are you doing this? Right, 'cause they need the same thing. Maybe your destinations are going to be different obviously, but in the end the mechanism is the same, because you are under the same enterprise, right? So, you're going to have the same layer of network tools, same infrastructure tools. So, then teams start talking to each other. I was talking to a customer and they were telling me that they started with four teams working independently, building their own Ansible playbooks and then talking to the admins, and the next thing they know everybody had the full automation done and nobody knew about it.
And now they're finding out, and they were saying, wow, I got like hundreds of these teams doing this. So, A, I'm very happy, but B, now I would like these teams to talk to each other more and come up with a standard way of doing it. And going back to that collections concept, that's what's really going to help them. And we feel that with collections it's very similar to what we did with OperatorHub for OpenShift. It's where we have a certified set of collections, so that they're supported by Red Hat. We have partners who contribute theirs and then they're supported by them, but we become a single source. So, as an enterprise you kind of have this way of saying, okay, now I can feel confident about what I'm going to let you deploy in my environment, and everybody's going to follow the same script, and so now I can open up the floodgates in my entire organization and go for it. >> Yeah, what about how are people in the community getting to learn from everyone else? When you talk about a platform it should be, if I do something, not only can my organization learn from it, but potentially others can learn from it. That's kind of the value proposition of SaaS. >> Yes, yes, and having the Galaxy offering out there, where we see so many users contributing, like we have close to a hundred thousand roles out there now, and that really brought the Ansible community together. It was already a strong community of contributors and everything. By giving them a platform where they can have these discussions, where they can see what everybody else is doing, you will now see a lot more happening. Like today, I think Ansible is like one of the top five GitHub projects in terms of the progress that's happening out there. I mean the community is so widespread, it's incredible. Like they're driving this change and it's a community made up of developers, a lot of them. And that's what's creating this amazing synergy between all the different organizations. So, we feel that Ansible is actually bringing a lot of us together. Especially as more and more automation becomes prevalent in the organizations. >> Alright, Parag, want to give you a final word, Ansible Fest 2019, final takeaways. >> No, this is great, this is my first one and I'd never been to one before, and just the energy, and just seeing what all the other partners are also sharing, it's incredible. And like I said, with my background in automation, I love this, anything automation for me, I think that's just the way to go. >> John: Alright, well that's it. >> Stu: Thank you so much for sharing the developer angle with us >> Thank you very much. >> For John Furrier, I'm Stu Miniman. Back to wrap-up from theCUBE's coverage of Ansible Fest 2019. Thanks for watching theCUBE. (intense music)
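To make the collections and Galaxy discussion concrete, teams typically pin the content they have agreed to standardize on in a requirements file and install it with the ansible-galaxy tooling. The collection and role names below are examples chosen for the sketch, not recommendations from the interview.

    # Illustrative requirements.yml; the names and versions are examples only.
    collections:
      - name: ansible.posix
      - name: community.general
        version: ">=1.0.0"
    roles:
      - name: geerlingguy.nginx       # one agreed-upon role instead of many ad hoc picks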

Published Date : Sep 25 2019



Jason Edelman, Network to Code | Cisco Live EU 2019


 

>> Live, from Barcelona Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to theCUBE, here at Cisco Live! 2019 in Barcelona, Spain, I'm Stu Miniman, happy to welcome to the program a first-time guest, but someone I've known for many years, Jason Edelman, who is the founder of Network to Code. Jason, great to see you, and thanks for joining us. >> Thank you for having me, Stu. >> Alright, Jason, first, for our audience, since this is your first time on the program, give us a little bit about your background, and what led to you being the founder of Network to Code. >> Right, so my background is that of a traditional network engineer. I've spent 10+ years managing networks, deploying networks, and really, acting in a pre-sales capacity, supporting Cisco infrastructure. And it was probably around 2012 or '13, working for a large Cisco VAR, that we had access to something called Cisco onePK, and we kind of dove into that as the first SDK to control network devices. We have today iPhone SDKs, SDKs for Android, to program phone apps; this was one of the first SDKs to program against a router and a switch. And that, for me, was just eye-opening, this is kind of back in 2013 or so, to see what could be done to write code in Python, C, Java, against network devices. Now, when this was going on, I didn't know how to code, so I kind of used that as the entrance to ramp up, but that was, for me, the pivot point. And then, in the same six-week period, I had a demo of Puppet and Ansible automating network devices, and so that was the pivot point where it was like, wow, realizing I've spent a career architecting and designing networks, and realizing there's a challenge in operating networks day to day.
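For readers newer to this space, the kind of Ansible demo Jason mentions typically looks something like the playbook sketched below: a few YAML tasks pushing configuration to Cisco IOS devices over the CLI. The inventory group and VLAN values are illustrative assumptions.

    # A minimal sketch of a network automation playbook; values are examples.
    ---
    - name: Push a VLAN to the access switches
      hosts: access_switches
      gather_facts: false
      connection: ansible.netcommon.network_cli
      tasks:
        - name: Ensure VLAN 110 exists with a name
          cisco.ios.ios_config:
            parents:
              - vlan 110
            lines:
              - name USERS

        - name: Collect the VLAN table for verification
          cisco.ios.ios_command:
            commands:
              - show vlan brief
          register: vlan_output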
I do believe in the shift that's going to happen in the next couple years, and that was where I kind of just jumped in feet first, and now we are where we are. >> Yeah, Jason, some great points there. I know for myself, I look at, Cisco's gone through so much change. A year ago, up on stage, Cisco's talking about their future is as a software company. You might not even think of us as networking first, you will talk to us about software first. So that initial shift that you saw back in 2010, it's happening. It's a different form than we might have thought originally, and it's not necessarily a product, but we're going through that shift. And I like what you said about how not everybody needs to code, but it's this change in paradigms and what we need to do are different. You've got some connections, we're here in the DevNet Zone. I saw, at the US show in Orlando last year, Network to Code had a small booth, there were a whole bunch of startups in that space. Tell us how you got involved into DevNet, really since the earliest days. >> Yes, since the early days, it was really pre-DevNet. So the emergence of DevNet, I've seen it grow into, the last couple years, Cisco Live! And for us, given what we do at Network to Code, as a network-automation-focused company, we see DevNet in use by our clients, by DevNet solutions and products, things like, mentioned yesterday on a panel, but DevNet has always-on sandboxes, too. One of the biggest barriers we've seen with our clients is getting access to the right lab gear to get started with automation. So DevNet has these sandboxes always on to hit a Nexus API or Catalyst API, right? Things like that. And there's really a very good, structured learning path to get started through DevNet, which is usually where we intersect in our client engagements, so it's kind of like post-DevNet, you're kind of really showing what's possible, and then we'll kind of get in and craft some solutions for our clients. >> Yeah, take us inside some of your clients, if you can. Are most of them hitting the API instead of the CLI now when they're engaging? >> Yeah, it's actually a good question. Not usually talked about, but the reality is, APIs are still very new. And so we actively test a lot of the newer APIs from Cisco, as an example. IOS XE has some of the best APIs that exist around RESTCONF, NETCONF, modeled from the same YANG models, and great APIs. But the truth is that a lot of our clients, large enterprises that have been around for 20+ years, the install base is still largely not API-enabled. So a lot of the automation that we do is definitely SSH-based. And when you look at what's possible with platforms, whether it is something custom in Python, or even Ansible off the shelf, a lot of the integrations are hidden from the user, so as long as we're able to accomplish the goal, that's the most important thing right now. And our clients' leadership sometimes cares, and it's true, right? You want the outcome. And initially, it's okay if we're not using the API, but once we do flip that switch, it does provide a bit more structure and safety for automating. But the install base is so large right now that, to automate, you have to use SSH, and we don't believe in waiting 'til every device is API-enabled because it'll just take a while to turn that base. >> Alright, Jason, a major focus of the conference this year has been around multi-cloud. How's that impacting your business and your customers? >> So, it's in our path as a company.
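Picking up the RESTCONF point from the answer above, querying a model-driven API instead of scraping SSH output can be as small as the sketch below, which reads the standard ietf-interfaces YANG model over RESTCONF. Host variables and credentials are placeholders, and certificate validation is disabled only because this is a lab-style sketch.

    # Illustrative RESTCONF query from Ansible; hosts and credentials are placeholders.
    ---
    - name: Read interface data over RESTCONF
      hosts: ios_xe_routers
      gather_facts: false
      connection: local
      tasks:
        - name: GET the ietf-interfaces container
          ansible.builtin.uri:
            url: "https://{{ ansible_host }}/restconf/data/ietf-interfaces:interfaces"
            method: GET
            user: "{{ restconf_user }}"
            password: "{{ restconf_password }}"
            force_basic_auth: true
            headers:
              Accept: application/yang-data+json
            validate_certs: false      # lab sketch only; keep validation on in production
          register: interfaces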
Right now, there's a lot of focus around multi-cloud and data center, and the truth is, we're doing a lot of automation in the Campus networking space. Right, automating networks that get deployed in wiring closets, and firewalls and load balancers and things like that. So from our standpoint, as we start planning with our clients, we see the services that we offer really port over to multi-cloud, and making sure that whatever automation is being deployed, regardless of toolset, and looking at a tool chain to deploy, if it's a CI/CD pipeline for networking, being able to do that whether you're managing a network in the Campus, a data center network, or a multi-cloud network, to make sure we have a uniform look and feel to operations. >> Alright, so Jason, you're not only founder of your company, you're also an author. Maybe tell us about the, I believe it's an update, or is it a new book, that recently got out. >> Yes, I'm a co-author of a book with Matt Oswalt and Scott Lowe, and it's an O'Reilly book that was published last year. And look, I'm a believer in education, and to really make a change and change an industry, we have to educate, and I think the book, the goal was to play a small part in really bringing concepts to light. As a network engineer by trade, there are fundamental concepts that network engineers should be aware of, and it could be basics, and a lot of these, it could be Python or Jinja templating, YAML, Git and Linux, for that matter. It's just kind of providing that baseline of skills as an entrance into automation. And once you have the baseline, it kind of really uncovers what's possible. So writing the book was great. Great opportunity, and thank you to Matt and Scott for getting involved there. It really took a lot of work and effort, and I collaborated with them on it. >> Want to get your perception on the show, also. Education, always a key feature of what happens at the show. Not far from us is the Cisco bookshop. I see people getting a lot of the big Cisco books, but I think ten years ago, it was like, everybody, get my CCIE, all my different certifications updated, here. Here in the DevNet Zone, a lot of people, they're building stuff, they're building new pieces, they're playing in the labs, and they're working in some of these environments. What's your experience here at the show? Anything in particular that catches your eye? >> So, I do believe in education. I think to do anything well, you have to be educated on it. And I've read Cisco Press books over the years, probably a dozen of them, for the CCIE and beyond. I think when we look at what's in DevNet, when we look at what's in the bookstore, people have to immerse themselves in the technology, and reading books, like the learning labs that are here in the DevNet Zone, the design sessions that are right behind us. Just amazing for me to have seen the DevNet Zone grow to be what it is today. And really the goal of educating the market on what's possible. See, even from the start, Network to Code, we started out doing a lot of training, because you really can't change the methodology of network operations without being aware of what's possible, and it really does kind of come back to training. Whatever it is, on-demand, streaming, instructor-led, reading a book. Just glad to see this happen here, and a lot more to do around the industry, in the space around community involvement and development, but training, a huge part of it.
>> Alright, Jason, want to give you the final word, love the story of a network engineer gone entrepreneurial, out of your comfort zone, coding, helping to build a business. So tell us what you see, going forward. >> So, we've grown quite a bit in the past couple years. Right now, we're over 20 engineers strong, and starting from essentially just one a couple years ago, it was a huge transformation, seeing this happen. I believe in bringing on A-players to help make that happen. I think for us as a business, we're continuing to grow and accelerating what we do in this network automation space, but I just think, one thought to throw out there is, oftentimes we talk about lower-level tools, Python, Git, YAML, a lot of new acronyms and buzzwords for network engineers, but also, the flip side is true, too. As our client base evolves, and a lot of them are in the Fortune 100, so large clients, looking at consumption models of technology is super-important, meaning are there ITSM tools deployed today, like a ServiceNow, or Webex Teams, or Slack for chat integration. To really think through early on how the internal customers of automation will consume automation, 'cause it really does us no good, Cisco, vendors, or clients no good, if we deploy a great network automation platform and no one uses it, because it doesn't fit the culture or the brand of the organization. So it's just, as we continue to grow, that's really what's top of mind for us right now. >> Alright, well Jason, congratulations on everything that you've done so far, wish you the best of luck going forward, and thank you so much, of course, for watching. We'll have more coverage, three days, wall-to-wall, here at Cisco Live! 2019 in Barcelona. I'm Stu Miniman, and thanks for watching theCUBE. (electronic music)

Published Date : Jan 30 2019



Roman Alekseenkov, Aptomi | OpenStack Summit 2018


 

>> Announcer: Live from Vancouver Canada, it's theCUBE covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation and its ecosystem partners. >> Welcome back to theCUBE's coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my co-host for the week, John Troyer. And helping us to bring it on home we have Roman Alekseenkov, who's the co-founder of Aptomi. A brand new startup, and I feel we've got the exclusive here; you know, there are some blog posts out there and the like, but help us introduce you to our community and some of the broader world. Thanks for joining us. >> Yep, my first time on theCUBE. >> Alright so Roman, give us a little bit about your background and, you know, we need, with any founder, the why of your company. >> Okay so I guess let's start with the background. So I used to work for one of the cloud infrastructure startups called Mirantis. And I worked there for a very long time. And last year I decided to start something on my own. Right, so now I am one of the main guys and one of the core contributors to the project called Aptomi. And, I don't know if it's relevant, but before Mirantis, I've been doing a lot of programming competitions like Google Code Jam, ACM ICPC and TopCoder. My team ended up winning the ACM ICPC world finals. So I have like a decent background in algorithms, computer science, data structures, and things like that. >> Yeah. >> So that's me. >> We always see it, people are always humble there. It's, we know Mike Dvorkin is on your team. >> He is. >> People in the networking world, you know, might have run across Mike, and so super smart people. Give us the, you know, the problem statement that your company's looking to solve. >> Right, so... I think it's going to be not a one-sentence answer. It's going to be a slightly longer answer. So when we talked to a number of companies who are using Kubernetes and who are building apps on top of Kubernetes, we looked into the CI space and the CD space. And we looked at the CI, and in the CI, for the most part, most of the problems seem to be solved, right. Everything that starts from your source code and then the Dockerfile, how you build your artifacts, how you test it, and how you publish the binary to the repo, all that part seems to be streamlined. You take Jenkins, you take Docker, you take all the tools. You write some Kubernetes YAML, so this part, packaging components, it's not a big deal. And what we saw is where all the people are struggling is actually in the CD space, right. Once you start putting multi-container complex applications together out of those pieces, once you start wiring those pieces together, maybe microservices, maybe not, but once you start wiring things together, once you start running them across multiple environments, multiple clusters, right, that's where things become really, really difficult for people who just rely on the tool set that we have today. Right, and that's where we saw an opportunity to build this service abstraction which allows people to wire things together and run them and operate them in a controllable way across multiple clusters and multiple environments, integrated obviously with the continuous delivery pipelines. >> So if people weren't using Aptomi, what would they be using now? Or what kinds of tools and processes are they bringing together if they're not doing this? Are they doing everything by hand, or how do you compare it to some of the other tools?
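As a quick aside before the answer: the hand-assembled Kubernetes YAML being discussed is usually a Deployment plus the Service that exposes it, something like the minimal sketch below, multiplied across every component and environment. The names and image are examples, not anything from the interview.

    # Minimal sketch of the hand-written wiring for a single component.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-api
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: orders-api
      template:
        metadata:
          labels:
            app: orders-api
        spec:
          containers:
          - name: orders-api
            image: registry.example.com/orders-api:1.0.0   # illustrative image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: orders-api
    spec:
      selector:
        app: orders-api
      ports:
      - port: 80
        targetPort: 8080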
Right, so a lot of people, they use some homegrown frameworks right now on top of Kubernetes and Helm. Or maybe on top of Kubernetes and YAML files. Or maybe Kubernetes and JSON, that's also one of the ways to do this. But there are some drawbacks in those approaches, right? Because we think that you want to start reasoning about those as actually applications and services, not as like a bunch of YAMLs and containers, right? And so once you start talking about this as services, as well as rules around those services, right, maybe I want to say, like, hey, everything that goes into my production environment should be secure, or I want all my services with label "X" deployed to the dev environment or to cluster US east, right? I mean, things become easier for you, 'cause you don't have to deal with the YAML file. >> Kind of from the abstraction layer up, maybe, say, in other parts of IT you might say it's policy driven almost, it's declarative, intent driven; I want this to happen rather than writing this kind of crazy YAML. Actually one of the Kubernetes founders, I dunno, recently on Twitter or somewhere I was reading, was saying that YAML was never supposed to be written by humans, that was kind of a mistake, we meant for it to be under the covers, but here we are. >> Roman: Right, but you are exactly right. It's services as well as intent around the services. >> Stu: Roman, I want to get your thoughts on just the Kubernetes ecosystem itself, you know, for years here at OpenStack it was "Oh wait there's a lot of different distributions", you know, moving between one or the other wasn't necessarily easy. Kubernetes seems like we're a little bit better, a little further along, might've learned from some of the issues that we've had here. There's, last I saw it was getting around 40 different options, but you know the thing I also wonder about is Kubernetes tends to get baked into platforms, so you've got people that will build their own, just take the code, but you know Red Hat has a platform, all the public clouds have a platform, then there's a number of startups there. What's that like from your standpoint, kind of being in this ecosystem, and maybe give us a little comparison compared to what it would have been like in the OpenStack world? >> Roman: Sounds good, so for us, we actually don't really care what Kubernetes we run on, because we help people to deliver apps and services on top. But if you talk about Kubernetes itself, over the last year we haven't seen a lot of issues with Kubernetes, right, because we run a cluster in our lab, it just works. GKE doesn't let me down, we also run things on Azure, so speaking about the Kubernetes infrastructure, I think the state of Kubernetes right now, it's pretty reliable. So we don't see a lot of issues with that. But you also mentioned the platform, right, so Kubernetes is part of the platform and that's the interesting part, because a couple of years ago everyone was talking about PaaS. It's PaaS, PaaS, PaaS, PaaS everywhere. Now you don't see a lot of conversations about PaaS, because PaaS as like a monolithic platform doesn't exist anymore, because it basically gets decomposed into what people call, I guess, containers as a service and the modular tool set. And container orchestration is one part, and there are like 15 or 16 different parts, from app definition, to orchestration, and CD pipelines and security components, right? And that's why you see so many products out there with overlapping functionality.
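To illustrate the "services plus rules" idea described earlier in this answer, the sketch below expresses that intent in YAML. This is not actual Aptomi syntax; it is an invented example of what separating service definitions from placement and policy rules might look like.

    # Invented pseudo-definition; NOT the real Aptomi format.
    service:
      name: analytics
      labels:
        tier: backend
      components:
        - name: api
          chart: analytics-api        # deployed through a Helm chart
        - name: db
          chart: postgresql

    rules:
      - description: backend services go to the dev cluster in us-east
        match:
          labels:
            tier: backend
        apply:
          cluster: us-east-dev
      - description: anything bound for production must be marked secure
        match:
          environment: production
        apply:
          require:
            secure: "true"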
>> I mean, do you think that the concept of PaaS is going away at this point? Will we continue to redefine what a PaaS is? I think every few years maybe that's the pattern. >> My personal opinion is that the concept of PaaS is gone. There is no more PaaS. The future is the modular stack and the modular tool set. >> Stu: Yeah, so absolutely the future is becoming more distributed. I'm curious your thoughts then on something like serverless, which tends to change that even a little bit more than what we've been looking at. >> Roman: Sure, well serverless is, I guess it's not for everyone. It also depends on the type of workload that you run. If you want to run something compute intensive, I guess it's still going to be containers or even VMs, but likely containers. But if you have some stateless front-end or API, something that you sometimes make a call to and have to do something and get a response back, sure, serverless is great, and serverless actually fits quite well into what Mike and I are trying to do with Aptomi. >> John: Roman, I also wanted to ask about dependency mapping and visualizing dependencies. Hybrid cloud has been a big theme this week. It's actually a big theme in enterprise and elsewhere. When that happens, when you have separate components, whether they are monolithic components that are talking to each other down to microservices, dependencies are huge there, the application-level dependencies, especially as you move to hybrid cloud, because you might be moving some component away from the rest and you better know what's talking to the other components. Any thoughts on how that is developing as architecture, application architectures, and what you guys are doing to help there? >> Roman: Yeah, so there are basically two ways you can approach this. One way is the traditional way where you just open up your Kubernetes to a bunch of developers and people just run their things in different namespaces. If you use that approach, I think those dependencies between different components, what relies on what, who's talking to whom, they become non-obvious; it's really hard to discover them once you've got things deployed. So we are taking a slightly different approach, because we require a little bit more information upfront about dependencies between components, so once you deploy things through Aptomi we kind of already know what exists on the clusters and why, and who owns the resources, and who asked for certain services to be deployed. So we do provide some contextual visibility into that. And what's really nice is we're trying to build this, or we are building this, on top of the community standards; we are not reinventing the whole platform, or trying to invent a new language, it's basically built on top of Kubernetes and Helm. It's just a simple declarative service-based abstraction and the rules. >> Stu: Last thing I wanted to ask, Aptomi itself, you know, what's the state of the project? Is it a 1.0, are you looking for contributors, where are you with customers, help round off the understanding of the company and project. >> Sounds good, so we are one year into the project. The project is completely open source, it's on GitHub. It has 4 contributors right now and close to 2,000 commits, maybe a little bit more, and 100+ stars on GitHub, so we're getting some traction in the open source. Speaking about the readiness, I think we're not 1.0 yet, but we're getting close to 1.0.
And the core of it, the whole project, is completely open source, right, it's 100% Apache 2.0, but what we also do is offer a hosted version with support. Right, so when people come, they can just get the complete CD system with the service-based layer and abstraction through our hosted version with support, and that's what we are charging money for, and revenue-wise we do have paying customers, but it's only a year in, so. Not a big amount, but there's going to be more. >> Stu: Alright, well, Roman Alekseenkov, really appreciate you sharing with us. Congratulations on the progress so far. For John Troyer, I'm Stu Miniman, we thank you for joining us for 3 days of live wall-to-wall coverage. A big final shout-out to the OpenStack Foundation and the sponsors of theCUBE, and to the whole crew here. Thank you for watching theCUBE. >> (electro-dance music) >> (soft piano) >> Astronaut: I recommend you activate my bit-ray over.

Published Date : May 24 2018

SUMMARY :

Stu Miniman and John Troyer close out theCUBE's OpenStack Summit 2018 coverage with Roman Alekseenkov, co-founder of Aptomi. Roman explains that while CI on Kubernetes is largely solved, continuous delivery of multi-container applications across multiple clusters and environments is not, and that Aptomi addresses it with a declarative, rule-driven service abstraction built on top of Kubernetes and Helm. The open source project is approaching 1.0, and the company offers a hosted, supported version.


Nigel Poulton, The Kubernetes Book | KubeCon 2017


 

>> Narrator: Live from Austin, Texas. It's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Hello everyone. Welcome back to theCUBE's exclusive coverage, here live in Austin, Texas for KubeCon and CloudNativeCon. I'm John Furrier, the co-founder of SiliconANGLE Media, with my co-host Stu Miniman. Next is Nigel Poulton, who's the author of The Kubernetes Book, also container guru, trainer, been in the business for a long time in the community. Great to have you on for our intro. >> Thank you >> Stu, keynote, let's get down to it. What were the big highlights? >> Yeah, well, first of all John, we've officially entered KubeCon days here. So CloudNativeCon was yesterday. We've got two more days of KubeCon. Kelsey Hightower, you know, we had him on theCUBE yesterday. Phenomenal speaker, everybody's looking forward to him. Lines to talk to him. Made sure that there was a standing ovation before and after his keynote. Very demo heavy. I mean, you know, this group loves it. There were a lot of, you know, great pithy lines. Arguments over, you know, which is the best language, which is the best way to do things? Knocking on things like YAML. So, it was definitely a fun, geeky discussion. I'm a big Game of Thrones fan. So I loved to see season seven delivered on Kubernetes. >> What was the summary of the keynote? What was the take? >> So I think from my perspective, the summary was Kubernetes is boring. Which translates to us, generally, as in it's maturing. It's something that you might want to be able to trust in your production environment, if you're an enterprise. I mean, look, as a technology guy we always think we like to know the details, the weeds. And we like to play with YAML and stuff like that. But at the end of the day, the business doesn't, and developers tend not to want to. They want a smooth pipeline. And that's boring, and so boring is good. >> Yeah, and I do want to poke at it a little bit, Nigel, I definitely want your opinion on this, because there are certain technologies we say, "Oh right, it's reached that boring phase", which means it's kind of steady state. Kubernetes is now, like, 1.9. Coming into the show it was like, how complex it is. Oh my God, there's all these things above and below. Yin gave a really nice keynote showing kind of a layer cake there. >> Yeah. >> I think maybe the Kubernetes layer might be, it's stable enough and used, and people can use it. But this ecosystem, by no means is it boring. >> No >> And there's lots of things to make out. What are you seeing? >> Totally, and it's that definition of boring, really. So I would say boring would translate into usable. But you're right, in no way is it boring in any sense. In fact, it's exciting and it's dangerous as well. >> Yeah, and ... >> So I'll give you an example, right. So Kubernetes is massively successful. I think we all grok that at the moment, okay. But it's almost potentially going to be a victim of its own success. I was at one of the many summits that were going on before KubeCon and CloudNativeCon started, and it was about networking, and there were a bunch of guys here from big carriers, and they really want to take this simple networking model that Kubernetes currently has and make it fit their needs, which would make it really complex, dare I say, almost OpenStack Neutron.
(laughing) And I think there's so many people here at this conference right now that want to take Kubernetes and use it for their own purposes. And as successful as it is, and as much uptake as it's got, there is a potential danger there, I think, that it explodes out of control, and I don't want to knock OpenStack, but becomes difficult and not what we want it to be, and that's dangerous for them. >> Nigel, you bring up a great point here, because something we've been looking at is every time we abstract or make this new design model, it's "Oh well". We want to make sure the developer doesn't have to worry about that infrastructure. Clayton from Red Hat, we had him on theCUBE, and he talked about it in the keynote, boring means when I write my code I don't have to think about the infrastructure, but networking and storage. Networking, some of the basic pieces are done, but there's a lot of activity in that space, and storage, we're still arguing over what Container Native Storage should be, what CloudNative storage should be. So it's still, to my definition, it's not boring. That's the direction, and I like it. Kind of what we talked about, invisible infrastructure. >> Yeah >> What do you see? You've got a heavy background on that side too. >> So I think I quite like this space that networking is at within Kubernetes. It's simple, and that works for me, right. Storage is certainly, it's still playing catch up there, and I think a lot of decisions still need to be made. The future, in my opinion, is still not clear there. But I think a lot of games have got to be played to say, now how far do we take networking, and how far do we take storage and things like that, so that it, in the one sense, doesn't balloon out of control, but on the other side you do want it to meet more use cases than just the very basic use cases. So, I mean, that plays back to my idea that, that danger aspect of Kubernetes; it seems to have won in the orchestration space at the moment, but I think the road ahead, there are still loads of potholes, and there's tight bends, and there's cliff edges and things that we still could fall off, and that's exciting. >> Nigel, your dangerous comment reminds me of some of the early days of VMware. >> Nigel: Right >> You know, people that would get in there, they'd do some really cool things, they'd write it up, share it with the community. And absolutely, it feels like that, almost even bigger. >> Yeah, like the top layer that interfaces with the developers and things like that, that's getting pretty stable. But underneath, I mean, that is a happening place underneath right now, and I imagine it's going to be a happening place for quite a few years. >> What about service meshes and also pluggable architectures? Because that seems to be the answer to the dangerous question. Oh don't worry about it, carriers and whatnot. You can just build pluggable architectures, no one's going to get hurt. >> Nigel: Yeah >> Not ready for prime time? What's your thoughts? >> So I think service mesh is almost certainly, in my opinion, the hot topic of the conference so far. I like this idea of it getting born and stuff, and that's good for the project. But if there's one takeaway, if it's something that you're not quite clued up on at the moment, go away and look into service mesh. I've got to do a lot of that myself, to be perfectly honest.
But this whole idea of running like sidecar containers and what have you, inside of the pods, alongside your application, to look at your ingress traffic, your incoming traffic, your outgoing traffic. It's all cool and it can add so much functionality and make it so much more usable to a lot of users. But at the same time there's not ... I don't know, right, look, I'm a little bit old fashioned. I remember the days of deploying agents on servers. And we would have server builds that had agent upon agent upon agent. And we have this backlash in the industry of like, you're not bringing your product in, vendor x, y or z, okay. If it deploys an agent, we're going fully agentless here. We're sick of managing all these different agents in our stack, and I wonder again, playing to the danger topic here, that like, are we going to end up having loads of these sidecar containers in our pods that are effectively the modern day agents that we then have to manage, and that consume resources? >> Explain the sidecar generation, it's important. Take a minute to explain the dynamic, because containerization has been around for a while, Google and everyone else knows that. >> Nigel: Yeah. >> But Docker really put it on the map. Now the commoditization of containers with Kubernetes. What's this sidecar thing about? >> Quick, take a minute to explain to the folks. >> Right, so in the Kubernetes world, I guess the atomic unit of deployment, the equivalent of a VM from the VMware space, would be the pod, which is effectively a container, right? But within that pod you run your application container. And I think for most people you run one container inside of that pod, it's your application, right? What we're starting to see now is, and Kubernetes has always had this ability to run multiple containers inside of a pod. Most people don't do it. And it seems that a lot of the external projects, and a lot of the third party vendors, are starting to pick up on this and say, "Alright, well let's run another container inside of that pod". It's not your actual application, and we call it a sidecar container. And it adds functionality and what have you, but it also potentially eats through resources, it makes your deployments maybe more complicated. I mean it's always a trade off, isn't it? >> Yeah >> You get additional functionality but it's never for free. >> Yeah it's overhead. Alright, talk about the customer guys. What we saw in keynote, we saw HBO on stage. How are customers using Kubernetes? Because I'm trying to put my finger on it. I love orchestration, I know what that does, and I understand the benefits, but how are actually people using it today? >> So I think it's a little bit like the whole container thing, right? The early adopters, the Netflixes and the HBOs and the people like that, that have got large engineering teams, that have a lot of developers on staff, they're really just comfortable going and taking these new technologies, and rolling them themselves, and they've got this appetite for danger, again, within their organization almost. They're risk-taking organizations, right. They're all over the containers and the Kubernetes. The more traditional enterprises I think are still kicking the tires. They're still throwing out the occasional new project within the organization and saying, "Let's test the waters with this new feature that we want to add to our main product", or "We've got something new, let's try containers and Kubernetes."
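Picking up on the sidecar pattern Nigel describes, a minimal pod manifest with one application container and one sidecar looks like the sketch below. The names, images and resource numbers are illustrative assumptions, not anything from the conversation; the point is simply that both containers are declared in the same pod spec and share its network namespace, which is also why the sidecar's resource cost is a real trade-off.

```yaml
# Minimal sketch of the sidecar pattern: one pod, the application
# container plus a second container that handles traffic alongside it.
# Image names and resource figures are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  labels:
    app: web
spec:
  containers:
    - name: app                      # the actual application
      image: example.com/web:1.0     # hypothetical image
      ports:
        - containerPort: 8080
    - name: proxy-sidecar            # the "modern-day agent" riding alongside
      image: envoyproxy/envoy:v1.24.0
      ports:
        - containerPort: 15001
      resources:
        requests:                    # the sidecar consumes resources too,
          cpu: 50m                   # which is the trade-off called out above
          memory: 64Mi
```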
They're certainly, at least the ones that I speak to, not at the phase where they're taking their legacy apps. >> HBO was using it for like traffic, identifying ingress, you mentioned that earlier, I mean basic stuff. Not a lot of heavy lifting, or is it? >> Well, I think with HBO, I mean, they ran season seven of Game of Thrones on Kubernetes. I mean, I'm sure there was some non-Kubernetes stuff in there as well, but it seemed like, from the presentation, pretty much, well, a lot of that stuff was running containers and Kubernetes, and let's be fair, when it comes to HBO, Game of Thrones is like their, it's their killer product at the end of the day, isn't it? And so they've taken a risk there with that. >> Yeah >> But again you know HBO, a rare... >> There are a lot of online viewers, by the way, on that too. >> Yeah. >> With HBO Go. >> Oh, an insane number! But I would say compared to a traditional enterprise they're a risk-taking organization. They live in the Cloud. They like living on the edge. They're willing to take risks with new technologies to push the product forward. >> Alright, so I want to get your guys' thoughts on a tweet I saw out there. "Think of Kubernetes as the kernel for modern distributed systems. It's not about zero ops, it's about ops power tools to unlock developer productivity." Craig McLuckie from Heptio mentioned that on stage. Really kind of rallying around Kubernetes. Thoughts on that quote? What does that mean? >> So I mean John, you know there were for a while people saying, "How do we deprecate ops, or even go to kind of NoOps?" Absolutely, many of the keynotes talked about who's deploying them and who's running them. We're not talking about eliminating ops. Even when I can have a voice assistant help roll things out, they're still absolutely a major piece of who needs to run this, but the right things to the right part of the organization. >> Yeah, I think instead of using the word kernel maybe use the word Linux, you know. Looking at Kubernetes as the Linux of the Cloud, and that's not my term, I've heard other people say it. But it's open source for a start, like Linux is, it's got a great thriving community of people contributing to it. You can fork it, you can do whatever you want with it, but if you're going to deploy a CloudNative application right now, then Kubernetes is that substrate. You've just got to look at what came out of re:Invent. So AWS is now offering a native Kubernetes hosted service, obviously Google does it, Azure does it with Microsoft. They're all picking up on this, realizing that people deploying CloudNative apps, they're going to be deploying it on Kubernetes. >> Thoughts about Red Hat. I just saw Gabe Monroy, the keynote, Stu. Red Hat's contribution to hardening Kubernetes cannot be overstated. OpenShift, and we had Brian Gracely on yesterday. I mean OpenShift, what a bet. Microsoft betting heavily on Kubernetes. Google obviously sees this as an opportunity. Multi-Cloud fantasies out there somewhere, but that's what customers are kind of asking for, not yet in tangible product, but this is interesting. You've got Red Hat, the king of the enterprise, open source. >> Nigel: Absolutely, yeah. >> No debate about that. Microsoft and Google, old guard with Microsoft and then new guard in Google. Really, if they don't throw a line at the main Cloud trend with Kubernetes, they could be left in the dust. So I see a lot of things at play. How is the Red Hat and the Kubernetes investment paying off?
How do you guys see that playing out? Good strategic move, headroom to it? What comments and color commentary on that? >> Well I think if you compare Red Hat to Microsoft, if you don't mind me doing that, Microsoft had a cash cow in Windows in the past, and I think it quickly realized that the cash cow was not going to live forever, and they invested heavily in Azure. Red Hat lives a lot, I guess, as well, off support contracts and things like that, Red Hat Enterprise Linux. How long of a tail that has, I'm not sure. So certainly they're doing at least, they're looking in the right direction at least, by investing heavily in Kubernetes. If they want to go in and be the enterprise's trusted Kubernetes partner, I think they've got a great story. They've contributed a ton to it. They're already in the door at most enterprises, and I think you couple those two things together if the enterprise is going to adopt Kubernetes at some point. I'm not saying they've got the best story, but they've got a pretty decent story. >> Alright, in the last minute I want to ask both you guys this question, because it's been kind of on my mind, I've been thinking about it. Maybe I'm overstretching here, but three day conference, one day to CloudNative, two days to Kubernetes, KubeCon. Why? More important? Growing community? CloudNative, I think, would probably be stronger sessions. Is it because there's more emphasis on the Kubernetes? >> Kubernetes is the core, Kubernetes is what started the CNCF. >> John: Yeah >> All the other projects really build off of it. I think it's pretty... >> It needs more attention. >> Kubernetes, I mean, while there's ... You know I love Kelsey's line this morning. He looked out at the audience, he says, "I think everyone that's running Kubernetes in the globe is here." So, there's jokes about how many people are actually running in production >> Yeah, they're probably here. >> So look, there's still so many people that are getting the Kubernetes 101. The whole CloudNative, all of these other projects are all building off of it. I think it's really straightforward there. We even heard, do we call it the CNCF? Do we rename it to something that's a little more Kubernetes focused? Because CloudNative gets talked about some, there's service mesh, absolutely Nigel, it was the buzz coming into the show. I hear those sessions are overflowing here. We didn't even get to talk about, there's like another alternative to Istio that's there.
Will Kubernetes live that long or will it be replaced by something else? It probably will be, but I do feel these are early days, and I think it has got a long stretch ahead. A long stretch as in like... >> John: Yeah. >> Good four or five years. And within two to three years, you know, just about every organization in my opinion is going to have some Kubernetes in it. >> And the beginning signs of maturity's coming. Stack Wars too, all the vendors really trying to figure out, strategically it's like a 3-D chess match right now. Open source is kind of like arbiter of this, really good stuff. I think it's going to be super important. Thanks for the commentary. kicking off day two of Cube exclusive coverage here at KubeCon. CloudNativeCon was yesterday. Two days of KubeCon. We'll be back with more live coverage. From theCUBE, I'm John Furrier. Stu Miniman and Nigel Poulten after this short break. (light techno music)

Published Date : Dec 7 2017

SUMMARY :

John Furrier and Stu Miniman open day two of KubeCon + CloudNativeCon 2017 in Austin with author Nigel Poulton. They unpack the "Kubernetes is boring" keynote theme as a sign of maturity, the still-turbulent layers underneath it (networking, storage, service mesh, sidecar containers), how early adopters like HBO and Netflix differ from mainstream enterprises still kicking the tires, and why Red Hat, Microsoft, Google and AWS are betting so heavily on Kubernetes.


Andrius Benokraitis, Red Hat - Red Hat Summit 2017


 

>> Red Hat OpenShift Container Platform >> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to theCUBE's coverage, I'm Rebecca Knight, your host, here with Stu Miniman. Our guest now is Andrius Benokraitis, he is the Principal Product Manager, Ansible Network Automation at Red Hat. Thanks so much, Andrius. >> Thanks for having me, I appreciate it. >> This is your first time on the program. >> Andrius: First time. >> We're nice, we don't bite. >> Really nervous, so, okay. >> Start a little bit with, you're new to the company, relatively, >> Andrius: Relatively. >> a networking guy by background, can you give us a little bit about your background. >> Sure, I mean, I actually started at Red Hat in 2003. And then did about four or five jobs there for about 11 years. And then jumped, went to a startup named Cumulus Networks for about two years. Great crew, and then, now I'm at Ansible, been there since about December, so working on the network automation use case for Ansible. >> Alright, so networking has a little bit of coverage here. I remember, you know, something like the OpenDaylight stuff, and actually there are a couple of Red Hatters that I interviewed at one show who ended up forming a company that got bought by Docker, so you know, there are definitely networking people, but maybe give us a broad view of where networking fits into this stuff that you're working on specifically. >> Yeah, sure thing. I think it's interesting to point out that as everything started on the compute side, and everything started to get disaggregated, the networking side has come along for the ride, per se. It's been a little bit behind. When we talk about networking, a lot of people just think automatically that's SDN. And we're actually trying to think a little bit lower level, so layer one, layer two, layer three, so switching, routing, firewalls, load balancers, all those things are still required in the data center. And when people started using Ansible, it started five years ago on the compute side, a lot of the people started saying, I need to run the whole rack, and I'm not a CCIE, and I don't really know what to do there, but I've been thrown in to do something, I'm a cloud admin, the new title, right. I have to run the network, so what do I do. I don't know anything about networking, I'm just trying to be good enough, well, I know Ansible, so why don't I just treat switches like servers, and just treat them like, like what I know, they just have a lot more interfaces, but just treat it that way. So a lot of the expertise came from the ground up with the open source model and said this is the new use case. >> Well, JR Rivers, the founder of Cumulus, it's like networking will just be a Linux operating model, you know, extended to the network, which is always like, hey, sounds like a company like Red Hat should be doing that kind of stuff. >> Exactly, it's interesting to see a Bash prompt in the networking, right, it's familiar to a lot of people in the DevOps space, absolutely. >> So it's a very rapidly changing time, as we know, in this digital computing age. The theme of this conference is the power of the individual, celebrating that individual, the developer, empowering the developers to take risks, be able to fail, make changes, modify. You're not a developer, but you manage developers, you lead developers, how do you work on creating that context that Jim Whitehurst talked about today?
>> I think it starts with, the true empowerment, you have the majority of the networking platforms are still proprietary and walled off, walled-off gardens, they're black boxes, you can't really do much with them, but you still have the ability to SSH into them, you have familiar terms and concepts from the server side on the networking side. So as long as you have SSH into the box and you know your CLI commands to make changes, you can utilize that as part of Ansible to generate larger abstractions, to use the playbooks in order to build out your data center, with the terms and the lexicon of YAML, the language of Ansible, things that you already know, and utilizing that and going further. >> Can you speak to us a little bit about customers, you know, what's holding them back, how are you guys moving them forward to the more agile development space? >> Our customers are mostly brownfield, they're trying to extend what they already have. They have all their gear, they have everything they have that they need, but they're trying to do things better. >> I don't find greenfield customers when it comes to the network side of the house, I mean we've all got what we have, and we know that IT's always additive, so, I mean that's got to be a challenge. >> It's a huge challenge. >> Something you can help with, right? >> It's a huge challenge, and I think from the network operators and network engineers, a lot of them are saying, again, they're looking at their friends on the compute side, and they can spin up VMs and provision hardware instantaneously, but why does it have to take four to six weeks to provision a VLAN or get a VLAN added to a network switch? That sounds ridiculous, so a lot of the network engineers and operators are saying, well, I think I can be as agile as you, so we can actually work together, using a common framework, a common language with Ansible, and we can get things done, and all of this stuff I hate doing, we don't have to do that anymore, we can worry about more important things in our network, like designing the next big thing, if you want to do BGP, design your BGP infrastructure, you want to move from a layer two to a layer three or an SDN solution. >> I love that you talk about everybody, kind of the software wave and breaking down silos, network and storage people are like, oh my God, you're taking my job away. >> Exactly, completely, no, we're not taking your job. We are augmenting what you already have. We're giving you more tools in your tool belt to do better at your job, and that's truly it, we don't have to, people can be smarter, so, if you want to add a VLAN, that can be a code snippet created by the sysadmin, it can be in Git, and then the network engineer can say, oh yeah, that looks good, and then I just say, submit. What we see today with some of the customers is, yeah, I want to automate, I really want to automate, and you say, great, let's automate. But then you start getting, you peel back the onion, and you start seeing that, well, how are you managing your inventory, how are you managing your endpoints. And they're like, I have a spreadsheet? And you're like, as a networking guy, I guess you, (excited clamoring) >> Networking is scary for a lot, >> It's super scary, yeah. >> So how do you break that down? >> You do what you can, you do it in small pieces, we're not trying to change the world, we're not trying to say you're going to go 100% DevOps in the network.
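As an aside, the "add a VLAN as a code snippet in Git" example that comes up here could be as small as the playbook below. It is a hedged sketch, not anything from the interview: the inventory group, VLAN number and name are placeholders, and it assumes Cisco IOS switches reachable over SSH so that Ansible's ios_config module applies.

```yaml
# A small, low-risk change of the kind discussed above: ensure a VLAN
# exists on a group of Cisco IOS access switches. Group name, VLAN id
# and description are placeholders.
---
- name: Add VLAN 120 to access switches
  hosts: access_switches          # assumed inventory group
  gather_facts: no
  connection: network_cli

  tasks:
    - name: Ensure VLAN 120 exists with a description
      ios_config:
        parents: vlan 120
        lines:
          - name user-lan-120
```

A snippet like this can live in Git next to the inventory, get reviewed by the network engineer, and be run (or dry-run with --check) instead of someone pasting commands from a spreadsheet.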
Start small, start with something, like again, you really hate doing, if you want to change, something really low risk, things you really hate doing, just start small, low-risk things. And then you can propagate that, and as you start getting confidence, and you start getting the knowledge, and the teams, and everyone starts, everyone has to be bought in, by the way. This is not something you just go in and say, go do it. You have to have everyone on board, the entire organization, it can't be bottom up, it can't be top down, everyone has to be on board. >> And Andrius, when I talk to people in the networking space, risk is the number one thing they're worried about. They buy on risk, they build on risk, and the problem we have with the networks, there are too many things that are manual. So if I'm typing in some, you know, 16-digit hexadecimal code >> From notepad, manually you're copying and pasting >> from like a spreadsheet. Copying and pasting, or gosh, so things like that, the room for error is too high. So there are things that we need to be able to automate, so that we don't have somebody that's tired or just, wait, was that a one or an L or an I. I don't know, so we understand that it actually should be able to reduce risk, increase security, all the things that the business is telling you. >> All these network vendors have virtual instances. You can do all your testing and deployment, all your testing and your infrastructure, and you can do everything in Jenkins and have all your networking switches, virtually, you can have your whole data center in a virtual environment if you want. So if you talk about lower risk, instead of just copying and pasting, and oh, was that a slash 24 or a slash 16, oops, I mean that looked right, but it was wrong, but did it go through test, it probably didn't. And then someone's going to get paged at three in the morning, and a router's down, an edge router's down, and you're toast. So enabling the full DevOps cycle of continuous integration. So bringing in the same concepts that you have on the compute side, testing, changes, in a full cycle, and then doing that. >> You talked about the importance of buy-in and also the difficulties of getting buy-in. How much of that is an impediment to the innovation process, because one of the things we've been talking about is, can big companies innovate? What are the challenges that you see, and how do you overcome them? >> That is the number one, that is the biggest issue right now in the network space, is getting buy-in. Whether it's someone who has done it on their own, someone can just install Ansible and do something, and then deploy a switch, but if they leave the company and there's no remediation, if it's not in the MOP, if it's not in the Method of Procedure, no one knows about it. So it has to be part of your, you want to keep all the things you have, all the good things you have today with your checks and balances in the networking, and the CIOs and the people at the top have to understand, you can keep all that stuff, but you have to buy in to the automation framework, and everyone has to be on board to understand how it fits in, in order to go from where you are today to where you want to be. >> At the show here, what's exciting your customers? You know, give us a little bit of a viewpoint for people that are checking out your stuff, what to expect. >> Well I think the one thing is they're not used to seeing, they think it's black magic, they think it's just magic.
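The "test it against virtual switches before it ever touches production" workflow described above can be wired into any CI system. The conversation mentions Jenkins; purely as one hedged illustration of the same shape in YAML, here is a GitHub Actions workflow that lints the playbooks and dry-runs them against a virtual lab inventory. The repository layout, inventory name and playbook path are assumptions.

```yaml
# Illustrative CI sketch: lint the playbooks, then dry-run them against
# a virtual lab inventory on every pull request, so a bad slash-16 never
# reaches a real edge router. Paths and names are placeholders.
name: network-config-ci
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible tooling
        run: pip install ansible ansible-lint
      - name: Lint the playbooks
        run: ansible-lint playbooks/
      - name: Dry-run against the virtual lab
        run: >
          ansible-playbook -i inventories/virtual-lab
          playbooks/site.yml --check --diff
```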
They're like, I can use the same things for everything? I say, yeah, you can. The development processes, the innovation in the community, you know, for example, if you want the Cisco ACI module, it's in GitHub, it's in Cisco's GitHub, you can just go ahead and do that. Now we're starting to migrate those things into core. So the more that we get innovation in the community, and that we have the vendors and the partners driving it, and you're seeing that today, you know, we have F5 here, we have Cisco, we have Juniper, we have Avi, all those people, you know, they have certified platforms with Ansible, Ansible Core, which is going to be integrated with Ansible Tower, we have full buy-in from them. They want to meet with us and say, how can we do better. How can we innovate with you to drive the next-gen data centers with our products. >> You talked about yourself as a boomerang employee, what is the value in that, and are you seeing a lot of colleagues who are bouncing around and then coming back from ... >> Absolutely, I think pre-acquisition Ansible, the vast majority of the people, I believe, were ex-Red Hatters that went to Ansible. So it's really nice to come back home, and the people that left, that came back, understand already what the, >> And people feel that way, it's a coming home? >> Yeah, it's a coming home, it really is. They understand, you know, they came back, they understood the values of open source and the culture, again, I started at Red Hat in 2003, I see the great things, I see new people getting hired and I see the same things I saw back then, 2003, 2004, with all the great things that people are doing, and the culture. You know, Jim's done a great job at keeping the culture how it is, even way back then when there were only 400 people when I started. >> Andrius, extend that culture, I think about the network community and open source, and you know, you talk about, there's risk there, and you know, you think about, I grew up with kind of an enterprise, infrastructure mentality, it's like, don't touch it, don't play with it. We always joked, I've got everything there, really don't walk by it, and definitely, you know, some zip tie or duct tape's going to come apart. Are we getting better, is networking embracing this? >> Yes, for sure. I think the nice thing is you start seeing these communities pop up. You're starting to see network operators and engineers, they've been historically, if they don't know the answer, they won't go find it. They kind of may be shy, shy to ask for help, per se. >> If it wasn't on their certification, >> Exactly. >> They weren't going to do it. >> If it wasn't there, I'm not going to go, we're bringing them into, so we have, whether it's a Slack instance, there are networking communities, network automation communities, just for network automation. And there's one, there's an Ansible channel on the Network to Code Slack, which has almost 800 people on it. So they're coming, and now they have a place, they have a safe place to ask questions. They don't have to kind of guess or say, you know what, I'm not going to do that. And now they have a safe place for network engineers, for network engineers to get into the NetDevOps space. >> Another one of the sort of sub-themes of this summit is people's data strategy, and customers and vendors, how they're dealing with the massive amounts of data that their customers are generating. What is your data strategy, and how are you using data? >> So there's two aspects here.
So the data can be the actual playbooks themselves, the actual, the golden master images, so you can pull configs from switches, and you can store them, and you can use them for continuous compliance. You can say, you know, a rogue engineer might make a change, you know, configuration drift happens. But you need to be able to make those comparisons to the other versions. So we're utilizing things like Git, so your data strategy can be in the cloud, it can be similar on your side, you can do Stash locally. For part of the operations piece, you can use that. A second piece is, log aggregation is a big piece of Ansible. So when you actually want to make sure that a change happens, that it's been successful, and that you want to ensure continuous compliance, all that data has to go somewhere, right? So you can utilize Ansible Tower as an aggregator, you can go off using the integrations like Splunk and some other log aggregation connectors with Ansible Tower to help utilize your data strategy with the partners that are really driving that, the people that know data and data structures, so we can use them. >> And one of the other issues is building the confidence to make decisions with all the data, are you working on that too with your team? >> Yes, we are working with that, and that's part of the larger Tower organization, so it goes beyond networking. So, whatever networking gets, everyone else gets. When we started developing Ansible Core and the community and Ansible Tower in-house, we think about networking and we think about Windows, that's a huge opportunity there, you know, we're talking about AWS in the cloud. So cloud instances, these are all endpoints that Ansible can manage, and it's not just networking, so we have to make sure that all of the pieces, all of the endpoints can be managed directly. Everyone benefits from that. >> Andrius, thank you so much for your time, we appreciate it. >> Thanks again for having me. >> I'm Rebecca Knight, for Stu Miniman, thank you very much for joining us. We'll be back after this.
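A minimal sketch of the "pull configs from switches, keep golden copies, and compare them for drift" idea from the data-strategy answer above. It assumes Cisco IOS devices and Ansible's ios_config module; the inventory group is a placeholder, and committing the resulting files to Git for diffing would be a separate step.

```yaml
# Snapshot running configurations so they can be versioned in Git and
# compared against the golden copies when configuration drift happens.
---
- name: Snapshot running configs for compliance checks
  hosts: access_switches          # assumed inventory group
  gather_facts: no
  connection: network_cli

  tasks:
    - name: Back up the running configuration
      ios_config:
        backup: yes               # writes a timestamped copy under ./backup/ by default
```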

Published Date : May 3 2017

SUMMARY :

Rebecca Knight and Stu Miniman talk with Andrius Benokraitis, Principal Product Manager for Ansible network automation at Red Hat, at Red Hat Summit 2017. He describes how network engineers are treating switches like servers with Ansible playbooks, starting with small, low-risk changes such as VLAN adds, testing against virtual network devices in CI, and storing device configs in Git for compliance, while stressing that organization-wide buy-in is the biggest hurdle.
