
Search Results for Knative:

Day 2 Keynote Analysis & Wrap | KubeCon + CloudNativeCon NA 2022


 

>>Set restaurants. And who says TEUs had got a little ass more skin in the game for us, in charge of his destiny? You guys are excited. Robert Worship is Chief Alumni. >>My name is Dave Ante, and I'm a long time industry analyst. So when you're as old as I am, you've seen a lot of transitions. Everybody talks about industry cycles and waves. I've seen many, many waves. Met a lot of industry executives and of a little bit of a, an industry historian. When you interview many thousands of people, probably five or 6,000 people as I have over the last half of a decade, you get to interact with a lot of people's knowledge and you begin to develop patterns. And so that's sort of what I bring is, is an ability to catalyze the conversation and, you know, share that knowledge with others in the community. Our philosophy is everybody's expert at something. Everybody's passionate about something and has real deep knowledge about that's something well, we wanna focus in on that area and extract that knowledge and share it with our communities. This is Dave Ante. Thanks for watching the Cube. >>Hello everyone and welcome back to the Cube where we are streaming live this week from CubeCon. I am Savannah Peterson and I am joined by an absolutely stellar lineup of cube brilliance this afternoon. To my left, a familiar face, Lisa Martin. Lisa, how you feeling? End of day two. >>Excellent. It was so much fun today. The buzz started yesterday, the momentum, the swell, and we only heard even more greatness today. >>Yeah, yeah, abs, absolutely. You know, I, I sometimes think we've hit an energy cliff, but it feels like the energy is just >>Continuous. Well, I think we're gonna, we're gonna slide right into tomorrow. >>Yeah, me too. I love it. And we've got two fantastic analysts with us today, Sarge and Keith. Thank you both for joining us. We feel so lucky today. >>Great being back on. >>Thanks for having us. Yeah, Yeah. It's nice to have you back on the show. We were, had you yesterday, but I miss hosting with you. It's been a while. >>It has been a while. We haven't done anything in since, Since pre >>Pandemic, right? Yeah, I think you're >>Right. Four times there >>Be four times back in the day. >>We, I always enjoy whole thing, Lisa, cuz she's so well prepared. I don't have to do any research when I come >>Home. >>Lisa will bring up some, Oh, sorry. Jeep, I see that in 2008 you won this award for Yeah. Being just excellent and I, I'm like, Oh >>Yeah. All right Keith. So, >>So did you do his analysis? >>Yeah, it's all done. Yeah. Great. He only part, he's not sitting next to me too. We can't see it, so it's gonna be like a magic crystal bell. Right. So a lot of people here. You got some stats in terms of the attendees compared >>To last year? Yeah, Priyanka told us we were double last year up to 8,000. We also got the scoop earlier that 2023 is gonna be in Chicago, which is very exciting. >>Oh, that is, is nice. Yeah, >>We got to break that here. >>Excellent. Keith, talk to us about what some of the things are that you've seen the last couple of days. The momentum. What's the vibe? I saw your tweet about the top three things you were being asked. Kubernetes was not one of them. >>Kubernetes were, was not one of 'em. This conference is starting to, it, it still feels very different than a vendor conference. The keynote is kind of, you know, kind of all over the place talking about projects, but the hallway track has been, you know, I've, this is maybe my fifth or sixth CU con in person. 
And the hallway track is different. It's less about projects and more about how, how do we adjust to the enterprise? How do we Yes. Actually do enterprise things. And it has been amazing watching this community grow. I'm gonna say grow up and mature. Yes. You know, you know, they're not wearing ties yet, but they are definitely understanding kind of the, the friction of implementing new technology in, in an enterprise. >>Yeah. So ge what's your, what's been your take, We were with you yesterday. What's been the take today to take aways? >>NOMA has changed since yesterday, but a few things I think I, I missed talking about that yesterday were that, first of all, let's just talk about Amazon. Amazon earnings came out, it spooked the market and I think it's relevant in this context as well, because they're number one cloud provider. Yeah. And all, I mean, almost all of these technologies on the back of us here, they are related to cloud, right? So it will have some impact on these. Like we have to analyze that. Like will it make the open source go faster or slower in, in lieu of the fact that the, the cloud growth is slowing. Right? So that's, that's one thing that's put that's put that aside. I've been thinking about the, the future of Kubernetes. What is the future of Kubernetes? And in that context, I was thinking like, you know, I think in, when I put a pointer there, I think in tangents, like, what else is around this thing? So I think CN CNCF has been writing the success of Kubernetes. They are, that was their number one flagship project, if you will. And it was mature enough to stand on its own. It it was Google, it's Google's Borg dub da Kubernetes. It's a genericized version of that. Right? So folks who do tech deep down, they know that, Right. So I think it's easier to stand with a solid, you know, project. But when the newer projects come in, then your medal will get tested at cncf. Right. >>And cncf, I mean they've got over 140 projects Yeah. Right now. So there's definitely much beyond >>Kubernetes. Yeah. So they, I have numbers there. 18 graduated, right, 37 in incubation and then 81 in Sandbox stage. They have three stages, right. So it's, they have a lot to chew on and the more they take on, the less, you know, quality you get goes into it. Who is, who's putting the money behind it? Which vendors are sponsoring like cncf, like how they're getting funded up. I think it >>Something I pay attention to as well. Yeah. Yeah. Lisa, I know you've got >>Some insight. Those are the things I was thinking about today. >>I gotta ask you, what's your take on what Keith said? Are you also seeing the maturation of the enterprise here at at coupon? >>Yes, I am actually, when you say enterprise versus what's the other side? Startups, right? Yeah. So startups start using open source a lot more earlier or lot more than enterprises. The enterprise is what they need. Number one thing is the, for their production workloads, they want a vendor sporting them. I said that yesterday as well, right? So it depend depending on the size of the enterprise. If you're a big shop, definitely if you have one of the 500 or Fortune five hundreds and your tech savvy shop, then you can absorb the open source directly coming from the open source sort of universe right. Coming to you. But if you are the second tier of enterprise, you want to go to a provider which is managed service provider, or it can be cloud service provider in this case. Yep. 
Most of the cloud service providers have multiple versions of Kubernetes, for example. >>I'm not talking about Kubernetes only, but like, but that is one example, right? So at Amazon you can get five different flavors of Kubernetes, right? Fully manage, have, manage all kind of stuff. So people don't have bandwidth to manage that stuff locally. You have to patch it, you have to roll in the new, you know, updates and all that stuff. Like, it's a lot of work for many. So CNCF actually is formed for that reason. Like the, the charter is to bring the quality to open source. Like in other companies they have the release process and they, the stringent guidelines and QA and all that stuff. So is is something ready for production? That's the question when it comes to any software, right? So they do that kind of work and, and, and they have these buckets defined at high level, but it needs more >>Work. Yeah. So one of the things that, you know, kind of stood out to me, I have good friend in the community, Alex Ellis, who does open Fast. It's a serverless platform, great platform. Two years ago or in 2019, there was a serverless day date. And in serverless day you had K Native, you had Open Pass, you had Ws, which is supported by IBM completely, not CNCF platforms. K native came into the CNCF full when Google donated the project a few months ago or a couple of years ago, now all of a sudden there's a K native day. Yes. Not a serverless day, it's a K native day. And I asked the, the CNCF event folks like, what happened to Serverless Day? I missed having open at serverless day. And you know, they, they came out and said, you know what, K native got big enough. >>They came in and I think Red Hat and Google wanted to sponsor a K native day. So serverless day went away. So I think what what I'm interested in and over the next couple of years is, is they're gonna be pushback from the C against the cncf. Is the CNCF now too big? Is it now the gatekeeper for do I have to be one of those 147 projects, right? In order enough to get my project noticed the open, fast, great project. I don't think Al Alex has any desire to have his project hosted by cncf, but it probably deserves, you know, shoulder left recognition with that. So I'm pushing to happen to say, okay, if this is open community, this is open source. If CNC is the place to have the cloud native conversation, what about the projects that's not cncf? Like how do we have that conversation when we don't have the power of a Google right. Or a, or a Lenox, et cetera, or a Lenox Foundation. So GE what, >>What are your thoughts on that? Is, is CNC too big? >>I don't think it's too big. I think it's too small to handle the, what we are doing in open source, right? So it's a bottle. It can become a bottleneck. Okay. I think too big in a way that yeah, it has, it has, it has power from that point of view. It has that cloud, if you will. The people listen to it. If it's CNCF project or this must be good, it's like in, in incubators. Like if you are y white Combinator, you know, company, it must be good. You know, I mean, may not be >>True, but, >>Oh, I think there's a bold assumption there though. I mean, I think everyone's just trying to do the best they can. And when we're evaluating projects, a very different origin and background, it's incredibly hard. Very c and staff is a staff of 30 people. They've got 180,000 people that are contributing to these projects and a thousand maintainers that they're trying to uphold. 
I think the challenge is actually really great. And to me, I actually look at events as an illustration of, you know, what's the culture and the health of an organization. If I were to evaluate CNCF based on that, I'd say we're very healthy right now. I would say that we're in a good spot. There's a lot of momentum. >>Yeah. I, I think CNCF is very healthy. I'm, I'm appreciative for it being here. I love coupon. It's becoming the, the facto conference to have this conversation has >>A totally >>Different vibe to other, It's a totally different vibe. Yeah. There needs to be a conduit and truth be told, enterprise buyers, to subject's point, this is something that we do absolutely agree on, on enterprise buyers. We want someone to pick winners and losers. We do, we, we don't want a box of Lego dumped on our, the middle of our table. We want somebody to have sorted that out. So while there may be five or six different service mesh solutions, at least the cncf, I can go there and say, Oh, I'll pick between the three or four that are most popular. And it, it's a place to curate. But I think with that curation comes the other side of it. Of how do we, how, you know, without the big corporate sponsor, how do I get my project pushed up? Right? Elevated. Elevated, Yep. And, and put onto the show floor. You know, another way that projects get noticed is that startups will adopt them, Push them. They may not even be, I don't, my CNCF project may not, my product may not even be based on the CNCF product. But the new stack has a booth, Ford has a booth. Nothing to do with a individual prod up, but promoting open source. What happens when you're not sponsored? >>I gotta ask you guys, what do you disagree on? >>Oh, so what, what do we disagree on? So I'm of the mindset, I can, I can say this, I I believe hybrid infrastructure is the future of it. Bar none. If I built my infrastructure, if I built my application in the cloud 10 years ago and I'm still building net new applications, I have stuff that I built 10 years ago that looks a lot like on-prem, what do I do with it? I can't modernize it cuz I don't have the developers to do it. I need to stick that somewhere. And where I'm going to stick that at is probably a hybrid infrastructure. So colo, I'm not gonna go back to the data center, but I'm, I'm gonna look, pick up something that looks very much like the data center and I'm saying embrace that it's the future. And if you're Boeing and you have, and Boeing is a member, cncf, that's a whole nother topic. If you have as 400 s, hpu X, et cetera, stick that stuff. Colo, build new stuff, but, and, and continue to support OpenStack, et cetera, et cetera. Because that's the future. Hybrid is the future. >>And sub g agree, disagree. >>I okay. Hybrid. Nobody can deny that the hybrid is the reality, not the future. It's a reality right now. It's, it's a necessity right now you can't do without it. Right. And okay, hybrid is very relative term. You can be like 10% here, 90% still hybrid, right? So the data center is shrinking and it will keep shrinking. Right? And >>So if by whole is the data center shrinking? >>This is where >>Quick one quick getting guys for it. How is growing by a clip? Yeah, but there's no data supporting. David Lym just came out for a report I think last year that showed that the data center is holding steady, holding steady, not growing, but not shrinking. >>Who sponsored that study? Wait, hold on. So the, that's a question, right? So more than 1 million data centers have been closed. 
I have, I can dig that through number through somebody like some organizations we published that maybe they're cloud, you know, people only. So the, when you get these kind of statements like it, it can be very skewed statements, right. But if you have seen the, the scene out there, which you have, I know, but I have also seen a lot of data centers walk the floor of, you know, a hundred thousand servers in a data center. I cannot imagine us consuming the infrastructure the way we were going into the future of co Okay. With, with one caveat actually. I am not big fan of like broad strokes. Like make a blanket statement. Oh no, data center's dead. Or if you are, >>That's how you get those esty headlines now. Yeah, I know. >>I'm all about to >>Put a stake in the ground. >>Actually. The, I think that you get more intelligence from the new end, right? A small little details if you will. If you're golden gold manak or Bank of America, you have so many data centers and you will still have data centers because performance matters to you, right? Your late latency matters for applications. But if you are even a Fortune 500 company on the lower end and or a healthcare vertical, right? That your situation is different. If you are a high, you know, growth startup, your situation is different, right? You will be a hundred percent cloud. So cloud gives you velocity, the, the, the pace of change, the pace of experimentation that actually you are buying innovation through cloud. It's proxy for innovation. And that's how I see it. But if you have, if you're stuck with older applications, I totally understand. >>Yeah. So the >>We need that OnPrem. Yeah, >>Well I think the, the bring your fuel sober, what we agree is that cloud is the place where innovation happens. Okay? At some point innovation becomes legacy debt and you have thus hybrid, you are not going to keep your old applications up to date forever. The, the, the math just doesn't add up. And where I differ in opinion is that not everyone needs innovation to keep moving. They need innovation for a period of time and then they need steady state. So Sergeant, we >>Argue about this. I have a, I >>Love this debate though. I say it's efficiency and stability also plays an important role. I see exactly what you're talking about. No, it's >>Great. I have a counter to that. Let me tell you >>Why. Let's >>Hear it. Because if you look at the storage only, right? Just storage. Just take storage computer network for, for a minute. There three cost reps in, in infrastructure, right? So storage earlier, early on there was one tier of storage. You say pay the same price, then now there are like five storage tiers, right? What I'm trying to say is the market sets the price, the market will tell you where this whole thing will go, but I know their margins are high in cloud, 20 plus percent and margin will shrink as, as we go forward. That means the, the cloud will become cheaper relative to on-prem. It, it, in some cases it's already cheaper. But even if it's a stable workload, even in that case, we will have a lower tier of service. I mean, you, you can't argue with me that the cloud versus your data center, they are on the same tier of services. Like cloud is a better, you know, product than your data center. Hands off. >>I love it. We, we are gonna relish in the debates between the two of you. Mic drops. The energy is great. I love it. Perspective. 
It's not like any of us can quite see through the crystal ball that we have very informed opinions, which is super exciting. Yeah. Lisa, any last thoughts today? >>Just love, I love the debate as well. That, and that's, that's part of what being in this community is all about. So sharing about, sharing opinions, expressing opinions. That's how it grows. That's how, that's how we innovate. Yeah. Obviously we need the cloud, but that's how we innovate. That's how we grow. Yeah. And we've seen that demonstrated the last couple days and I and your, your takes here on the Cuban on Twitter. Brilliant. >>Thank you. I absolutely love it. I'm gonna close this out with a really important analysis on the swag of the show. Yes. And if you know, yesterday we were looking at what is the weirdest swag or most unique swag We had that bucket hat that took the grand prize. Today we're gonna focus on something that's actually quite cool. A lot of the vendors here have really dedicated their swag to being local to Detroit. Very specific in their sourcing. Sonotype here has COOs. They're beautiful. You can't quite feel this flannel, but it's very legit hand sound here in Michigan. I can't say that I've been to too many conferences, if any, where there was this kind of commitment to localizing and sourcing swag from around the corner. We also see this with the Intel booth. They've got screen printers out here doing custom hoodies on spot. >>Oh fun. They're even like appropriately sized. They had local artists do these designs and if you're like me and you care about what's on your wrist, you're familiar with Shinola. This is one of my favorite swags that's available. There is a contest. Oh going on. Hello here. Yeah, so if you are Atan, make sure that you go and check this out. The we, I talked about this on the show. We've had the founder on the show or the CEO and yeah, I mean Shine is just full of class as since we are in Detroit as well. One of the fun themes is cars. >>Yes. >>And Storm Forge, who are also on the show, is actually giving away an Aston Martin, which is very exciting. Not exactly manufactured in Detroit. However, still very cool on the car front and >>The double oh seven version named the best I >>Know in the sixties. It's love it. It's very cool. Two quick last things. We talk about it a lot on the show. Every company now wants to be a software company. Yep. On that vein, and keeping up with my hat theme, the Home Depot is here because they want everybody to know that they in fact are a technology company, which is very cool. They have over 500,000 employees. You can imagine there's a lot of technology that has to go into keeping Napa. Absolutely. Yep. Wild to think about. And then last, but not at least very quick, rapid fire, best t-shirt contest. If you've ever ran to one of these events, there are a ton of T-shirts out there. I rate them on two things. Wittiest line and softness. If you combine the two, you'll really be our grand champion for the year. I'm just gonna hold these up and set them down for your laughs. Not afraid to commit, which is pretty great. This is another one designed by locals here. Detroit Code City. Oh, love it. This one made me chuckle the most. Kiss my cash. >>Oh, that's >>Good. These are also really nice and soft, which is fantastic. Also high on the softness category is this Op Sarah one. I also like their bird logo. These guys, there's just, you know, just real nice touch. So unfortunately, if you have the fumble, you're not here with us, live in Detroit. 
At least you're gonna get taste of the swag. I taste of the stories and some smiles hear from those of us on the cube. Thank you both so much for being here with us. Lisa, thanks for another fabulous day. Got it, girl. My name's Savannah Peterson. Thank you for joining us from Detroit. We're the cube and we can't wait to see you tomorrow.
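A quick aside on the CNCF maturity stages quoted in the conversation above (18 graduated, 37 incubating, 81 sandbox projects): the sketch below simply turns those quoted counts into proportions. Treat the numbers as a snapshot from the interview, not as current CNCF data, and the code as an illustration rather than anything taken from the show itself.

```python
# Maturity-stage counts as quoted in the conversation above; CNCF's real
# project list changes over time, so this is a point-in-time illustration.
stages = {"graduated": 18, "incubating": 37, "sandbox": 81}

total = sum(stages.values())
for stage, count in stages.items():
    print(f"{stage:>10}: {count:3d} projects ({count / total:5.1%})")
# Sandbox projects account for roughly 60% of the total, which is the
# "a lot to chew on" point being made about the CNCF pipeline.
```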

Published Date : Oct 28 2022

SUMMARY :

Hosts Savannah Peterson and Lisa Martin close out day two of KubeCon + CloudNativeCon NA 2022 in Detroit with analysts Keith and Sarge. They note that attendance roughly doubled from last year to about 8,000, that KubeCon 2023 will be held in Chicago, and that the hallway conversations have shifted from individual projects toward how enterprises actually adopt and operate this technology. The group weighs whether the CNCF, with 18 graduated, 37 incubating, and 81 sandbox projects, risks becoming a gatekeeper for projects outside its umbrella, and debates hybrid infrastructure versus an all-in cloud posture. The segment wraps with a tour of the Detroit-themed swag on the show floor.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lenox | ORGANIZATION | 0.99+
Boeing | ORGANIZATION | 0.99+
Priyanka | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
five | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
Alex Ellis | PERSON | 0.99+
Keith | PERSON | 0.99+
David Lym | PERSON | 0.99+
Chicago | LOCATION | 0.99+
Detroit | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
2008 | DATE | 0.99+
Michigan | LOCATION | 0.99+
Sarge | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
Ford | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
30 people | QUANTITY | 0.99+
Dave Ante | PERSON | 0.99+
four | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
last year | DATE | 0.99+
CNCF | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
Home Depot | ORGANIZATION | 0.99+
2019 | DATE | 0.99+
Lenox Foundation | ORGANIZATION | 0.99+
today | DATE | 0.99+
two | QUANTITY | 0.99+
37 | QUANTITY | 0.99+
one tier | QUANTITY | 0.99+
147 projects | QUANTITY | 0.99+
second tier | QUANTITY | 0.99+
180,000 people | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
KubeCon | EVENT | 0.99+
81 | QUANTITY | 0.99+
Today | DATE | 0.99+
over 500,000 employees | QUANTITY | 0.99+
Two years ago | DATE | 0.99+
18 | QUANTITY | 0.99+
Robert Worship | PERSON | 0.99+
Jeep | ORGANIZATION | 0.99+
Lego | ORGANIZATION | 0.99+
Bank of America | ORGANIZATION | 0.98+
Kubernetes | TITLE | 0.98+
Four times | QUANTITY | 0.98+
10 years ago | DATE | 0.98+
6,000 people | QUANTITY | 0.98+
GE | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
five storage tiers | QUANTITY | 0.98+
sixth | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.98+

KubeCon + CloudNativeCon 2022 Preview w/ @Stu


 

>>Keon Cloud Native Con kicks off in Detroit on October 24th, and we're pleased to have Stewart Miniman, who's the director of Market Insights, hi, at, for hybrid platforms at Red Hat back in the studio to help us understand the key trends to look for at the events. Do welcome back, like old, old, old >>Home. Thank you, David. It's great to, great to see you and always love doing these previews, even though Dave, come on. How many years have I told you Cloud native con, It's a hoodie crowd. They're gonna totally call you out for where in a tie and things like that. I, I know you want to be an ESPN sportscaster, but you know, I I, I, I still don't think even after, you know, this show's been around for so many years that there's gonna be too many ties into Troy. I >>Know I left the hoodie in my off, I'm sorry folks, but hey, we'll just have to go for it. Okay. Containers generally, and Kubernetes specifically continue to show very strong spending momentum in the ETR survey data. So let's bring up this slide that shows the ETR sectors, all the sectors in the tax taxonomy with net score or spending velocity in the vertical axis and pervasiveness on the horizontal axis. Now, that red dotted line that you see, that marks the elevated 40% mark, anything above that is considered highly elevated in terms of momentum. Now, for years, the big four areas of momentum that shine above all the rest have been cloud containers, rpa, and ML slash ai for the first time in 10 quarters, ML and AI and RPA have dropped below the 40% line, leaving only cloud and containers in rarefied air. Now, Stu, I'm sure this data doesn't surprise you, but what do you make of this? >>Yeah, well, well, Dave, I, I did an interview with at Deepak who owns all the container and open source activity at Amazon earlier this year, and his comment was, the default deployment mechanism in Amazon is containers. So when I look at your data and I see containers and cloud going in sync, yeah, that, that's, that's how we see things. We're helping lots of customers in their overall adoption. And this cloud native ecosystem is still, you know, we're still in that Cambridge explosion of new projects, new opportunities, AI's a great workload for these type type of technologies. So it's really becoming pervasive in the marketplace. >>And, and I feel like the cloud and containers go hand in hand, so it's not surprising to see those two above >>The 40%. You know, there, there's nothing to say that, Look, can I run my containers in my data center and not do the public cloud? Sure. But in the public cloud, the default is the container. And one of the hot discussions we've been having in this ecosystem for a number of years is edge computing. And of course, you know, I want something that that's small and lightweight and can do things really fast. A lot of times it's an AI workload out there, and containers is a great fit at the edge too. So wherever it goes, containers is a good fit, which has been keeping my group at Red Hat pretty busy. >>So let's talk about some of those high level stats that we put together and preview for the event. So it's really around the adoption of open source software and Kubernetes. Here's, you know, a few fun facts. So according to the state of enterprise open source report, which was published by Red Hat, although it was based on a blind survey, nobody knew that that Red Hat was, you know, initiating it. 80% of IT execs expect to increase their use of enterprise open source software. 
Now, the CNCF community has currently more than 120,000 developers. That's insane when you think about that developer resource. 73% of organizations in the most recent CNCF annual survey are using Kubernetes. Now, despite the momentum, according to that same Red Hat survey, adoption barriers remain for some organizations. Stu, I'd love you to talk about this specifically around skill sets, and then we've highlighted some of the other trends that we expect to see at the event around Stu. I'd love to, again, your, get your thoughts on the preview. You've done a number of these events, automation, security, governance, governance at scale, edge deployments, which you just mentioned among others. Now Kubernetes is eight years old, and I always hear people talking about there's something coming beyond Kubernetes, but it looks like we're just getting started. Yeah, >>Dave, It, it is still relatively early days. The CMC F survey, I think said, you know, 96% of companies when they, when CMC F surveyed them last year, were either deploying Kubernetes or had plans to deploy it. But when I talked to enterprises, nobody has said like, Hey, we've got every group on board and all of our applications are on. It is a multi-year journey for most companies and plenty of them. If you, you look at the general adoption of technology, we're still working through kind of that early majority. We, you know, passed the, the chasm a couple of years ago. But to a point, you and I we're talking about this ecosystem, there are plenty of people in this ecosystem that could care less about containers and Kubernetes. Lots of conversations at this show won't even talk about Kubernetes. You've got, you know, big security group that's in there. >>You've got, you know, certain workloads like we talked about, you know, AI and ml and that are in there. And automation absolutely is playing a, a good role in what's going on here. So in some ways, Kubernetes kind of takes a, a backseat because it is table stakes at this point. So lots of people involved in it, lots of activities still going on. I mean, we're still at a cadence of three times a year now. We slowed it down from four times a year as an industry, but there's, there's still lots of innovation happening, lots of adoption, and oh my gosh, Dave, I mean, there's just no shortage of new projects and new people getting involved. And what's phenomenal about it is there's, you know, end user practitioners that aren't just contributing. But many of the projects were spawned out of work by the likes of Intuit and Spotify and, and many others that created some of the projects that sit alongside or above the, the, you know, the container orchestration itself. >>So before we talked about some of that, it's, it's kind of interesting. It's like Kubernetes is the big dog, right? And it's, it's kind of maturing after, you know, eight years, but it's still important. I wanna share another data point that underscores the traction that containers generally are getting in Kubernetes specifically have, So this is data from the latest ETR survey and shows the spending breakdown for Kubernetes in the ETR data set for it's cut for respondents with 50 or more citations in, in by the IT practitioners that lime green is new adoptions, the forest green is spending 6% or more relative to last year. The gray is flat spending year on year, and those little pink bars, that's 6% or down spending, and the bright red is retirements. So they're leaving the platform. 
And the blue dots are net score, which is derived by subtracting the reds from the greens. And the yellow dots are pervasiveness in the survey relative to the sector. So the big takeaway here is that there is virtually no red, essentially zero churn across all sectors, large companies, public companies, private firms, telcos, finance, insurance, et cetera. So again, sometimes I hear this things beyond Kubernetes, you've mentioned several, but it feels like Kubernetes is still a driving force, but a lot of other projects around Kubernetes, which we're gonna hear about at the show. >>Yeah. So, so, so Dave, right? First of all, there was for a number of years, like, oh wait, you know, don't waste your time on, on containers because serverless is gonna rule the world. Well, serverless is now a little bit of a broader term. Can I do a serverless viewpoint for my developers that they don't need to think about the infrastructure but still have containers underneath it? Absolutely. So our friends at Amazon have a solution called Fargate, their proprietary offering to kind of hide that piece of it. And in the open source world, there's a project called Can Native, I think it's the second or third can Native Con's gonna happen at the cncf. And even if you use this, I can still call things over on Lambda and use some of those functions. So we know Dave, it is additive and nothing ever dominates the entire world and nothing ever dies. >>So we have, we have a long runway of activities still to go on in containers and Kubernetes. We're always looking for what that next thing is. And what's great about this ecosystem is most of it tends to be additive and plug into the pieces there, there's certain tools that, you know, span beyond what can happen in the container world and aren't limited to it. And there's others that are specific for it. And to talk about the industries, Dave, you know, I love, we we have, we have a community event that we run that's gonna happen at Cubans called OpenShift Commons. And when you look at like, who's speaking there? Oh, we've got, you know, for Lockheed Martin, University of Michigan and I g Bank all speaking there. So you look and it's like, okay, cool, I've got automotive, I've got, you know, public sector, I've got, you know, university education and I've got finance. So all of you know, there is not an industry that is not touched by this. And the general wave of software adoption is the reason why, you know, not just adoption, but the creation of new software is one of the differentiators for companies. And that is what, that's the reason why I do containers, isn't because it's some cool technology and Kubernetes is great to put on my resume, but that it can actually accelerate my developers and help me create technology that makes me respond to my business and my ultimate end users. Well, >>And you know, as you know, we've been talking about the Supercloud a lot and the Kubernetes is clearly enabler to, to Supercloud, but I wanted to go back, you and John Furrier have done so many of, you know, the, the cube cons, but but go back to Docker con before Kubernetes was even a thing. And so you sort of saw this, you know, grow. I think there's what, how many projects are in CNCF now? I mean, hundreds. Hundreds, okay. And so you're, Will we hear things in Detroit, things like, you know, new projects like, you know, Argo and capabilities around SI store and things like that? Well, you're gonna hear a lot about that. Or is it just too much to cover? 
>>So I, I mean the, the good news, Dave, is that the CNCF really is, is a good steward for this community and new things got in get in. So there's so much going on with the existing projects that some of the new ones sometimes have a little bit of a harder time making a little bit of buzz. One of the more interesting ones is a project that's been around for a while that I think back to the first couple of Cube Cuban that John and I did service Mesh and Istio, which was created by Google, but lived under basically a, I guess you would say a Google dominated governance for a number of years is now finally under the CNCF Foundation. So I talked to a number of companies over the years and definitely many of the contributors over the years that didn't love that it was a Google Run thing, and now it is finally part. >>So just like Kubernetes is, we have SEO and also can Native that I mentioned before also came outta Google and those are all in the cncf. So will there be new projects? Yes. The CNCF is sometimes they, they do matchmaking. So in some of the observability space, there were a couple of projects that they said, Hey, maybe you can go merge down the road. And they ended up doing that. So there's still you, you look at all these projects and if I was an end user saying, Oh my God, there is so much change and so many projects, you know, I can't spend the time in the effort to learn about all of these. And that's one of the challenges and something obviously at Red Hat, we spend a lot of time figuring out, you know, not to make winners, but which are the things that customers need, Where can we help make them run in production for our, our customers and, and help bring some stability and a little bit of security for the overall ecosystem. >>Well, speaking of security, security and, and skill sets, we've talked about those two things and they sort of go hand in hand when I go to security events. I mean, we're at reinforced last summer, we were just recently at the CrowdStrike event. A lot of the discussion is sort of best practice because it's so complicated. And, and, and will you, I presume you're gonna hear a lot of that here because security securing containers now, you know, the whole shift left thing and shield right is, is a complicated matter, especially when you saw with the earlier data from the Red Hat survey, the the gaps are around skill sets. People don't have the skill. So should we expect to hear a lot about that, A lot of sort of how to, how to take advantage of some of these new capabilities? >>Yeah, Dave, absolutely. So, you know, one of the conversations going on in the community right now is, you know, has DevOps maybe played out as we expect to see it? There's a newer term called platform engineering, and how much do I need to do there? Something that I, I know your, your team's written a lot about Dave, is how much do you need to know versus what can you shift to just a platform or a service that I can consume? I've talked a number of times with you since I've been at Red Hat about the cloud services that we offer. So you want to use our offering in the public cloud. Our first recommendation is, hey, we've got cloud services, how much Kubernetes do you really want to learn versus you want to do what you can build on top of it, modernize the pieces and have less running the plumbing and electric and more, you know, taking advantage of the, the technologies there. 
So that's a big thing we've seen, you know, we've got a big SRE team that can manage that for use so that you have to spend less time worrying about what really is un differentiated heavy lifting and spend more time on what's important to your business and your >>Customers. So, and that's, and that's through a managed service. >>Yeah, absolutely. >>That whole space is just taken off. All right, Stu I'll give you the final word. You know, what are you excited about for, for, for this upcoming event and Detroit? Interesting choice of venue? Yeah, >>Look, first of off, easy flight. I've, I've never been to Detroit, so I'm, I'm willing to give it a shot and hopefully, you know, that awesome airport. There's some, some, some good things there to learn. The show itself is really a choose your own adventure because there's so much going on. The main show of QAN and cloud Native Con is Wednesday through Friday, but a lot of a really interesting stuff happens on Monday and Tuesday. So we talked about things like OpenShift Commons in the security space. There's cloud Native Security Day, which is actually two days and a SIG store event. There, there's a get up show, there's, you know, k native day. There's so many things that if you want to go deep on a topic, you can go spend like a workshop in some of those you can get hands on to. And then at the show itself, there's so much, and again, you can learn from your peers. >>So it was good to see we had, during the pandemic, it tilted a little bit more vendor heavy because I think most practitioners were pretty busy focused on what they could work on and less, okay, hey, I'm gonna put together a presentation and maybe I'm restricted at going to a show. Yeah, not, we definitely saw that last year when I went to LA I was disappointed how few customer sessions there were. It, it's back when I go look through the schedule now there's way more end users sharing their stories and it, it's phenomenal to see that. And the hallway track, Dave, I didn't go to Valencia, but I hear it was really hopping felt way more like it was pre pandemic. And while there's a few people that probably won't come because Detroit, we think there's, what we've heard and what I've heard from the CNCF team is they are expecting a sizable group up there. I know a lot of the hotels right near the, where it's being held are all sold out. So it should be, should be a lot of fun. Good thing I'm speaking on an edge panel. First time I get to be a speaker at the show, Dave, it's kind of interesting to be a little bit of a different role at the show. >>So yeah, Detroit's super convenient, as I said. Awesome. Airports too. Good luck at the show. So it's a full week. The cube will be there for three days, Tuesday, Wednesday, Thursday. Thanks for coming. >>Wednesday, Thursday, Friday, sorry, >>Wednesday, Thursday, Friday is the cube, right? So thank you for that. >>And, and no ties from the host, >>No ties, only hoodies. All right Stu, thanks. Appreciate you coming in. Awesome. And thank you for watching this preview of CubeCon plus cloud Native Con with at Stu, which again starts the 24th of October, three days of broadcasting. Go to the cube.net and you can see all the action. We'll see you there.
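One concrete detail from the ETR discussion above is how net score is computed: the share of respondents adopting the platform or increasing spend minus the share decreasing spend or retiring it. The sketch below is a minimal illustration of that arithmetic; the bucket percentages are invented for the example and are not actual ETR survey figures.

```python
# Net score per the definition above: respondents adopting or increasing
# spend minus respondents decreasing spend or leaving the platform.
# All inputs are percentages of respondents; the numbers below are made up.
def net_score(new_adoption, spend_up, flat, spend_down, retiring):
    total = new_adoption + spend_up + flat + spend_down + retiring
    assert abs(total - 100.0) < 1e-6, "buckets should cover all respondents"
    return (new_adoption + spend_up) - (spend_down + retiring)

# A sector with strong adoption and essentially zero churn, as described above.
print(net_score(new_adoption=22, spend_up=38, flat=37, spend_down=2, retiring=1))  # 57
```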

Published Date : Oct 4 2022

SUMMARY :

Dave previews KubeCon + CloudNativeCon NA 2022, which kicks off in Detroit on October 24th, with Stu Miniman, director of Market Insights for hybrid platforms at Red Hat. ETR survey data shows cloud and containers as the only sectors still above the elevated 40% net-score line, with essentially zero churn for Kubernetes across industries. They discuss CNCF momentum (more than 120,000 developers, 73% of surveyed organizations using Kubernetes), the skills gaps that remain as adoption barriers, Istio finally moving under CNCF governance, the rise of platform engineering and managed cloud services, and the co-located events such as OpenShift Commons and Cloud Native Security Day that run Monday and Tuesday before the main show on Wednesday through Friday.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
David | PERSON | 0.99+
Lockheed Martin | ORGANIZATION | 0.99+
6% | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Detroit | LOCATION | 0.99+
50 | QUANTITY | 0.99+
CNCF | ORGANIZATION | 0.99+
October 24th | DATE | 0.99+
40% | QUANTITY | 0.99+
Stewart Miniman | PERSON | 0.99+
Friday | DATE | 0.99+
Google | ORGANIZATION | 0.99+
96% | QUANTITY | 0.99+
two days | QUANTITY | 0.99+
University of Michigan | ORGANIZATION | 0.99+
Stu | PERSON | 0.99+
CMC F | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Tuesday | DATE | 0.99+
John | PERSON | 0.99+
Wednesday | DATE | 0.99+
eight years | QUANTITY | 0.99+
Monday | DATE | 0.99+
last year | DATE | 0.99+
three days | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
second | QUANTITY | 0.99+
73% | QUANTITY | 0.99+
Thursday | DATE | 0.99+
LA | LOCATION | 0.99+
more than 120,000 developers | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
Hundreds | QUANTITY | 0.99+
first time | QUANTITY | 0.99+
two | QUANTITY | 0.99+
24th of October | DATE | 0.99+
one | QUANTITY | 0.98+
KubeCon | EVENT | 0.98+
CubeCon | EVENT | 0.98+
CNCF Foundation | ORGANIZATION | 0.98+
cube.net | OTHER | 0.98+
last summer | DATE | 0.98+
Valencia | LOCATION | 0.98+
third | QUANTITY | 0.98+
Spotify | ORGANIZATION | 0.98+
Intuit | ORGANIZATION | 0.98+
last year | DATE | 0.98+
One | QUANTITY | 0.98+
cloud Native Security Day | EVENT | 0.97+
Kubernetes | TITLE | 0.97+
QAN | EVENT | 0.97+
ESPN | ORGANIZATION | 0.97+

Clayton Coleman, Red Hat | KubeCon + CloudNativeCon NA 2021


 

>>welcome back everyone to the cube con cloud, David Kahn coverage. I'm john for a host of the cube, we're here in person, 2020 20 a real event, it's a hybrid event, we're streaming live to you with all the great coverage and guests coming on next three days. Clayton Coleman's chief Hybrid cloud architect for Red Hat is joining me here to go over viewers talk but also talk about hybrid cloud. Multi cloud where it's all going road red hats doing great to see you thanks coming on. It's a pleasure to be >>back. It's a pleasure to be back in cuba con. >>Uh it's an honor to have you on as a chief architect at Red Hat on hybrid cloud. It is the hottest area in the market right now. The biggest story we were back in person. That's the biggest story here. The second biggest story, that's the most important story is hybrid cloud. And what does it mean for multi cloud, this is a key trend. You just gave a talk here. What's your take on it? You >>know, I, I like to summarize hybrid cloud as the answer to. It's really the summarization of yes please more of everything, which is, we don't have one of anything. Nobody has got any kind of real footprint is single cloud. They're not single framework, they're not single language, they're not single application server, they're not single container platform, they're not single VM technology. And so, um, and then, you know, looking around here in this, uh, partner space where eight years into kubernetes and there is an enormous ecosystem of tools, technologies, capabilities, add ons, plug ins components that make our applications better. Um the modern application landscape is so huge that I think that's what hybrid really is is it's we've got all these places to run stuff more than ever and we've got all this stuff to run more than ever and it doesn't slow down. So how do we bring sanity to that? How do we understand it? Bring it together and companies has been a big part of that, like it unlocked some of that. What's the next step? >>Yeah, that's a great, great commentary. I want to take into the kubernetes piece but you know, as we've been reporting the digital transformation at all time, high speed is the number one request. People want to go faster, not just speeds and feeds, but like ship code fast to build apps faster. Make it all run faster and secure. Okay, check, get that. Look what we were 15, 15 years ago, 10 years ago, five years ago, 2016. The first coupe con in Seattle we were there for small events kubernetes, we gotta sell it, figure it out. Right convince people >>that it's a it's worth >>it. Yeah. So what's your take on that? Well, I mean, it's mature, it's kind of de facto standard at this point. What's missing. Where is it? >>So I think Kubernetes has succeeded at the core mission which is helping us stop worrying about all the problems that we spent endless amounts of time arguing about, how do I deploy software, How do I roll it out? But in the meantime we've added more types of software. You know, the rise of ai ml um you know, the whole the whole ecosystem around training software models like what is a what is an Ai model? Is it look like an application, does it look like a job? It's part batch, part service. Um It's spread out to the edge. We've added mobile devices. The explosion in mobile computing over the last 10 years has co evolved. And so kubernetes succeeded at that kind of set a floor for what everybody thought was an application. And in the meantime we've added all these other parts of the application. 
>>It's funny, you know, David Anthony, we're talking about what's to minimum and networks at red hat will be on later. Back in the first two cubicles were like, you know, this is like a TCP I P moment, the Os I model that was a killer part of the stack. Now it was all standardized below TCP I. P. Company feels like a similar kind of construct where it's unifying, is creating some enablement, It's enabling some innovation and it kind of brought everyone together at the same time everyone realized that that's real, >>the whole >>cloud native is real. And now we're in an era now where people are talking about doing things that are completely different. You mentioned as a batch job house ai new software paradigm development paradigms, not to suffer during the lifecycle, but just like software development in general is impacted. >>Absolutely. And you know, the components like, you know, we spent a lot of time talking about how to test and build application, but those are things that we all kind of internalized now we we have seen the processes is critical because it's going to be in lots of places, people are looking to standardize. But sometimes the new technology comes up alongside the side, the thing we're trying to standardize, we're like, well let's just use the new technology instead function as a service is kind of uh it came up, you know, kubernetes group K Native. And then you see, you know, the proliferation of functions as a service choices, what do people use? So there's a lot of choice and we're all building on those common layers, but everybody kind of has their own opinions, everybody's doing something subtly different. >>Let me ask you your opinion on on more under the Hood kind of complexity challenge. There's general consensus in the industry that does a lot of complexity. Okay, you don't mean debate that, but that's in a way, a good thing in the sense if you solve that, that's where innovation comes in. So the goal is to solve complexity, abstract out of the heavy lifting under heavy living in Sandy Jackson. And I would say, or abstract away complexity make things easier to use >>Well and an open source and this ecosystem is an amazing um it's one of the most effective methods we've ever found for trying every possible solution and keeping the five or six most successful and that's a little bit like developers, developers flow downhill, developers are going to do, it's easy if it's easier to put a credit card in and go to the public cloud, you're gonna do it if you can take control away from the teams at your organization that are there to protect you, but maybe aren't as responsive as you like. People will, people will go around those. And so I think a little bit of what we're trying to do is what are the commonalities that we could pick out of this ecosystem that everybody agrees on and make those the downhill path that people follow, not putting a credit card into a cloud, but offering a way for you not to think about what clouds are on until you need to write, because you want to go to the fridge is a developer, you wanna go the fridge, pull out your favorite brand of soda, that favorite band Isoda might have an AWS label also >>talk about the open shift and the Kubernetes relationship, you guys push the boundaries. Um Den is being controlled playing and nodes, these are things that you talked about in your talk, talk about because you guys made some good bets on open shift, we've been covering that, how's that playing out now? 
It's a relationship now >>is interesting coming into kubernetes, we came in from the platform as a service angle, right, Platform as a service was the first iteration of trying to make the lowest cost path for developers to flow to business value um and so we added things on top of kubernetes, we knew that we were going to complex, so we built in a little bit um in our structure and our way of thinking about cube that it was never going to be just that basic bare bones package that you're gonna have to make choices for people that made sense. Ah obviously as the ecosystems grown, we've tried to grow with it, we've tried to be a layer above kubernetes, we've tried to be a layer in between kubernetes, we've tried to be a layer underneath kubernetes and all of these are valid places to be. Um I think that next step is we're all kind of asking, you know, we've got all this stuff, are there any ways that we can be more efficient? So I like to think about practical benefits, what is a practical benefit That a little bit of opinion nation could bring to this ecosystem and I think it's around applications, it's being application centric, it's what is a team, 90% of the time need to be successful, they need a way to get their code out, they need to get it to the places that they wanted to be, and that place is everywhere. It's not one cloud or on premises or a data center, it's the edge, it's running as a lambda. It's running inside devices that might be being designed in this very room today. >>It's interesting. You know, you're an architect, but also the computer science industry is the people who were trained in the area are learning. It's pretty fascinating and almost intoxicating right now in this this market because you have an operating system, dynamic systems kind of programming model with distributed cloud, edge on fire, that's only gonna get more complicated with 5G and high density data applications. Um and then you've got this changing modal mode of operations were programming with bots and Ai and machine learning to new things, but it's kind of the same distributed computing paradigm. Yeah. What's your reaction to that? >>Well, and it's it's interesting. I was kind of described like layers. We've gone from Lenox replaced proprietary UNIX or mainframe to virtualization, which, and then we had a lot of Lennox, we had some windows too. And then we moved to public cloud and private cloud. We brought config management and moved to kubernetes, um we still got that. Os at the heart of what we do. We've got, uh application libraries and we've shared services and common services. I think it's interesting like to learn from Lennox's lesson, which is we want to build an open expansive ecosystem, You're kind of like kind of like what's going on. We want to pick enough opinion nation that it just works because I think just works is what, let's be honest, like we could come up with all the great theories of what the right way computers should be done, but it's gonna be what's easy, what gets people help them get their jobs done, trying to time to take that from where people are today on cube in cloud, on multiple clouds, give them just a little bit more consolidation. And I think it's a trick people or convince people by showing them how much easier it could be. >>You know, what's interesting around um, what you guys have done a red hat is that you guys have real customers are demanding, you have enterprise customers. 
So you have your eye on the front edge of the, of the bleeding edge, making things easier. And I think that's good enough is a good angle, but let's, let's face it, people are just lifting and shifting to the cloud now. They haven't yet re factored and re factoring is a concept of taking what you're doing in the cloud of taking advantage of new services to change the operating dynamic and value proposition of say the application. So the smart money is all going there, seeing the funding come into applications that are leveraging the new platform? Re platform and then re factoring what's your take on that because you got the edge, you have other things happening. >>There are so many more types of applications today. And it's interesting because almost all of them start with real practical problems that enterprises or growing tech companies or companies that aren't tech companies but have a very strong tech component. Right? That's the biggest transformation the last 15 years is that you can be a tech company without ever calling yourself a tech company because you have a website and you have an upset and your entire business model flows like that. So there is, I think pragmatically people are, they're okay with their footprint where it is. They're looking to consolidate their very interested in taking advantage of the scale that modern cloud offers them and they're trying to figure out how to bring all the advantages that they have in these modern technologies to these new footprints and these new form factors that they're trying to fit into, whether that's an application running on the edge next to their load bouncer in a gateway, in telco five Gs happening right now. Red hat's been really heavily involved in a telco ecosystem and it's kubernetes through and through its building on those kinds of principles. What are the concepts that help make a hybrid application, an application that spans the data flowing from a device back to the cloud, out to a Gateway processed by a big data system in a private region, someplace where computers cheap can't >>be asylum? No, absolutely not has to be distributed non siloed based >>and how do we do that and keep security? How do we help you track where your data is and who's talking to whom? Um there's a lot of, there's a lot of people here today who are helping people connect. I think that next step that contact connectivity, the knowing who's talking and how they're connecting, that'll be a fundamental part of what emerges as >>that's why I think the observe ability to me is the data is really about a data funding a new data sector of the market that's going to be addressable. I think data address ability is critical. Clayton really appreciate you coming on. And giving a perspective an expert in the field. I gotta ask you, you know, I gotta say from a personal standpoint how open source has truly been a real enabler. You look at how fast new things could come in and be adopted and vetted and things get kicked around people try stuff that fails, but it's they they build on each other. Right? So a I for example, it's just a great example of look at what machine learning and AI is going on, how fast that's been adopted. Absolutely. I don't think that would be done in open source. I have to ask you guys at red hat as you continue your mission and with IBM with that partnership, how do you see people participating with you guys? You're here, you're part of the ecosystem, big player, how you guys continue to work with the community? 
Take a minute to share what you're working on. >>First off, it's impossible to get anything done in this ecosystem without being open first, and that's something Red Hat and IBM are both committed to. A lot of what I try to do is map from the very complex problems that people bring to us — because every problem in applications is complex at some layer, and you've got to have the expertise, but there's so much expertise needed. You've got to be able to blend the experts in a particular technology with the experts in a particular problem domain: the folks who consult or contract or helped design some of these architectures, or who have that experience at large companies and then move on to advise others on how to proceed. And then you have to be able to take those lessons, put them into technology, and the technology has to take that feedback back in. I would say my primary goal is to come to these sorts of events and share what everyone is facing, because if we as a group aren't all working on this at some level, those organizations won't be able to react — none of us knows the whole stack, none of us knows the whole set of details. >>And this tech is changing too. I've got to get a reference to OSI in here — it's more of an '80s metaphor, but that changed the game on proprietary, and this is like that. >>Getting that right allows us to think and to separate concerns. You want nice thin layers so that the world on top doesn't have to worry about what's below except when it needs to, and below, you can make things more efficient — in the public cloud, in open source Kubernetes, and in the proliferation of applications on top. That's happening today. >>I mean, Paul Maritz used to talk about the "hardened top" when he was the VMware CEO back in 2010 — remember him saying that? He predicted the whole thing; we called it "the mainframe in the cloud" at the time because it was a funny thing to say, but it really was a computer — essentially the distributed nature of the cloud. It happened. Absolutely. Clayton, thanks for coming on theCUBE and sharing your insights — appreciate it. It was a pleasure. Thank you. All right, that's it here on theCUBE. I'm John Furrier, here live in L.A. for KubeCon + CloudNativeCon in person. It's a hybrid event, streaming as well on theCUBE platform — check us out there for all the interviews. Three days of coverage. We'll be right back.
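The conversation above keeps returning to application-centric, Knative-style delivery — getting the same container to the edge, to a data center, or running "as a Lambda" on top of Kubernetes. As a hedged illustration of what that can look like in practice, here is a minimal sketch that creates a Knative Service with the official Kubernetes Python client. It assumes a cluster that already has Knative Serving installed and the kubernetes package available; the namespace and sample image are placeholders, and this is not presented as a Red Hat–specific workflow.

    # Sketch: deploy a container as a Knative Service (request-driven, scales to zero).
    # Assumes Knative Serving is installed and `pip install kubernetes` is available.
    from kubernetes import client, config

    config.load_kube_config()  # uses the current kubeconfig context

    knative_service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello", "namespace": "default"},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            # Placeholder sample image; any HTTP-serving container works.
                            "image": "gcr.io/knative-samples/helloworld-go",
                            "env": [{"name": "TARGET", "value": "KubeCon"}],
                        }
                    ]
                }
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=knative_service,
    )
    print("Knative Service 'hello' created; Knative scales it to zero when idle.")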

Published Date : Oct 13 2021

SUMMARY :

John Furrier hosts Clayton Coleman, chief architect for hybrid cloud at Red Hat, in person at KubeCon + CloudNativeCon. They discuss how Kubernetes has matured into a de facto standard, Red Hat's work layering opinionated capabilities above, between, and underneath Kubernetes, the shift toward application-centric delivery across clouds, data centers, the edge, and serverless footprints, lessons from Linux about building an open ecosystem that "just works," hybrid applications spanning devices, gateways, and 5G telco environments, the growing importance of observability and data addressability, and how open source accelerates the adoption of new technology such as AI and machine learning.

Parul Singh, Luke Hinds & Stephan Watt, Red Hat | Red Hat Summit 2021 Virtual Experience


 

>>mhm Yes. >>Welcome back to the Cube coverage of Red Hat summit 21 2021. I'm john for host of the Cubans virtual this year as we start preparing to come out of Covid a lot of great conversations here happening around technology. This is the emerging technology with Red hat segment. We've got three great guests steve watt manager, distinguished engineer at Red Hat hurl saying senior software engineer Red Hat and luke Hines, who's the senior software engineer as well. We got the engineering team steve, you're the the team leader, emerging tech within red hat. Always something to talk about. You guys have great tech chops that's well known in the industry and I'll see now part of IBM you've got a deep bench um what's your, how do you view emerging tech um how do you apply it? How do you prioritize, give us a quick overview of the emerging tech scene at Redhead? >>Yeah, sure. It's quite a conflated term. The way we define emerging technologies is that it's a technology that's typically 18 months plus out from commercialization and this can sometimes go six months either way. Another thing about it is it's typically not something on any of our product roadmaps within the portfolio. So in some sense, it's often a bit of a surprise that we have to react to. >>So no real agenda. And I mean you have some business unit kind of probably uh but you have to have first principles within red hat, but for this you're looking at kind of the moon shot, so to speak, the big game changing shifts. Quantum, you know, you got now supply chain from everything from new economics, new technology because that kind of getting it right. >>Yeah, I think we we definitely use a couple of different techniques to prioritize and filter what we're doing. And the first is something will pop up and it will be like, is it in our addressable market? So our addressable market is that we're a platform software company that builds enterprise software and so, you know, it's got to be sort of fit into that is a great example if somebody came up came to us with an idea for like a drone command center, which is a military application, it is an emerging technology, but it's something that we would pass on. >>Yeah, I mean I didn't make sense, but he also, what's interesting is that you guys have an open source D N A. So it's you have also a huge commercial impact and again, open sources of one of the 4th, 5th generation of awesomeness. So, you know, the good news is open source is well proven. But as you start getting into this more disruption, you've got the confluence of, you know, core cloud, cloud Native, industrial and IOT edge and data. All this is interesting, right. This is where the action is. How do you guys bring that open source community participation? You got more stakeholders emerging there before the break down, how that you guys manage all that complexity? >>Yeah, sure. So I think that the way I would start is that, you know, we like to act on good ideas, but I don't think good ideas come from any one place. And so we typically organize our teams around sort of horizontal technology sectors. So you've got, you know, luke who's heading up security, but I have an edge team, cloud networking team, a cloud storage team. Cloud application platforms team. So we've got these sort of different areas that we sort of attack work and opportunities, but you know, the good ideas can come from a variety of different places. So we try and leverage co creation with our customers and our partners. 
So as a good example of something we had to react to a few years ago, it was K Native right? So the sort of a new way of doing service um and eventing on top of kubernetes that was originated from google. Whereas if you look at Quantum right, ibms, the actual driver on quantum science and uh that originated from IBM were parole. We'll talk about exactly how we chose to respond to that. Some things are originated organically within the team. So uh luke talking about six law is a great example of that, but we do have a we sort of use the addressable market as a way to sort of focus what we're doing and then we try and land it within our different emerging technologies teams to go tackle it. Now. You asked about open source communities, which are quite interesting. Um so typically when you look at an open source project, it's it's there to tackle a particular problem or opportunity. Sometimes what you actually need commercial vendors to do is when there's a problem or opportunity that's not tackled by anyone open source project, we have to put them together to create a solution to go tackle that thing. That's also what we do. And so we sort of create this bridge between red hat and our customers and multiple different open source projects. And this is something we have to do because sometimes just that one open source project doesn't really care that much about that particular problem. They're motivated elsewhere. And so we sort of create that bridge. >>We got two great uh cohorts here and colleagues parole on the on the Quantum side and you got luke on the security side. Pro I'll start with you. Quantum is also a huge mentioned IBM great leadership there. Um Quantum on open shift. I mean come on. Just that's not coming together for me in my mind, it's not the first thing I think of. But it really that sounds compelling. Take us through, you know, um how this changes the computing landscape because heterogeneous systems is what we want and that's the world we live in. But now with distributed systems and all kinds of new computing modules out there, how does this makes sense? Take us through this? >>Um yeah john's but before I think I want to explain something which is called Quantum supremacy because it plays very important role in the road map that's been working on. So uh content computers, they are evolving and they have been around. But right now you see that they are going to be the next thing. And we define quantum supremacy as let's say you have any program that you run or any problems that you solve on a classical computer. Quantum computer would be giving you the results faster. So that is uh, that is how we define content supremacy when the same workload are doing better on content computer than they do in a classical computer. So the whole the whole drive is all the applications are all the companies, they're trying to find avenues where Quantum supremacy are going to change how they solve problems or how they run their applications. And even though quantum computers they are there. But uh, it is not as easily accessible for everyone to consume because it's it's a very new area that's being formed. So what, what we were thinking, how we can provide a mechanism that you can you don't connect this deal was you have a classical world, you have a country world and that's where a lot of thought process been. And we said okay, so with open shift we have the best of the classical components. 
You can take OpenShift, you can develop and deploy your application on a containerized platform. What if you provide a mechanism so that the workloads running on OpenShift are also consuming quantum resources — they can run their computation on quantum computers, take the results, and integrate them into their normal classical workloads? That was the whole inception of this, and that's what brought us here. So we took an operator-based approach, and what we are trying to do is establish best practices so you can have these heterogeneous applications with classical components talking to, interacting with, and exchanging results and data with the quantum components. >>So I've got to ask: with the rise of containers and Kubernetes at the center of the cloud-native value proposition, what workloads do you see benefiting from quantum systems the most? Do you have any visibility into those workloads? >>So again, it's very new — it's really early — and we talk with our customers, and every customer is trying to identify first where quantum supremacy will play a role for them. What we are trying to do is make sure that when they reach that point, we have a solution, so they can use the existing infrastructure they have on OpenShift and use it to consume quantum computers that may or may not be inside their own cloud. >>Well, I want to come back and ask you about the impact on the landscape, but I want to get to Luke real quick, because quantum could potentially break security, as some people have been saying — and you guys are also looking at a bunch of projects around the supply chain, which is a huge issue across the landscape, whether it's components on a machine in space or handling data in a corporate database. You guys have Sigstore. What's this about? >>Sure. A good way to frame Sigstore is to think of Let's Encrypt: what Let's Encrypt did for website encryption is what we plan to do for software signing and transparency. Sigstore itself is an umbrella project that contains various open source projects developed by the Sigstore community. Sigstore will be brought forth as a public-good, nonprofit service — again, very much based on the successful model of Let's Encrypt. Sigstore will enable developers to sign software artifacts: bills of materials, containers, binaries, all of the different artifacts that are part of the software supply chain. These can be signed with Sigstore, and the signing events are recorded in a technology we call a transparency log, which means that anybody can monitor signing events. The transparency log has the nature of being read-only and immutable — very similar to a blockchain — and it allows you to have cryptographic proof and auditing of the software supply chain. We've made Sigstore easy to adopt, because traditional cryptographic signing tools are a challenge for a lot of developers to implement in their open source projects: they have to think about how to store the private keys, whether they need specialist hardware, and, if they were to lose a key, the blast radius of cleaning up afterwards — key compromise can be incredibly difficult. So Sigstore's role and purpose, essentially, is to make signing easy — easy for projects to adopt.
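The flow Luke describes — hash an artifact, sign the digest with a short-lived key, and record the signing event in an append-only, tamper-evident log — can be sketched in a few lines of Python with the cryptography library. This is a conceptual illustration only, not the actual Sigstore or cosign implementation (the real service adds OIDC identities, the Fulcio certificate authority, and the Rekor transparency log); the file name and the in-memory log below are assumptions made for the example.

    # Conceptual sketch of signing plus a toy append-only log; not the real Sigstore code.
    # Requires: pip install cryptography
    import hashlib
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def sign_artifact(path: str) -> dict:
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        # Short-lived key generated only for this signing event, so there is no
        # long-term private key for a developer to store, protect, or lose.
        key = ec.generate_private_key(ec.SECP256R1())
        signature = key.sign(digest.encode(), ec.ECDSA(hashes.SHA256()))
        public_pem = key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo).decode()
        return {"artifact_sha256": digest,
                "signature": signature.hex(),
                "public_key_pem": public_pem}

    class ToyTransparencyLog:
        # Each entry commits to the hash of the previous entry, so rewriting history
        # changes every later hash -- the "read-only and immutable" property.
        def __init__(self):
            self.entries = []

        def append(self, record: dict) -> str:
            prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
            entry_hash = hashlib.sha256(
                (prev + repr(sorted(record.items()))).encode()).hexdigest()
            self.entries.append({"record": record, "prev": prev, "entry_hash": entry_hash})
            return entry_hash

    log = ToyTransparencyLog()
    receipt = log.append(sign_artifact("my-release.tar.gz"))  # placeholder artifact name
    print("log entry:", receipt)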
And then they have the protections around there being a public transparency law that could be monitored. >>See this is all about open. Being more open. Makes it more secure. Is the >>thief? Very much yes. Yes. It's that security principle of the more eyes on the code the better. >>So let me just back up, is this an open, you said it's gonna be a nonprofit? >>That's correct. Yes. Yes. So >>all of the code is developed by the community. It's all open source. anybody can look at this code. And then we plan alongside the Linux Foundation to launch a public good service. So this will make it available for anybody to use if your nonprofit free to use service. >>So luke maybe steve if you can way into on this. I mean, this goes back. If you look back at some of the early cloud days, people were really trashing cloud as there's no security. And cloud turns out it's a more security now with cloud uh, given the complexity and scale of it, does that apply the same here? Because I feel this is a similar kind of concept where it's open, but yet the more open it is, the more secure it is. And then and then might have to be a better fit for saying I. T. Security solution because right now everyone is scrambling on the I. T. Side. Um whether it's zero Trust or Endpoint Protection, everyone's kind of trying everything in sight. This is kind of changing the paradigm a little bit on software security. Could you comment on how you see this playing out in traditional enterprises? Because if this plays out like the cloud, open winds, >>so luke, why don't you take that? And then I'll follow up with another lens on it which is the operate first piece. >>Sure. Yes. So I think in a lot of ways this has to be open this technology because this way we have we have transparency. The code can be audited openly. Okay. Our operational procedures can be audit openly and the community can help to develop not only are code but our operational mechanisms so we look to use technology such as cuba netease, open ship operators and so forth. Uh Six store itself runs completely in a cloud. It is it is cloud native. Okay, so it's very much in the paradigm of cloud and yeah, essentially security, always it operates better when it's open, you know, I found that from looking at all aspects of security over the years that I've worked in this realm. >>Okay, so just just to add to that some some other context around Six Law, that's interesting, which is, you know, software secure supply chain, Sixth floor is a solution to help build more secure software secure supply chains, more secure software supply chain. And um so um there's there's a growing community around that and there's an ecosystem of sort of cloud native kubernetes centric approaches for building more secure software. I think we all caught the solar winds attack. It's sort of enterprise software industry is responding sort of as a whole to go and close out as many of those gaps as possible, reduce the attack surface. So that's one aspect about why 6th was so interesting. Another thing is how we're going about it. So we talked about um you mentioned some of the things that people like about open source, which is one is transparency, so sunlight is the best disinfectant, right? Everybody can see the code, we can kind of make it more secure. Um and then the other is agency where basically if you're waiting on a vendor to go do something, um if it's proprietary software, you you really don't have much agency to get that vendor to go do that thing. Where is the open source? 
If you don't, if you're tired of waiting around, you can just submit the patch. So, um what we've seen with package software is with open source, we've had all this transparency and agency, but we've lost it with software as a service, right? Where vendors or cloud service providers are taking package software and then they're making it available as a service but that operationalize ng that software that is proprietary and it doesn't get contributed back. And so what Lukes building here as long along with our partners down, Lawrence from google, very active contributor in it. Um, the, is the operational piece to actually run sixth or as a public service is part of the open source project so people can then go and take sixth or maybe run it as a smaller internal service. Maybe they discover a bug, they can fix that bug contributed back to the operational izing piece as well as the traditional package software to basically make it a much more robust and open service. So you bring that transparency and the agency back to the SAS model as well. >>Look if you don't mind before, before uh and this segment proportion of it. The importance of immune ability is huge in the world of data. Can you share more on that? Because you're seeing that as a key part of the Blockchain for instance, having this ability to have immune ability. Because you know, people worry about, you know, how things progress in this distributed world. You know, whether from a hacking standpoint or tracking changes, Mutability becomes super important and how it's going to be preserved in this uh new six doorway. >>Oh yeah, so um mutability essentially means cannot be changed. So the structure of something is set. If it is anyway tampered or changed, then it breaks the cryptographic structure that we have of our public transparency service. So this way anybody can effectively recreate the cryptographic structure that we have of this public transparency service. So this mutability provides trust that there is non repudiation of the data that you're getting. This data is data that you can trust because it's built upon a cryptographic foundation. So it has very much similar parallels to Blockchain. You can trust Blockchain because of the immutable nature of it. And there is some consensus as well. Anybody can effectively download the Blockchain and run it themselves and compute that the integrity of that system can be trusted because of this immutable nature. So that's why we made this an inherent part of Six door is so that anybody can publicly audit these events and data sets to establish that there tamper free. >>That is a huge point. I think one of the things beyond just the security aspect of being hacked and protecting assets um trust is a huge part of our society now, not just on data but everything, anything that's reputable, whether it's videos like this being deep faked or you know, or news or any information, all this ties to security again, fundamentally and amazing concepts. Um I really want to keep an eye on this great work. Um Pearl, I gotta get back to you on Quantum because again, you can't, I mean people love Quantum. It's just it feels like so sci fi and it's like almost right here, right, so close and it's happening. Um And then people get always, what does that mean for security? We go back to look and ask them well quantum, you know, crypto But before we get started I wanted, I'm curious about how that's gonna play out from the project because is it going to be more part of like a C. N. C. F. 
How do you bring the open source vibe to Quantum? >>That's a very good question, because that was the plan: the whole body of work we're doing related to operators to enable quantum is managed by the open source community, and that project lives in Qiskit. Qiskit has its own open source community, and all the modifications go there. By the way, I should first tell you what Qiskit is: Qiskit is the SDK you use to develop circuits that run on IBM or Honeywell backends. There are certain quantum computer backends that support circuits created using Qiskit, which is open source as well. So there is already a community around this — the Qiskit open source community — and we have pushed the code there, and all the maintenance is taken care of by that community. To answer your question about whether we are going to integrate it with the CNCF: that is not in the picture right now. It has a place in its own community, and it is also very niche to people working on quantum. Right now the contributors are from IBM as well as other communities that are specifically working on quantum, so I don't think we have a roadmap to integrate with the CNCF yet — but open source is the way to go, and we are on that trajectory. >>You know, we joke here on theCUBE that a qubit is coming around the corner — we can't help it, though we spell ours a little differently. But Luke, while you're here — you're the security guru — I wanted to ask you about quantum, because a lot of people are scared that quantum is going to crack all the keys on encryption with its power and mean more hacking. Can you comment on that? What's your reaction? >>Yes, that's an incredibly good question. This will occur, and I think it's really about preparation more than anything now. There's a principle we have within the security world when it comes to coding and designing software and this aspect of future cryptography being broken — as we've seen with the likes of MD5 and SHA-1 and so forth. We call it algorithm agility. It means that when you write your code and design your systems, you make them conducive to easily swapping and pivoting the algorithms they use. You don't become too fixed to the encryption algorithms in your code, so that if, as computing gets more powerful, the current set of algorithms is shown to have inherent security weaknesses, you can easily migrate and pivot to stronger algorithms. So it's imperative that when you build code, you practice this principle of algorithm agility, so that when SHA-256 or SHA-512 becomes the next SHA-1, you can swap out your systems and change the code in the least disruptive way to address that flaw within your software projects. >>You know, Luke, that's a mind-bender right there, because when you think about algorithmic agility, you start thinking about software countermeasures and automation — these kinds of new trends where you need the signature capability you mentioned with this project. So the question of who signs off on these comes back down to the paradigm you guys are talking about here. >>Yes, very much so.
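Luke's "algorithm agility" principle is easy to show in code: keep the algorithm a configuration choice rather than hard-coding it, so a future migration — say, away from SHA-256 — touches one setting instead of every call site. A minimal Python sketch, with the registry contents chosen purely for illustration:

    # Minimal sketch of algorithm agility: callers never name a hash function directly.
    import hashlib

    HASH_REGISTRY = {
        "sha256": hashlib.sha256,
        "sha512": hashlib.sha512,
        "sha3_256": hashlib.sha3_256,
    }

    # Imagine this value coming from configuration; flipping it migrates the codebase.
    CURRENT_HASH = "sha256"

    def digest(data: bytes, algorithm: str = None) -> str:
        name = algorithm or CURRENT_HASH
        hasher = HASH_REGISTRY[name]()
        hasher.update(data)
        # Store the algorithm name next to the digest so old values stay verifiable
        # even after the default changes.
        return f"{name}:{hasher.hexdigest()}"

    def verify(data: bytes, stored: str) -> bool:
        name, _, value = stored.partition(":")
        hasher = HASH_REGISTRY[name]()
        hasher.update(data)
        return hasher.hexdigest() == value

    stamp = digest(b"release-1.0 artifact bytes")
    assert verify(b"release-1.0 artifact bytes", stamp)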
There's another analogy from the security world — they call it "turtles all the way down" — which is that, effectively, you always get to the point where a human or a computer has to establish that first point of trust to sign something off. It's a world that is ever-increasing in complexity, so the best you can do is be prepared, and be as open as you can, to make that pivot as and when you need to. >>Pretty impressive — great insight. Steve, we could talk for hours on this panel, emerging tech within Red Hat. Just give us a quick summary of what's going on. Obviously you've got a serious brain trust over there with real-world impact: the future of trust, the future of software, the future of computing, all happening in real time right now. This is not so much R&D as it is the front range of tech. Give us a quick overview. >>Yeah, sure. The first thing I would tell everyone is to go check out next.redhat.com — that's got all of our different projects and who to contact if you're interested in learning more about the areas we're working on. Just as an overview: we're working on software-defined storage and cloud storage — Sage Weil, the creator of Ceph, leads that group. We've got a team focused on edge computing; they're doing some really cool projects around very lightweight operating systems and Kubernetes — OpenShift-based deployments that can run on devices you screw into the sheetrock, which is really interesting. We have a cloud networking team looking at OVN and the intersection of eBPF, networking, and Kubernetes. And then we've got an application platforms team looking at Quantum, but also at how to advance Kubernetes itself — that's the team the persistent volume framework in Kubernetes came from, which added block storage and object storage to Kubernetes. So there's a lot of really exciting work going on. Our charter is to inform Red Hat's long-term technology strategy. My personal philosophy about how we do that: Red Hat product engineering focuses on the product roadmap, which is by nature six to nine months out, and the longer-term strategy is set by both of us — it's just that they're not focused on it and we are. We spend a lot of time doing disambiguation of the future, and that's kind of what we do. We love doing it — I get to work with all these really super smart people. It's a fun job. >>Well, great insights. It's super exciting — emerging tech within Red Hat and across the industry. You guys are agile, you're open source, and now more than ever the productization of open source is happening at an accelerated rate. Steve, thanks for coming on. Parul, thanks for coming on. Luke, great insight all around — thanks for sharing. >>Our pleasure. >>Thank you. >>Okay, more Red Hat coverage after this video. Obviously, emerging tech is huge. Watch some of the game-changing action here at Red Hat Summit. I'm John Furrier. Thanks for watching.
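Parul's description of hybrid workloads — a classical application on OpenShift that builds a circuit, sends it to a quantum backend, and folds the results back into its normal processing — can be illustrated with a small Qiskit snippet. This is a generic sketch run against a local simulator, not the actual operator integration she describes; it assumes the qiskit and qiskit-aer packages are installed, and exact import paths can vary between Qiskit versions.

    # Sketch: the "quantum step" a classical workload might call out to.
    # Assumes: pip install qiskit qiskit-aer (APIs shift between versions).
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    # Two-qubit Bell circuit: the kind of small kernel a classical app might submit.
    circuit = QuantumCircuit(2, 2)
    circuit.h(0)
    circuit.cx(0, 1)
    circuit.measure([0, 1], [0, 1])

    backend = AerSimulator()  # a real IBM or Honeywell backend would replace this
    job = backend.run(transpile(circuit, backend), shots=1024)
    counts = job.result().get_counts()

    # The classical side of the workload consumes the results like any other data.
    print("measurement counts:", counts)  # expect roughly half '00' and half '11'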

Published Date : Apr 28 2021

SUMMARY :

John Furrier hosts Red Hat's emerging technologies team — Stephan Watt, Parul Singh, and Luke Hinds — for Red Hat Summit 2021. They discuss how Red Hat defines emerging technology (typically 18 months or more from commercialization and within its addressable platform market), how ideas come from customers, partners, and open source communities, an operator-based approach for running hybrid classical/quantum workloads on OpenShift built around Qiskit, the Sigstore project's "Let's Encrypt for software signing" model with an immutable public transparency log launched alongside the Linux Foundation, algorithm agility as preparation for quantum threats to cryptography, and the broader portfolio of emerging-tech projects listed at next.redhat.com.

Clayton Coleman, Red Hat | Google Cloud Next OnAir '20


 

>>From around the globe covering Google cloud next. >>Hi, I'm Stu middleman and this is the cube coverage of Google cloud. Next, happy to welcome back to the program. One of our cube alumni, Clayton Coleman, he's the architect for Kubernetes and OpenShift with red hat Clayton. Thanks for joining us again. Great to see you. Good to see you. All right. So of course, one of the challenges in 2020 is we love to be able to get unity together. And while we can't do it physically, we do get to do it through all of the virtual events and online forum. Of course, you know, we had the cubit red hat summit cube con, uh, for the European show and now Google cloud. So, you know, give us kind of your, your state of the state 2020 Kubernetes. Of course it was Google, uh, taking the technology from Borg, a few people working on it, and, you know, just that this project that has just had massive impact on it. So, you know, where are with the community in Kubernetes today? >>So, uh, you know, 2020 has been a crazy year for a lot of folks. Um, a lot of what I've been spending my time on is, um, you know, taking feedback from people who, you know, in this time of, you know, change and concern and worry and huge shift to the cloud, um, working with them to make sure that we have a really good, um, you know, foundation in Kubernetes and that the ecosystem is healthy and the things are moving forward there. So there's a ton of exciting projects. I will say, you know, the, the pandemics had a, an impact on, um, you know, the community. And so in many places we've reacted by slowing down our schedules or focusing more on the things that people are really worried about, like quality and bugs and making sure that the stuff just works. Uh, I will say this year has been a really interesting one and open source. >>There's been much more focus, I think, on how we start to tie this stuff together. Um, and new use cases and new challenges coming into, um, what maybe, you know, the original Kubernetes was very focused on helping you bring stuff together, bring your applications together and giving you common abstractions for working with them. Um, we went through a phase where we made it easy to extend Kubernetes, which brought a whole bunch of new abstractions. And, and I think now we're starting to see the challenges and the needs of organizations and companies and individuals that are getting out of, um, not just in Kubernetes, but across multiple locations across placement edge has been huge in the last few years. And so the projects in and around Kubernetes are kind of reacting to that. They're starting to, um, bridge, um, many of these, um, you know, disparate locations, different clouds, multicloud hybrid cloud, um, connecting enterprises to data centers are connecting data centers to the cloud, helping workloads be a little bit more portable in of themselves, but helping workloads move. >>And then I think, you know, we're, we're really starting to ask those next big questions about what comes, what comes next for making applications really come alive in the cloud, um, where you're not as focused on the hardware. You're not focused on the details, which are focused on abstractions, like, um, you know, reliability and availability, not just in one cluster, but in multiple. So that's been a really exciting, uh, transition in many of the projects that I've been following. 
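Clayton's point about reliability and availability "not just in one cluster, but in multiple" is something teams often script by hand today while the higher-level abstractions he describes are still emerging. A hedged sketch with the official Kubernetes Python client follows — the context names are placeholders and the "health check" is deliberately simplistic.

    # Sketch: ask several clusters the same question, treating them as one fleet.
    # Assumes a kubeconfig containing the named contexts; pip install kubernetes.
    from kubernetes import client, config

    CONTEXTS = ["prod-us-east", "prod-eu-west", "edge-site-42"]  # placeholder names

    def ready_nodes(context: str) -> int:
        api = client.CoreV1Api(
            api_client=config.new_client_from_config(context=context)
        )
        count = 0
        for node in api.list_node().items:
            for cond in node.status.conditions or []:
                if cond.type == "Ready" and cond.status == "True":
                    count += 1
        return count

    for ctx in CONTEXTS:
        print(f"{ctx}: {ready_nodes(ctx)} ready node(s)")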
You know, certainly projects like Istio I've been dealing with, um, spanning clusters and connecting existing workloads in and, uh, you know, each step along the way, I see people sort of broaden their scope about what they want, uh, open source to help themselves. >>Yeah, I it's, it's, it's been fascinating to watch just the, the breadth of the projects that can tie in and leverage Kubernetes. Uh, you brought up edge computing and want to get into some of the future pieces, but before we do, you know, let's look at Kubernetes itself. Uh, one dot 19 is kind of where we are at. Uh, um, I already see some, some red stalking about one dot 20. Can you just talk about the, the, the base project itself contributions to it, how the upstream, uh, works and you know, how, how should customers think about, you know, their Kubernetes environment, obviously, you know, red hat with open shifts had a very strong position. You've got thousands of customers now using it, all of the cloud providers have their, uh, Kubernetes flavor, but also you partner with them. So walk us through a little bit about, you know, the open source, the project and those dynamics. >>The project is really healthy. I think we've got through a couple of big transitions over the last few years. We've moved from the original, um, you know, I was on the bootstrap steering committee trying to help the governance model. The full bootstrap committee committee has handed off responsibility to, um, new participants. There's been a lot of growth in the project governance and community governance. Um, I think there's huge credit to the folks on the steering committee today. Folks, part of contributor experience and standardizing and formalizing Kubernetes as its own thing. I think we've really moved into being a community managed project. Um, we've developed a lot of maturity around that and Kubernetes and the folks involved in helping Kubernetes be successful, have actually been able to help others within the CNCF ecosystem and other open source projects outside of CNCF be successful. So that angle is going phenomenally well. >>Uh, contribution is up. I think one of the tension points that we've talked about is, um, Kubernetes is maturing one 19, spent a lot of time on stability. And while there's definitely lots of interesting new things in a few areas like storage, and we have fee to an ingress fee too, coming up on the horizon dual stack, support's been hotly anticipated by a lot of on premise folks looking to make the transition to IPV six. I think we've been a little bit less focused on chasing features and more focused on just making sure that Kubernetes is maturing responsibly. Now that we have a really successful ecosystem of integrators and vendors and, um, you know, unification, the conformance efforts in Kubernetes. Um, there've been some great work. I happened to be involved in the, um, in the architecture conformance definition group, and there's been some amazing participation from, um, uh, from that group of people who've made real strides in growing the testing efforts so that, you know, not only can you look at, um, two different Kubernetes vendors, but you can compare them in meaningful ways. 
>>That's actually helped us with our test coverage and Kubernetes, there's been a lot of focus on, um, really spending time on making sure that upgrades work well, that we've reduced the flakiness of our test suites and that when a contributor comes into Kubernetes, they're not presented with a confusing, massive instructions, but they have a really clear path to make their first contribution and their next contribution. And then the one after that. So from a project maturity standpoint, I think 2020 has been a great great year for the project. And I want to see that continue. >>Yeah. One of the things we talked quite a bit about, uh, at both red hat summit, as well as, uh, the CubeCon cloud native con Europe, uh, was operators. And, you know, maybe I believe there was some updates also about how operators can work with Google cloud. So can you give us that update? >>Sure. There's been a lot of, um, there's been a lot of growth in both the client tooling and the libraries and the frameworks that make it easy to integrate with Kubernetes. Um, and those integrations are about patterns that, um, make operations teams more productive, but it takes time to develop the domain expertise in, uh, operationalizing large groups of software. So over the last year, um, know the controller runtime project, uh, which is an outgrowth of the Kubernetes Siggy lb machinery. So it's kind of a, an outshoot that's intended to standardize and make it easier to write integrations to Kubernetes that next step of, um, you know, going then pass that red hat's worked, uh, with, um, others in the community around, um, the operator SDK, uh, which unifying that project and trying to get it aligned with others in the ecosystem. Um, almost all of the cloud providers, um, have written operators. >>Google has been an early adopter of the controller and operator pattern, uh, and have continued to put time and effort into helping make the community be successful. And, um, we're really appreciative of everyone who's come together to take some of those ideas from Kubernetes to extend them into, um, whether it's running databases and service on top of Kubernetes or whether it's integrating directly with cloud. Um, most of that work or almost all of that work benefits everybody in the ecosystem. Um, I think there's some future work that we'd like to see around, um, you know, uh, folks, uh, from, um, a number of places have gone even further and tried to boil Kubernetes down into simpler mechanisms, um, that you can integrate with. So a little bit more of a, a beginner's approach or a simplification, a domain specific, uh, operator kind of idea that, um, actually really does accelerate people getting up to speed with, um, you know, building these sorts of integrations, but at the end of the day, um, one of the things that I really see is the increasing integration between the public clouds and their Kubernetes on top of those clouds through capabilities that make everybody better off. >>So whether you're using a managed service, um, you know, on a particular cloud or whether you're running, um, the elements of that managed open source software using an open source operator on top of Kubernetes, um, there's a lot of abstractions that are really productive for admins. You might use the managed service for your production instances, but you want to use, um, throw away, um, database instances for developers. Um, and there's a lot of experimentation going on. 
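The "throwaway database instances for developers" idea usually surfaces as a custom resource that a database operator watches and acts on. As a sketch — the group, kind, and fields below are hypothetical stand-ins rather than any specific operator's actual API — requesting one from Python might look like this:

    # Sketch: request a disposable dev database by creating a (hypothetical) custom
    # resource that some database operator reconciles. Group/kind/fields are illustrative.
    from kubernetes import client, config

    config.load_kube_config()

    dev_db = {
        "apiVersion": "databases.example.com/v1alpha1",  # hypothetical CRD group/version
        "kind": "Database",
        "metadata": {"name": "feature-branch-db", "namespace": "dev"},
        "spec": {
            "engine": "postgresql",
            "storageGB": 1,
            "ttlHours": 24,  # an operator could garbage-collect it after a day
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="databases.example.com",
        version="v1alpha1",
        namespace="dev",
        plural="databases",
        body=dev_db,
    )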
So it's almost, it's almost really difficult to say what the most interesting part is. Um, operators is really more of an enabling technology. I'm really excited to see that increasing glue that makes automation and makes, um, you know, dev ops teams, um, more productive just because they can rely increasingly on open source or managed services offerings from, you know, the large cloud providers to work well together. >>Yeah. You had mentioned that we're seeing all the other projects that are tying into Coobernetti's, we're seeing Kubernetes going into broader use cases, things like edge computing, what, from an architectural standpoint, you know, needs to be done to make sure that, uh, Kubernetes can be used, you know, meets the performance, the simplicity, um, in these various use cases. >>That's a, that's a good question. There's a lot of complexity in some areas of what you might do in a large application deployment that don't make sense in edge deployments, but you get advantages from having a reasonably consistent environment. I think one of the challenges everybody is going through is what is that reasonable consistency? What are the tools? You know, one of the challenges obviously is as we have more and more clusters, a lot of the approaches around edge involve, you know, whether it's a single cluster on a single machine and, um, you know, in a fairly beefy, but, uh, remote, uh, computer, uh, that you still need to keep in sync with your application deployment. Um, you might have a different life cycle for, uh, the types of hardware that you're rolling out, you know, whether it's regional or whether it's tied to, whether someone can go out to that particular site that you've been update the software. Sometimes it's connected, sometimes it isn't. So I think a need that is becoming really clear is there's a lot of abstractions missing above Coopernetties. Uh, and everyone's approaching this differently. We've got a get ops and centralized config management. Um, we have, uh, architectures where, you know, you, you boot up and you go check some remote cloud location for what you should be running. Um, I think there's some, some productive obstructions that are >>That, or haven't been, um, >>It haven't been explored sufficiently yet that over the next couple of years, how do you treat a whole bunch of clusters as a pool of compute where you're not really focused on the details of where a cluster is, or how can you define applications that can easily move from your data center out to the edge or back up to the cloud, but get those benefits of Kubernetes, all those places. And >>That >>This is for so early, that what I see in open source and what I see with people deploying this is everyone is approaching this subtly differently, but you can start to see some of those patterns emerge where, um, you need reproducible bundles of applications, things that help can do REL, or you can do with just very simply with Kubernetes. Um, not every edge location needs, um, uh, an ingress controller or a way to move traffic onto that cluster because their job is to generate traffic and send it somewhere else. But then that puts more pressure on, well, you need those where you're feeding that data to your API APIs, whether that's a cloud or something within your something within a private data center, you need, um, enough of commonalities across those clusters and across your applications that you could reason about what's going on. So >>There's a huge amount >>Out of a space here. 
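The edge pattern Clayton mentions — "you boot up and you go check some remote cloud location for what you should be running" — is, at heart, a small pull-based reconcile loop. A toy illustration follows; the config URL, deployment name, and polling interval are assumptions, and a real deployment would use a GitOps tool such as Argo CD or Flux rather than hand-rolled code.

    # Toy pull-based reconcile loop: fetch the desired image tag, patch the Deployment on drift.
    # Assumes: pip install kubernetes requests; the names and URL below are placeholders.
    import time
    import requests
    from kubernetes import client, config

    CONFIG_URL = "https://config.example.com/edge-site-42/desired.json"  # placeholder
    NAMESPACE, DEPLOYMENT = "default", "sensor-gateway"                  # placeholders

    config.load_kube_config()
    apps = client.AppsV1Api()

    while True:
        desired_image = requests.get(CONFIG_URL, timeout=10).json()["image"]

        dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
        current_image = dep.spec.template.spec.containers[0].image

        if current_image != desired_image:
            patch = {"spec": {"template": {"spec": {"containers": [
                {"name": dep.spec.template.spec.containers[0].name,
                 "image": desired_image}]}}}}
            apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
            print(f"updated {DEPLOYMENT}: {current_image} -> {desired_image}")

        time.sleep(60)  # edge sites may only be intermittently connected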
And I don't think it's just going to be Kubernetes. In fact, I, I want to say, I think we're starting to move to that phase where Kubernetes is just part of the platform that people are building or need to build. And what can we do to build those tools that help you stitch together computer across a lot of footprints, um, parts of applications across a lot of footprints. And there's, there's a bunch of open source projects that are trying to drive to that today. Um, projects like I guess the O and K natives, um, with the work being done with the venting in K native, and obviously the venting is a hugely, um, you know, we talk about edge, we'd almost be remiss, not talk about moving data. And you talk about moving data. Well, you want streams of data and you want to be reacted to data with compute and K native and Istio are both great examples of technologies within the QB ecosystem that are starting to broaden, um, you know, outside of the, well, this is just about one cube cluster to, um, we really need to stitch together a mindset of development, even if we have a reasonably consistent Kubernetes across all those footprints. >>Yeah. Well, Clayton so important. There's so many technologies out there it's becoming about that technology. And it's just a given, it's an underlying piece of it. You know, we don't talk about the internet. We don't talk, you know, as much about Linux anymore. Cause it's just in the fabric of everything we do. And it sounds like we're saying that's where we're getting with Kubernetes. Uh, I'd love to pull on that thread. You mentioned that you're hearing some patterns starting to emerge out there. So when you're talking to enterprises, especially if you're talking 2020, uh, lots of companies, all of a sudden have to really accelerate, uh, you know, those transformational projects that they were doing so that they can move faster and keep up with the pace of change. Uh, so, you know, what should enterprise be, be working on? What feedback are you hearing from customers, but what are some of those themes that you can share and w what, what should everybody else be getting ready for that? >>The most common pattern I think, is that many people still find a need to build, uh, platforms or, um, standardization of how they do application development across fairly large footprints. Um, I think what they're missing, and this is what everyone's kind of building on their own today, that, um, is a real opportunity within the community is, uh, abstract abstractions around a location, not really about clusters or machines, but something broader than that, whether it's, um, folks who need to be resilient across clouds, and whether it's folks who are looking to bring together disparate footprints to accelerate their boot to the cloud, or to modernize their on premise stack. They're looking for abstractions that are, um, productive to say, I don't really want to worry too much about the details of clusters or machines or applications, but I'm talking about services and where they run and that I need to stitch those into. >>Um, I need to stitch those deeply into some environments, but not others. So that pattern, um, has been something that we've been exploring for a long time within the community. So the open service broker project, um, you know, has been a long running effort of trying to genericize one type of interface operators and some of the obstructions and Kubernetes for extending Kubernetes and new dimensions is another. 
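The "extending Kubernetes in new dimensions" Clayton refers to usually begins by registering a CustomResourceDefinition so the API server can store a new type for a controller to reconcile. A compressed sketch follows — the group, kind, and schema are invented for illustration and do not correspond to any real operator's API.

    # Sketch: register a (hypothetical) CRD that a controller/operator would then reconcile.
    from kubernetes import client, config

    config.load_kube_config()

    crd_manifest = {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": "placements.apps.example.com"},
        "spec": {
            "group": "apps.example.com",  # illustrative group
            "scope": "Namespaced",
            "names": {"plural": "placements", "singular": "placement", "kind": "Placement"},
            "versions": [{
                "name": "v1alpha1", "served": True, "storage": True,
                "schema": {"openAPIV3Schema": {
                    "type": "object",
                    "properties": {"spec": {
                        "type": "object",
                        "properties": {"clusters": {
                            "type": "array", "items": {"type": "string"}}}}}}},
            }],
        },
    }

    client.ApiextensionsV1Api().create_custom_resource_definition(body=crd_manifest)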
What I'm seeing is that people are building layers on top through continuous deployment, continuous integration, building their own API is building their own services that really hide these details. I think there's a really rich opportunity within open to observe what's going on and to offer some supporting technologies that bridge clouds, bridge locations, what you deal with computed a little bit more of an abstract level, um, and really doubled down on making services run. Well, I think we're kind of ready to make the transition to say officially, it's not just about applications, which is what we've been saying for a long time. >>You know, I've got these applications and I'm moving them, but to flip it around and say, we want to be service focused and services, have a couple of characteristics, the details of where they run are more about the guarantees that you're providing for your customers. Um, we lack a lot of open source tools that make it easier to build and run services, not just to consume as dependencies or run open source software, but what are the things that make our applications more resilient in and of themselves? I think Kubernetes was a good start. Um, I really see organizations struggling with that today. You're going to have multiple locations. You're going to have, um, the need to dramatically move workloads. What are the tools that the whole ecosystem, the open source ecosystem, um, can collaborate on and help accelerate that transition? >>Well, Clayton, you teed up on my last thing. I want to ask you, you know, we're, we're here at the Google cloud show and when you talk about ecosystem, you talk about community, you know, Google and red hat, both very active participants in this community. So, you know, you, you peer you collaborate with a lot of people from Google I'm sure. So give our audience a little bit of insight as to, you know, Google's participation. What, what you've been seeing from them the last couple of years at Google has been a great partner, >>Crazy ecosystem for red hat. Um, we worked really closely with them on Istio and K native and a number of other projects. Um, I, you know, as always, um, I'm continually impressed by the ability of the folks that I've worked with from Google to really take a community focus and to concentrate on actually solving use cases. I think the, you know, there's always the desire to create drama around technology or strategy or business and open source. You know, we're all coming together to work on common goals. I really want to, um, you know, thank the folks that I've worked with at Google over the years. Who've been key participants. They've believed very strongly in enabling users. Um, you know, regardless of, um, you know, business or technology, it's about making sure that we're improving software for everyone. And one of the beauties of working on an open source project like Kubernetes is everyone can get some benefit out of it. And those are really, um, you know, the sum of all of the individual contributions is much larger than what the simple math would apply. And I think that's, um, you know, Kubernetes has been a huge success. I want to see more successes like that. Um, you know, working with Google and others in the open source ecosystem around infrastructure as a service and, you know, this broadening >>Domain of places where we can collaborate to make it easier for developers and operations teams and dev ops and sec ops to just get their jobs done. 
Um, you know, there's a lot more to do and I think open source is the best way to do that. All right. Well, Clayton Coleman, thank you so much for the update. It's really great to catch up. It was a pleasure. All right. Stay tuned for lots more coverage. The Google cloud next 2020 virtually I'm Stu Miniman. Thank you for watching the cube.

Published Date : Aug 25 2020

SUMMARY :

Stu Miniman talks with Clayton Coleman, architect for Kubernetes and OpenShift at Red Hat, as part of theCUBE's Google Cloud Next OnAir 2020 coverage. Topics include the health of the Kubernetes community in 2020 and the stability focus of the 1.19 release, project governance and conformance work, the growth of operators and tooling such as controller-runtime and the Operator SDK, deepening integration between public cloud managed services and Kubernetes, edge computing and the abstractions still missing for managing many clusters and locations, the shift from application-centric to service-centric thinking, and Red Hat's collaboration with Google on projects like Istio and Knative.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Clayton ColemanPERSON

0.99+

ClaytonPERSON

0.99+

GoogleORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

thousandsQUANTITY

0.99+

first contributionQUANTITY

0.99+

twoQUANTITY

0.99+

2020DATE

0.99+

EuropeLOCATION

0.99+

bothQUANTITY

0.98+

oneQUANTITY

0.98+

KubernetesTITLE

0.98+

Red HatORGANIZATION

0.98+

Stu middlemanPERSON

0.98+

OneQUANTITY

0.97+

last yearDATE

0.97+

pandemicsEVENT

0.97+

LinuxTITLE

0.97+

single clusterQUANTITY

0.96+

single machineQUANTITY

0.96+

CNCFORGANIZATION

0.96+

one clusterQUANTITY

0.94+

each stepQUANTITY

0.94+

todayDATE

0.94+

this yearDATE

0.92+

dot 20COMMERCIAL_ITEM

0.91+

IstioORGANIZATION

0.91+

KubernetesORGANIZATION

0.9+

OpenShiftORGANIZATION

0.89+

K nativeORGANIZATION

0.88+

customersQUANTITY

0.88+

Google cloudTITLE

0.88+

next couple of yearsDATE

0.85+

19QUANTITY

0.84+

yearsDATE

0.84+

Google CloudTITLE

0.81+

one cubeQUANTITY

0.81+

lastDATE

0.8+

IPV sixTITLE

0.79+

red hatORGANIZATION

0.77+

'20DATE

0.77+

dot 19COMMERCIAL_ITEM

0.76+

RELTITLE

0.74+

last few yearsDATE

0.68+

Sam Werner, IBM & Brent Compton, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>>From around the globe, it's the Cube with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >>And welcome back to the Cube's coverage of the KubeCon + CloudNativeCon Europe 2020 virtual event. I'm Stu Miniman, and happy to welcome back to the program two of our Cube alumni. We're gonna be talking about storage in this Kubernetes and container world. First of all, we have Sam Werner. He is the vice president of storage offering management at IBM, and joining him is Brent Compton, senior director of storage and data architecture at Red Hat. Brent, thank you for joining us. We get to really dig into the combined IBM and Red Hat activity in this space; of course, both companies have been very active here since the acquisition, so we're excited to hear what's going forward. Sam, maybe we could start with you as the tee-up. Both Red Hat and IBM have had their conferences this year, and we've heard quite a bit about how Red Hat's solutions and open source activity are really a foundational layer for much of what IBM is doing when it comes to storage. What does that mean today? >>First of all, I'm really excited to be virtually at KubeCon this year, and I'm also really excited to be with my colleague Brent from Red Hat. This is, I think, the first time that IBM Storage and Red Hat Storage have been able to get together and really articulate what we're doing to help our customers in the context of Kubernetes, and also with OpenShift, the things we're doing there. So I think you'll find, as we talk today, that there's a lot of work we're doing to bring together the core capabilities of IBM storage, which have been helping enterprises with their core applications for years, alongside the incredible open source capabilities being developed by Red Hat, and how we can bring those together to help customers continue moving forward with their initiatives around Kubernetes and rebuilding their applications to be develop once, deploy anywhere, which runs into quite a few challenges for storage. So, Brent, I'm excited to talk about all the great things we're doing, and excited to share it with everybody else at KubeCon. >>Yes. So of course, containers, when they first came out, were for stateless environments, and we knew we'd seen this before. Those of us who lived through that wave of virtualization know you kind of have a first-generation solution for which applications and which environments can be used. But as we've seen the huge explosion of containers and Kubernetes, there's going to be a maturation of the stack, and storage is a critical component of that. So maybe up front, if you could bring us up to speed; you're steeped in a long history in this space. What are the challenges you're hearing from customers, and where are we today in 2020? >>Thanks, Stu. The most basic apps out there, I think, are just traditional apps with databases: apps that have databases like Postgres, longstanding apps out there that have databases like Db2, so traditional apps that are moving towards a more agile environment. That's where we've seen, in fact, our collaboration with IBM and particularly the Db2 team.
And that's where we've seen is they've gone to a micro services container based architecture we've seen pull from the market place. Say, you know, in addition to inventing new Cloud native APS, we want our tried true and tested perhaps I mean such as DB two, such as MQ. We want those to have the benefits of a red hat, open shift, agile environment. And that's where the collaboration between our group and Sam's group comes in together is providing the storage and data services for those state labs. >>Great, Sam, you know I IBM. You've been working with the storage administrator for a long time. What challenges are they facing when we go to the new architectures is it's still the same people it might There be a different part of the organization where you need to start in delivering these solutions. >>It's a really, really good question, and it's interesting cause I do spend a lot of time with storage administrators and the people who are operating the I T infrastructure. And what you'll find is that the decision maker isn't the i t operations or storage operations. People These decisions about implementing kubernetes and moving applications to these new environments are actually being driven by the business lines, which is, I guess, not so different from any other major technology shift. And the storage administrators now are struggling to keep up. So the business lines would like to accelerate development. They want to move to a developed, once deploy anywhere model, and so they start moving down the path of kubernetes. In order to do that, they start, you know, leveraging middleware components that are containerized and easy to deploy. And then they're turning to the I T infrastructure teams and asking them to be able to support it. And when you talk to the storage administrators, they're trying to figure out how to do some of the basic things that are absolutely core to what they do, which is protecting the data in the event of a disaster or some kind of a cyber attack, being able to recover the data, being able to keep the data safe, ensuring governance and privacy of the data. These things are difficult in any environment, but now you're moving to a completely new world and the storage administrators have ah tough challenge out of them. And I think that's where IBM and Red Hat can really come together with all of our experience and are very broad portfolio with incredibly enterprise hardened storage capabilities to help them move from their more traditional infrastructure to a kubernetes environment. >>Maybe if you could bring us up to date when we look back, it, like open stack of red hat, had a few projects from an open source standpoint to help bolster the open source or storage world in the container world. We saw some of those get boarded over. There's new projects. There's been a little bit of argument as to the various different ways to do storage. And of course, we know storage has never been a single solution. There's lots of different ways to do things, but, you know, where are we with the options out there? What's that? What's what's the recommendation from Red Hat and IBM as to how we should look at that? >>I wanna Bridget question to Sam's earlier comments about the challenges facing the storage admin. So if we start with the word agility, I mean, what is agility mean for it in the data world. We're conscious for agility from an application development standpoint. But if you use the term, of course, we've been used to the term Dev ops. 
But if we use the term data ops, what does that mean? What does that mean to you in the past? For decades, when a developer or someone deploying production wanted to create new storage or data, resource is typically typically filed a ticket and waited. So in the agile world of open shift in kubernetes, it's everything is self service and on demand or what? What kind of constraints and demands that place on the storage and data infrastructure. So now I'll come back to your questions. Do so yes. At the time, that red hat was, um, very heavily into open stack, Red Hat acquired SEF well acquired think tank and and a majority of the SEF developers who are most active in the community. And now so and that became the de facto software defying storage for open stack. But actually for the last time that we spoke at Coop Con and the Rook project has become very popular there in the CN CF as away effectively to make software defined storage systems like SEF. Simple so effectively. The power of SEF, made simple by rook inside of the open shift operator frame where people want that power that SEF brings. But they want the simplicity of self service on demand. And that's kind of the diffusion. The coming together of traditional software defined storage with agility in a kubernetes world. So rook SEF, open shift container storage. >>Wonderful. And I wonder if we could take that a little bit further. A lot of the discussion these days and I hear it every time I talk to IBM and Red Hat is customers air using hybrid clouds. So obviously that has to have an impact on storage. You know, moving data is not easy. There's a little bit of nuance there. So, you know, how do we go from what you were just talking about into a hybrid environ? >>I guess I'll take that one to start and Brent, please feel free to chime in on it. So, um, first of all, from an IBM perspective, you really have to start at a little bit higher level and at the middleware layer. So IBM is bringing together all of our capabilities everything from analytics and AI. So application, development and, uh, in all of our middleware on and packaging them up in something that we call cloud packs, which are pre built. Catalogs have containerized capabilities that can be easily deployed. Ah, in any open shift environment, which allows customers to build applications that could be deployed both on premises and then within public cloud. So in a hybrid multi cloud environment, of course, when you build that sort of environment, you need a storage and data layer, which allows you to move those applications around freely. And that's where the IBM storage suite for cloud packs was. And we've actually taken the core capabilities of the IBM storage software to find storage portfolio. Um, which give you everything you need for high performance block storage, scale out, um, file storage and object storage. And then we've combined that with the capabilities, uh, that we were just discussing from Red Hat, which including a CS on SEF, which allow you, ah, customer to create a common, agile and automated storage environment both on premises and the cloud giving consistent deployment and the ability to orchestrate the data to where it's needed >>I'll just add on to that. I mean that, as Sam noted and is probably most of you are aware. Hybrid Cloud is at the heart of the IBM acquisition of Red Hat with red hat open shift. 
The stated intent of red hat open shift is to be to become the default operating environment for the hybrid cloud, so effectively bring your own cloud wherever you run. So that that is at the very heart of the synergy between our companies and made manifest by the very large portfolios of software, which would be at which have been, um, moved to many of which to run in containers and embodied inside of IBM cloud packs. So IBM cloud packs backed by red hat open shift on wherever you're running on premises and in a public cloud. And no, with this storage suite for cloud packs that Sam referred to also having a deterministic experience. That's one of the things as we work, for instance, deeply with the IBM DB two team. One of the things that was critical for them, as they couldn't have they couldn't have their customers when they run on AWS have a completely different experience than when they ran on premises, say, on VM, where our on premises on bare metal critical to the DB two team t give their customers deterministic behavior wherever they can. >>Right? So, Sam, I I think any of our audience that it followed this space have heard Red House story about open shift in how it lives across multiple cloud environments. I'm not sure that everybody is familiar with how much of IBM storage solutions today are really this software driven. So ah, And therefore, you know, if I think about IBM, it's like, okay, and by storage or yes, it can live in the IBM Cloud. But from what I'm hearing from Brent in you and from what I know from previous discussion, this is independent and can live in multiple clouds, leveraging this underlying technology and can leverage the capabilities from those public cloud offers. That right, Sam? >>Yeah, that's right. And you know, we have the most comprehensive portfolio of software defined storage in the industry. Maybe to some, it's ah, it's a well kept secret, but those that use it No, the breadth of the portfolio. We have everything from the highest performing scale out file System Teoh Object store that can scale into the exabytes. We have our block storage as well, which runs within the public clouds and can extend back to your private cloud environment. When we talk to customers about deploying storage for hybrid multi cloud in a container environment, we give them a lot of houses to get there. We give them the ability to leverage their existing san infrastructure through the CS I drivers container storage interface. So our whole, uh, you know, physical on Prem infrastructure supports CS I today and then all the software that runs on our arrays also supports running on top of the public clouds, giving customers then the ability to extend that existing san infrastructure into a cloud environment. And now, with storage suite for cloud packs a sprint described earlier, we give you the ability to build a really agile infrastructure, leveraging the capabilities from Red Hat to give you a fully extensible environment and a common way of managing and deploying both on Prem and in the cloud. So we give you a journey with our portfolio to get from your existing infrastructure. Today, you don't have to throw it out it started with that and build out an environment that goes both on Prem and in the cloud. >>Yeah, Brent, I'm glad that you started with database, cause it's not something that I think most people would think about. You know, in a kubernetes environment, you Do you have any customer examples you might be able to give? Maybe Anonymous? Of course. 
Just talking about how those mission critical applications can fit into the new modern architect. The >>big banks. I mean, just full stop the big banks. But what I'd add to that So that's kind of frequently they start because applications based on structured data remain at the heart of a lot of enterprises. But I would say workload, category number two, our is all things machine Learning Analytics ai and we're seeing an explosion of adoption within the open shift. And, of course, cloud pack. IBM Cloud private for data, is a key market participant in that machine learning analytic space. So an explosion of the usage of of open shift for those types of workloads I was gonna touch just briefly on an example, going back to our kind of data data pipeline and how it started with databases, but it just it explodes. For instance, data pipeline automation, where you have data coming into your APS that are kubernetes based that our open shift based well, maybe we'll end up inside of Watson Studio inside of IBM ah, cloud pack for data. But along the way, there are a variety of transformations that need to occur. Let's say that you're a big bank. You need Teoh effectively as it comes in. You need to be able to run a CRC to ensure to a test that when when you modify the data, for instance, in a real time processing pipeline that when you pass it on to the next stage that you can guarantee well that you can attest that there's been no tampering of the data. So that's an illustration where it began, very with the basics of basic applications running with structured data with databases. Where we're seeing the state of the industry today is tremendous use of these kubernetes and open shift based architectures for machine learning. Analytics made more simple by data pay data pipeline automation through things like open shift container storage through things like open shift server lis or you have scale double functions and what not? So yeah, it began there. But boy, I tell you what. It's exploded since then. >>Yeah, great to hear not only traditional applications, but as you said so, so much interest. And the need for those new analytics use cases s so it's absolutely that's where it's going. Someone. One other piece of the storage story, of course, is not just that we have state full usage, but talk about data protection, if you could, on how you know things that I think of traditionally my backup restore and like, how does that fit into the whole discussion we've been having? >>You know, when you talk to customers, it's one of the biggest challenges they have honestly. And moving to containers is how do I get the same level of data protection that I use today? Ah, the environments are in many cases, more complex from a data and storage perspective. You want Teoh be able to take application consistent copies of your data that could be recovered quickly, Uh, and in some cases even reused. You can reuse the copies, for they have task for application migration. There's there's lots of or for actually AI or analytics. There's lots of use cases for the data, but a lot of the tools and AP eyes are still still very new in this space. IBM has made, uh, prior, uh, doing data protection for containers. Ah, top priority for our spectrum protect suite. And we provide the capabilities to do application aware snapshots of your storage environment so that a kubernetes developer can actually build in the resiliency they need. 
As they build applications in a storage administrator can get a pane of glass Ah, and visibility into all of the data and ensure that it's all being protected appropriately and provide things like S L A. So I think it's about, you know, the fact that the early days of communities tended to be stateless. Now that people are moving some of the more mission critical workloads, the data protection becomes just just critical as anything else you do in the environment. So the tools have to catch up. So that's a top priority of ours. And we provide a lot of those capabilities today and you'll see if you watch what we do with our spectrum. Protect suite will continue to provide the capabilities that our customers need to move their mission. Critical applications to a kubernetes environment. >>Alright And Brent? One other question. Looking forward a little bit. We've been talking for the last couple of years about how server lists can plug into this. Ah, higher kubernetes ecosystem. The K Native project is one that I, IBM and Red Hat has been involved with. So for open shift and server lis with I'm sure you're leveraging k native. What is the update? That >>the update is effectively adoption inside of a lot of cases like the big banks, but also other in the talk, uh, the largest companies in other industries as well. So if you take the words event driven architecture, many of them are coming to us with that's kind of top of mind of them is the need to say, you know, I need to ensure that when data first hits my environment, I can't wait. I can't wait for a scheduled batch job to come along and process that data and maybe run an inference. I mean, the classic cases you're ingesting a chest X ray, and you need to immediately run that against an inference model to determine if the patient has pneumonia or code 19 and then kick off another serverless function to anonymous data. Just send back in to retrain your model. So the need. And so you mentioned serverless. And of course, people say, Well, I could I could handle that just by really smart batch jobs, but kind of one of the other parts of server less that sometimes people forget but smart companies are aware of is that server lists is inherently scalable, so zero to end scalability. So as data is coming in, hitting your Kafka bus, hitting your object store, hitting your database and that if you picked up the the community project to be easy, Um, where something hits your relational database and I can automatically trigger an event onto the Kafka bus so that your entire our architecture becomes event >>driven. All right. Well, Sam, let me give you the funding. Let me let you have the final word. Excuse me on the IBM in this space and what you want them to have his takeaways from Cube con 2020 Europe. >>I'm actually gonna talk to I think, the storage administrators, if that's OK, because if you're not involved right now in the kubernetes projects that are happening within your enterprise, uh, they are happening and there will be new challenges. You've got a lot of investments you've made in your existing storage infrastructure. We had IBM and Red Hat can help you take advantage of the value of your existing infrastructure. Uh, the capabilities, the resiliency, the security of built into it with the years. And we can help you move forward into a hybrid, multi cloud environment built on containers. We've got the experience and the capabilities between Red Hat and IBM to help you be successful because it's still a lot of challenges there. 
But our experience can help you implement that with the greatest success. Appreciate it. >>Alright, Sam and Brent, thank you so much for joining. It's been excellent to be able to watch the maturation in this space over the last couple of years. >>Thank you. >>Alright, we'll be back with lots more coverage from KubeCon + CloudNativeCon Europe 2020, the virtual event. I'm Stu Miniman, and thank you for watching the Cube.
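The pipeline attestation Brent describes in this conversation, running a check as data moves from one stage to the next so you can prove nothing was tampered with, comes down to sealing each record with a digest and verifying it on arrival. This is only a sketch of the idea using a SHA-256 digest from the Python standard library; the field names and sample record are invented for illustration, not taken from any IBM or Red Hat product.

```python
import hashlib
import json


def seal(record: dict) -> dict:
    """Attach a digest of the payload so the next pipeline stage can
    verify that nothing changed in transit."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return {"payload": record, "sha256": hashlib.sha256(payload).hexdigest()}


def verify(envelope: dict) -> dict:
    """Recompute the digest on arrival; refuse to process on mismatch."""
    payload = json.dumps(envelope["payload"], sort_keys=True).encode("utf-8")
    if hashlib.sha256(payload).hexdigest() != envelope["sha256"]:
        raise ValueError("payload was modified between pipeline stages")
    return envelope["payload"]


# Stage one seals the record; stage two verifies before transforming it.
sealed = seal({"record_id": "anon-123", "status": "pending"})
record = verify(sealed)
```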
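Brent's earlier point about "self-service, on-demand" storage, in practice, usually means a developer asks Kubernetes for a volume and whatever sits behind the StorageClass, whether that's Ceph via Rook, OpenShift Container Storage, or an existing SAN through a CSI driver, satisfies the claim. A rough sketch with the official Python Kubernetes client; the namespace, claim name and StorageClass name here are assumptions for illustration, not product defaults you should rely on.

```python
from kubernetes import client, config


def request_volume(name: str, size: str = "10Gi",
                   storage_class: str = "ceph-rbd",
                   namespace: str = "demo") -> None:
    """Create a PersistentVolumeClaim; whichever provisioner backs the
    StorageClass (Rook/Ceph, a SAN via CSI, a cloud disk) fills it."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc)


# request_volume("db2-data")  # hypothetical claim name, for illustration only
```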

Published Date : Aug 18 2020


Clayton Coleman, Red Hat | Red Hat Summit 2020


 

>>From around the globe, it's the Cube with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >>Hi, I'm Stu Miniman, and this is the Cube's coverage of Red Hat Summit 2020. Of course, the event this year is digital. We're talking to Red Hat executives, partners and customers where they are around the globe, pulling them in remotely. Happy to welcome back to the program one of our Cube alumni on a very important topic, of course, Red Hat OpenShift, and joining me is Clayton Coleman, who's the OpenShift chief architect with Red Hat. Clayton, thanks so much for joining us. >>Thank you for having me today. >>All right. So before we get into the product, it's probably worthwhile that we talk about what's happening in the community, specifically Kubernetes and the whole cloud native space. Normally we would have gotten together; I would have seen you at KubeCon at the end of March. Instead, here we are at the end of April, looking out at more CNCF events later this year. But first, Red Hat Summit is a great open source event with a broad community, so I would really love your viewpoint on what's happening in that ecosystem. >>It's been a really interesting year, obviously. With an open source community, we react to this like we always react to all the things that go on in open source. People come to the community, and sometimes they have more time and sometimes they have less time. I think just from a community perspective, there have been a lot of people reaching out to their colleagues outside of their companies, to their friends and coworkers and all of the different participants in the community, and a lot of people getting together for a little bit of extra time, trying to connect virtually where they can't connect physically. And it's been great to at least see where we've come this year. We haven't had KubeCon, and that'll be coming up later this year, but Kubernetes just had the 1.18 release, and I think Kubernetes is moving into that phase where it's a mature open source project. We've got a lot of the processes down. I'm really happy with the work the steering committee has gone through; we handed off the last of the bootstrap steering committee seats to the new, fully elected steering committee last year, and it's gone absolutely smoothly, which has been phenomenal. The core project is trying to be a little bit more stable and to focus on closing out those loose ends, being a little bit more conservative to change. And at the same time, the ecosystem has really exploded in a number of directions as Kubernetes becomes more of a bedrock technology for enterprises and individuals and startups and everything in between. We've really seen a huge amount of innovation in the space, and every year it just gets bigger and bigger. There are a lot of exciting projects where I have never even talked to somebody on the Kubernetes project, but they have made and built and solved problems for their environments without us ever having to be involved, which I think is success. >>Yeah, Clayton, one of the challenges when you talk to practitioners out there is that just keeping up with the pace of change can really be challenging. Something we saw really acutely was Docker rolling out updates every six weeks.
Most customers aren't going to be able to change fast enough to keep up with things you love your view point both is toe really what the CN CF says, as well as how Red Hat thinks of products. So you talked about you know, kubernetes 1.18. My understanding, even Google isn't yet packaging and offering that version there. So there's a lag between things. And as we start talking about managing across lots of clusters, how does Red Hat think of this? How should customers think about this? How do we make sure that we're, you know, staying secure and keeping updated on things without getting run over by the constant treadmill of >>change? That the interesting part about kubernetes Is it so much more than just that core project? You know, no matter what any of us in the in the core kubernetes project or in the products that red hat that build around open shift and layers on top, there's a There's a whole ecosystem of components that most people think of this fundamental to accomplishing building applications deploying them, running them, Whether it's their continuous integration pipelines or it's their monitoring stacks, we really as communities has become a little bit more conservative. >>Um, I >>think we really nail down our processes for taking that change from the community, testing it. You know, we run tens of thousands of automation tests a week on the latest and greatest kubernetes code, given time to soak, and we did it together with all those pieces of the ecosystem and then make sure that they work well together. And I've noticed over the last two years that the rate of oops we missed that in KUBERNETES 1 17 that by the time someone saw it, people are already using that that started to go down for us, it really hasn't been about the pace of keeping up with the upstream. But it's about making sure that we can responsibly pull together all the other ecosystem components that are still have much newer and a little bit. How do we say, Ah, they are then the exciting phase of their development while still giving ah predictable, reliable update stream. I would say that the challenges that most people are going to see is how they bring together all those pieces. And that's something that, on open shift, we think of as our goal is to help pull together all the pieces of this ecosystem, Um, and to make some choices for customers that makes sense and to give them flexibility where it's not clear yet what the right choice might be or where different people could reasonably disagree. And I'm really excited. I feel like we've got our We have a release cadence down and we're shipping the latest Cube after it's had time to quickly review, and I think we've gotten better and better at that. So I'm really proud of the team on Red Hat and how they've worked within the community so that everybody benefits from that in that testing of that stability. >>Great. I'd like to teach here, you dig in a little bit on the application side what's happening from the work loads that customers are using? Ah, what other innovations happening around that space? And how is Red Hat really helping? Really, The the infrastructure team and the developer team work even closer together, like Red Hat has done for a long time. >>This is This is a great question. I say There's two key, um, two key groups coming together. People are bringing substantial important critical production workloads, and they expect things both to just work, but also to be able to understand it. And they're making the transition. 
Ah, lot of folks I talked to were making the transition from previous systems they've got. They've been running open shift for a while, or they've been running kubernetes for a while, and they're getting ready to move, um, a significant portion of their applications over. And so, you know, in the early days of any project, you get the exciting Greenfield development and you get to go play with new technologies. But as you start moving your 1st 1 and then 10 and then 100 of your core business applications from the EMS or from bare metal into containers, you're taking advantage of that technology in a responsible way. And so the the expectations on us as engineers and community members is to really make sure that we're closing out the little stuff. You know, no bug is too small, but it can't trip up someone's production applications. So seeing a lot of that whether it's something new and exciting like, Um uh, model is a service or ai workloads or whether it's traditional big enterprise transaction processing. APS on the other side on that development, um, model I think we're starting to see phase to our community is 2.0, in the community, which is people are really leveraging the flexibility and the power of containers, things that aren't necessarily new to people who had. We got into containers early and had a chance to go through a couple of iterations. But now people are starting to find patterns that up level development teams, so being able to run applications the same way on a local machine as in a production environment. Well, most production environments are there now, and so people are really having toe. They're having to go through all of their tools and saying, Well, does this process that works for an individual developer also work when I want to move it there, my production or staging environments to production, and so on. New projects like K native and tectonic, which are kubernetes native, that's just one part of the ecosystem around development. On top of kubernetes, there's tons of exciting projects out there from companies that have adopted the full stack of kubernetes. They built it into their mindset, this idea of flexible infrastructure, and we're seeing this explosion of new ways where kubernetes is really just a detail, and containers are just the detail and the fact that it's running this little thing called Docker down at the heart of it. Nobody talks about anymore, and so that that transition has been really exciting. I think there's a lot that we're trying to do to help developers and administrators see eye to eye. And a lot of it's learning from the customers and users out there who really paved the way the which is the open source way. It's learning from others and helping others benefit from that. >>Yeah, I think you bring up a really important point we've been saying for a couple of years. Now that you know KUBERNETES should get to the point where it's boring and boring in a way also cause it's gonna be baked in everywhere we saw from basically customers just taking the code, really spending a lot of their own things by building the stack to, of course, lots of customers have used open shift over the year to If I'm adopting Public Cloud more and more, they're using those services from that standpoint. Can you talk a bit about how Red Hat is really integrating with public clouds? And you know your architectural technical philosophy on that? And how might that be? 
Differ from some other companies that you might call a little bit more, you know, Cloud of Jason, as opposed to being deeply integrated with the public cloud. >>The interesting thing about Kubernetes is that while it was developed on top of the clouds, it wasn't really built from Day one assuming a cloud underneath it. And I think that was an opportunity that we really missed. And to be fair, we had to make the thing work first before we depended on these unreliable clouds. You know, when we started, the clouds were really hitting their stride on stability and reliability, and people were it was the hot was becoming the obvious choice to some of what we've tried to do is take flexible infrastructure is a given, um, assume that the things that the cloud provides should be programmed for the for the benefit of the developer and the application, and I think that's a that's a key trend is we're not using the cloud because our administration teams want us. We're using the cloud because it makes us more powerful developers. That enables new scenarios. It shortens the the time between idea reality. What we have done in open shift is we've really built around The idea of open shift running on a cloud should take advantage of that cloud to an extreme degree, which is infrastructure could be flexible. The machines in that cluster need to come and go according to the demands of the applications on top of it. So giving a little bit more power to the cluster and taking a little bit of way from the cloud I'm. But that benefits. That also needs to benefit that those who are running on premise because I think, as you noted, our goal is you want this ubiquitous kubernetes environment everywhere, and the operations teams and the development teams and the Dev Ops teams in between need to have a consistent environment and so you can do this on the cloud. But you don't have that flexibility on premise. You've lost something. And so what we've tried to do as well is to think about those ideas that are what we think of as quote unquote cloud native that starts with a mutable operating systems. It starts with everything being declarative and working backwards from, you know, I wanna have 15 machines and then the cluster or controllers on the cluster say, Oh, well, you know, one of the machines has gone bad. Let's replace it on the cloud. You ask for a new I'm cloud infrastructure provider or you ask the the cloud a p i for a new machine, and then you replace it automatically, and no one knows any better on premise. We'd love to do the same thing with both bare metal virtualization on top of kubernetes. So we have that flexibility to say you may not have all of the options, but we should certainly be able to say, Oh, well, this hardware is bad or the machine stopped, so let's reboot it, and there's a lot of that same mindset that could be applied. We think that'll, um if you need virtualization, you can always use it. But virtualization is a layer on top benefits from some of the same things that all the other extensions and applications on top of kubernetes competitive trump. So trying to pay that layer and make sure that you have flexible, reliable storage on premise through our SEF and red hat storage products, which are built on top of the cluster exactly like virtualization, is both on top of the cluster. So you get cloud native storage mixed in working with those teams toe. Take those operational best practices. You know there's well, I think one of the things that interests me is no. 
1 20 years ago, who was running an early version of SEF wouldn't have some approach to run these very large things that scales organizations like CERN have been using SEF for over a decade at extremely large scales. Some of what our mindset is we think it's time to bake some of that knowledge actually into our software for a very long time. We've kind of been building out and adding more and more software, but we always left the automation and the the knowledge about how that software supposed to be run to the side. And so by taking that and we talked about operators. Kubernetes really enshrines. This principle is taking that idea, taking some of that operational knowledge into the software we ship. Um, though that software can rely on kubernetes open shift tries to hide the details of the infrastructure underneath and our goal. I think in the long run it will just make everybody's lives easier. I shouldn't have to ship you a SEF admin for you to be successful. And we think we think there's a lot more room here that's really gonna improve how operations teams work, that the software that they use day to day. >>So Clinton you mentioned virtualization is one of the topics in there. Of course, virtualization is very prevalent in a customer's data center environment today. Red Hat open shift, oftentimes in data centers, is sitting on BM ware environments. Of course. Recently, VM Ware announced that they have kubernetes baked into the solution, and red hat has open shift with red hat virtualization. Maybe, you know, without going into too much depth, and you probably have breakouts and white papers on this. But you know what kind of decision point should customers be thinking about when they're deciding? Do I do this in bare metal. Do I do it in virtualization? What are some of the, you know, just high level trade offs there when they need to make those decisions, >>I think it's, um I think the 1st 1 is Virtualization is a mature technology. It's a known quantity for many organizations, and so those who are comfortable with virtualization, I'd say, like any responsible, uh, architecture engineering team. You don't want to stop using something that's working well just because you can. And a lot of what I would see as the transition that companies on is for some organizations without a big investment in virtualization. They don't see the need for it anymore, except as maybe a technical detail of how they isolate insecure workloads. One of the great things about virtualization technology that we're all aware of over the last couple years is it creates a boundary between work loads and the underlying environment. That doesn't mean that the underlying environment and containers can't be as secure or benefit from those same techniques. And so we're starting to see that in the community, this kind of spectrum of virtualization all the way from the big traditional virtualization to very streamlined, stripped down virtualization wrappers around containers. Um, like some of the cloud providers use for their application environments. So I'm really excited about the open source. Community is touching each of these points on the spectrum. Some of our goals are if you're happy with your infrastructure provider, we want to work well with, and that's kind of the pragmatic of everyone's on a different step in that journey. The benefit of containers is no matter how fast you make of VM, it's never gonna be quite as fast, is it containers. 
And it's never gonna be quite as easy for a developer to run on their laptop. And I think working through this is there's still a lot of work that we as a community to do around, making it easier for developers to build containers and test them locally in smaller environments. But all of that flexibility can still benefit from virtualization under later or virtualization used as an isolation technology. So projects like Kata and some of the work that's being done in the open source community around projects like firecracker taking the same, um, open source ideas and remixing them a different points gives us a lot of flexibility. So I would say, um, I'm actually less interested in virtualization then all of the other technologies that are application centric and at the heart of it, a VM isn't really a developer centric idea. It's specifically an administrative concept that benefits the administrator, and developers can take advantage of it. But I think all of the capabilities that you think of when you think about building an application like scaling out and making sure patches are applied, being able to roll back separating your configuration on then all of the hundreds of other levels of complexity that will add around that like service MASH and the ability to gracefully tolerate failures in your database. These were where I think, um, virtualization needs to work with the platform rather than being something that dominates how we think about the platform. It's application first, not being first. >>Yeah, no, you're absolutely right that the critique I've always given, you know for a number of years now is if you look at virtualization, the promise was, let's take that old application that probably should have been updated and just shove it in a VM and never think about it again. That's not doing good things for the user. So if I look at that at one end of the spectrum away at the other end of the spectrum, trying not to think about infrastructure, you mentioned K native s 01 of the things that you know I've been digging in tryingto learn more about at Red Hat Summit has really been the open shift server lists. So give us the update on that piece. Um, you know, that's obviously very different discussion than what we were just having from a virtualization standpoint. Eso How does open shift look at server lists? How does that tie into what? You know, if I'm doing server, listen, Amazon versus you know some of the other open source options for serverless. How should I be thinking about that? >>There's a lot of great choices on the spectrum out there. I think one of the interesting things and I love the word spectrum here because cane native kind of sits in a spot where it tries to be, as the name says, it tries to be as kubernetes native as possible, which lets you tap into some of those additional capabilities when you need it. And one of the things I've always appreciate it is the more restrictive framework is usually the better. It is doing that one thing and doing it really well. We learned this with rails. We learned this with no Js. And as people have built over the years, the idea of simple development platforms. The core function idea is a great simple idea, but sometimes you need to break out of that. You need extra flexibility or your application needs to run longer or slow Start is actually an issue. One of the things I think is most interesting about K native and I see comers and user. I think this way it's a good point. 
Um, that gives you some of the flexibility of kubernetes and a lot of the simplicity of, um, the functions is a service, but I think that there's going to be an inevitable set of use cases that tie into that which are simpler where open organization has a very opinionated way of running applications, and I think that flexibility will really benefit K native. Whereas some of the more opinionated remarks around server lists lose a little bit of that. So that's one dimension that I still think a native is well positioned to kind of capture the broadest possible audience, which for kubernetes and Containers was kind of our mindset. We wanted to solve enough of the problems that you can solve. You can run all your software. We don't have to solve all those problems to such a level that there's endless complexity, although we've been accused of having endless complexity and Cooper days before, but just trying to think through what are the problems that everyone's going to have to give them a way out? I'm at the same time for us, when we think about prioritization functions is service about integration. It's about taking applications and connecting them, connecting them through kubernetes. And so it really depends on identity and access to data and tying that into your cloud environment. If you're running on top of a cloud or tying it into your back end databases, if your on premise, >>I >>think that is where the ecosystem is still working to bring together and standardize some of those pieces in kubernetes or on top of Kubernetes. What I'm really excited about is the team as much. You know, there's been this core community effort to get a native to a G, a quality. Alongside that, the open shift serverless team has been trying to make it a dramatically simpler action. If you have kubernetes and open shift, it's a one click action to get started with, Um Kay native and just like any other technology. How accessible it is determines how easy users find it to get started and to build the applications they need. So for us, it's not just about the core technology. It's about someone who's not familiar with Serverless or not familiar with kubernetes. Bring up an editor and build a function and then deploy it on top of open shift. See it scale out like a normal kubernetes application, not having to know about pods or persistent volumes or notes. And so these air, these are some of the steps. I've been really proud that the team's done. I think there's a huge amount of innovation that will happen this year and next year, as the maturity of the kubernetes ecosystem really grows up, we'll start to see standardized technologies, for I'm sharing identity across multiple clouds across multiple environments. It's no good if you've got these applications on the cloud that need to tie into your corporate L dap. But you can't connect your corporate held up to the cloud. And so your applications need 1/3 identity system. Nobody wants 1/3 identity system. And so, working through some of this thing where the challenges I think that hybrid organizations are already facing and our job is just to work with them in the open source communities and with the cloud providers partner with them and open source so that the technologies in kubernetes fit very well into whatever environment they run it. Alright, >>well, Clayton, really appreciate all the updates there. I know the community is definitely looking forward to digging through some of the breakout sessions reading all the new announcements. 
And, of course, we look forward to seeing you and the team participating in many of the Kubernetes-related events happening later this year. >>That's right. It's gonna be a good year. >>All right. Thanks so much for joining us. I'm Stu Miniman, and as always, thank you for watching the Cube.

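Clayton's "operational knowledge baked into the software" point in the interview above is the operator pattern: declare a desired state, such as "I want 15 machines," and let a controller reconcile reality toward it forever, replacing anything that goes bad. Here is a toy reconcile loop in Python; the machine-listing and provisioning functions are stubbed-out placeholders rather than real cluster or cloud API calls, so this is a sketch of the control-loop shape, not of any actual operator.

```python
import time

DESIRED_MACHINES = 15  # declared desired state, e.g. read from a custom resource


def list_healthy_machines() -> list:
    """Placeholder: a real operator would query the cluster or cloud API."""
    return ["machine-%d" % i for i in range(14)]  # pretend one machine went bad


def provision_machine() -> str:
    """Placeholder: ask the infrastructure provider for a replacement."""
    return "machine-new"


def reconcile_once() -> None:
    healthy = list_healthy_machines()
    missing = DESIRED_MACHINES - len(healthy)
    for _ in range(max(missing, 0)):
        print("replacing failed machine with", provision_machine())


if __name__ == "__main__":
    # The control loop: observe, compare with desired state, act, repeat.
    for _ in range(3):          # a real controller would run indefinitely
        reconcile_once()
        time.sleep(5)
```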
Published Date : Apr 29 2020


Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>>Live from Barcelona, Spain, it's the Cube, covering KubeCon + CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >>Welcome back to the Cube's live coverage of KubeCon + CloudNativeCon 2019. I'm Stu Miniman, my co-host is Corey Quinn, and happy to welcome back to the program Doug Davis, who's a senior technical staff member and PM for Knative, and happens to be employed by IBM. Thanks so much for joining. >>Thanks for inviting me. >>Alright. So, Corey, I got really excited when we saw this, because serverless is something he's been doing for a while. I've been poking in, trying to understand all the pieces, have done serverless conf a couple of times, and I guess, lay out for our audience a little bit, you know, Knative. I look at it as kind of a bridging solution, but it's not just containers or serverless; we understand that world, they're spectrums, and there's overlap. So maybe, as a setup, what is the serverless working group's charter? >>Right. So the serverless working group is a CNCF working group. It was originally started back in mid-2017 by the Technical Oversight Committee in the CNCF. They basically wanted to know what serverless is all about: is this new technology something we should get involved with, stuff like that. So they started up the serverless working group, and our main mission was just doing some investigation. And so the output of this working group was a white paper, basically describing serverless, how it compares with the other "aaSes" out there, what are the good use cases for when to use it and when not to use it, common architectures, basically just explaining what the heck is going on in that space. Then we also produced a landscape document, basically laying out what's out there from a proprietary perspective as well as an open source perspective. And then the third piece, at the tail end of the white paper, was a set of recommendations for the TOC, or the CNCF in general: what should they do next? It basically came down to three different things. One was education: we want to educate the community on what serverless is, when it's appropriate, stuff like that. Two, I'm sorry, I'm getting the recommendations mixed up in my head, was what other projects could we pull into the CNCF, other serverless projects, and encourage them to join to grow the community. And, third, what should we do around interoperability? Because obviously, when it comes to open source and standards and stuff like that, we want interoperability and portability. And one of the low-hanging fruit they identified was, well, serverless seems to be all about events, so there's something in the eventing space we can do. And we recognized, well, if we could help the processing of events as they move from point A to point B, that might help people in terms of middleware, in terms of routing of events, filtering events, stuff like that. And so that's how the CloudEvents project got started, right? And so that's where most of the serverless working group members are nowadays: the CloudEvents working group, or project, and they're basically defining a specification around cloud events. You can kind of think of it as defining metadata to add to your current events, because we're not going to tell you, oh, here's yet another one-size-fits-all cloud event format, right?
It's Take your current events. Sprinkle a little extra metadata in there just to help routing. And that's really what it's all about. >> One of the first things people say about server list is quoted directly from the cover of Missing the Point magazine Server list Runs on servers. Wonderful. Thank you for your valuable contribution. Go away slightly less naive is, I think, an approach, and I've seen a couple of times so far at this conference. When talking to people that they think of it in terms of functions as a service of being able to take arbitrary code and run it. I have a wristwatch I can run arbitrary code on. That's not really the point. It's, I think you're right. It's talking more about the event model and what that unlocks As your application. Mohr less starts to become more self aware. Are you finding that acceptance of that point is taking time to take root? >> Yeah, I think what's interesting is when we first are looking. A serval is, I think, very a lot of people did think of service equals function of the service, and that's all it was. I think what we're finding now is this this mode or people are more open to the idea of sort of as you. I think you're alluding to merging of these worlds because we look at the functionality of service offers things like event base, which really only means is the messages coming in? It just happens to look like an event. Okay, fine. Mrs comes in you auto scale based upon, you know, loaded stuff like that scale down to zero is a one of the key. Thought it was really like all these other things are all these features. Why should you limit those two service? Why not a past platform? Why not? Container is a service. Why would you want those just for one little as column? And so my goal with things like a native though I'm glad you mentioned it is because I think Canada does try to span those, and I'm hoping it kind of merges them altogether and says, Look, I don't care what you call it. Use this piece of technology because it does what you need to do If you want to think of it as a pass. Go for I don't care. This guy over here he wants think that is a FAZ Great. It's the same piece of technology. Does the feature do what you need? Yes or no? Ignore that, nor the terminology around it more than anything else. >> So I agree. Ueda Good, Great discussion with the user earlier and he said from a developer standpoint, I actually don't want to think too much about which one of these pass I go down. I want to reduce the friction for them and make it easy. So you know, how does K native help us move towards that? You know, ideal >> world, right? And I think so fine. With what I said earlier, One of the things I think a native does, aside from trying to bridge all the various as columns is I also look a K native as a simplification of communities because as much as everybody here loves communities, it is kind of complicated, right? It is not the easiest thing in the world to use, and it kind of forced you to be a nightie expert which almost goes against the direction we were headed. When you think of Cloud Foundry stuff like that where it's like, Hey, you don't worry about this something, we're just give us your code, right? Cos well says, No, you gotta know about networks, Congress on values, that everything else it's like, I'm sorry, isn't this going the wrong way? Well, Kania tries to back up a little, say, give you all the features of Cooper Netease, but in a simplified platform or a P I experience that you can get similar Tokat. 
It's similar to what Cloud Foundry or Docker give you, but it gives you all the benefits of Kubernetes. And the important thing is, if for some reason you need to go around Knative because it's a little too simplified or opinionated, you can still go around it to get to the complicated stuff. It's not like you're leaving for a different world or entering a different world, because it's the same infrastructure: the stuff you deploy on Knative can integrate very nicely with the stuff you deploy through vanilla Kubernetes if you have to. So it really is a nice merging of these two worlds, and I'm really excited by that. >> One thing that I've always found strange about serverless is that at first it was defined by what it's not, and then it quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably AWS Lambda and many others, it comes down to: well, you can only run it for X amount of time, or it only runs in certain runtimes, or cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up unlocking down the road, just because those constraints move. Right now it might be a bit of a toy; I don't think it will stay that way, because it needs to become more capable. The big value proposition that I keep hearing around serverless, and that I've mostly bought into, has been that it's about business logic and solving the things that are core to your business, without even having to think about infrastructure. Where do you stand on that viewpoint? >> I completely agree. I think a lot of the limitations you see today are completely artificial. I kind of understand why they're there, because of the way things have progressed, but again, that's one reason I'm excited about Knative: a lot of those limitations aren't there. Now, Knative does have its own set of limitations, and personally I do want to try to remove those. Like I said, I would love it if Knative, aside from the serverless features it offers, became this simplified Kubernetes experience. If you think about what you can do with Kubernetes today, you can deploy a pod and it can run forever, until the system decides to crash for some reason. Why not do that with Knative? And you can with Knative: technically, I have demos that I've been running here where I set the min scale to one, it lives forever, and Knative doesn't care. So deploying an application through Knative or through Kubernetes, I don't care, it's the same thing to me. And so yes, I do want to merge those two worlds. I want to lower those constraints, as long as you keep the simplified model and support the eighty to ninety percent of use cases it's actually meant to address; leave the hard stuff for going around it a little.
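As a rough illustration of the min-scale demo Doug describes, here is a sketch of a Knative Service definition, written as a Python dict that mirrors the YAML you would normally apply with kubectl or the kn CLI. The service name and container image are placeholders; the autoscaling.knative.dev/minScale annotation is what switches between scale-to-zero and an always-on instance.

```python
# A minimal sketch of the Knative Service Doug describes. Knative manifests are
# normally written as YAML; the dict below just mirrors that structure so it can
# be printed and piped to `kubectl apply -f -`. With minScale "0" the service
# scales to zero when idle; "1" keeps one replica alive forever, which is the
# demo Doug mentions.
import json


def knative_service(name, image, min_scale):
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        # Knative autoscaling annotation: minimum number of replicas.
                        "autoscaling.knative.dev/minScale": str(min_scale),
                    }
                },
                "spec": {"containers": [{"image": image}]},
            }
        },
    }


if __name__ == "__main__":
    # JSON is valid YAML, so this output can be applied directly to a cluster
    # that has Knative Serving installed. The image name is a placeholder.
    print(json.dumps(knative_service("hello", "example.registry/hello:latest", 1), indent=2))
```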
>> Alright. So, Doug, oftentimes we get caught in this bubble of arguing over what we call it and how the different pieces fit. Yesterday you had a practitioner summit for serverless, so I want to hear: what are the practitioners excited about? What are they using today, and what are they asking for to help it become more usable and useful for them in the future? >> So, in full disclosure, we actually had kind of a quiet audience, so they weren't very vocal. But what little I did hear is that they seemed very excited by Knative, and I think a lot of it was because of what we were just talking about, the merging of the worlds. I do think there is still some confusion around, as you said, when to use one versus the other, and I think Knative is helping to bring those together, and I did hear some excitement around that. In terms of what people actually expect from us going into the future, to be honest, they didn't say a whole lot there. I have my own personal opinion, and a lot of it is what I already stated in terms of merging: stop having me pick a technology or pick a terminology. Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday. >> I think, like Linux before it — any technology, at some point; you saw this with virtualization, with cloud, with containers, with Kubernetes — we're now starting to see it with serverless, where some of its most vocal proponents are also the most obnoxious, in that they're looking at this from the perspective of: "What's your problem? I'm not even going to listen to the answer; the solution is my favorite technology." So to that end, today, what workloads are not appropriate for serverless, in your mind? >> So this is hardly the answer you want, because I have the whole IBM side of me running through my head. What's interesting is, I do hear people talk about "serverless is good for this and not this," or "Knative is good for this and not this." I hear those things, and I'm not sure I actually buy it. I actually think that the only limitations I've seen, in terms of what you should not run on something like Knative or any of these platforms, are whatever that platform actually binds you to. So, for example, on AWS they may have time limits on how long you can run. If that's a problem for you, don't use it. To me that's not an artifact of serverless; that's an artifact of that particular choice of how they implemented serverless. With Knative you don't have that problem; you can let it run forever if you want. So in terms of which workloads are good or bad, I honestly don't have a good answer, because I don't necessarily buy some of the stories I'm hearing. I personally think: try to run everything you can through something like Knative, and when it fails, go someplace else. It's the same story we had when containers first came around. People would ask when to use VMs versus containers, and my go-to answer was always: try containers first, your life will be a whole lot easier; when it doesn't work, then look at the other things. I don't want to pigeonhole something like serverless or Knative and say, "Don't even think about it for these things," because it may actually work just fine for you. I don't want people to believe negative hype, if that makes sense. >> And that's very fair. I tend to see most of the constraints around this as implementation details of specific providers, and that will dictate the answers to that question. I don't want to sound like I'm coming after you; that's a very thoughtful, measured response. >> Thank you, that's my usual response back. >> So let me give you the tough one.
The critique I had in Seattle when I looked at Knative is that there are a lot of serverless options out there, but when I talk to users, the number one out there is AWS Lambda, and number two is probably Azure Functions, and as of Seattle neither of those was fully integrated. Since then, I've talked to a little startup called TriggerMesh that has made some connections between Lambda and Knative, and there was an announcement a couple of weeks ago, KEDA, which is Azure and some kind of future path to get to Knative. So it feels like a maturity thing. What can you tell us about the big cloud guys? Obviously Google's involved, and IBM, Red Hat, and Oracle are involved in Knative. So where do those big cloud players sit? >> So, from my perspective, what I think Knative has going for it over the others is, one, a lot of the other guys do run on Kubernetes, but I feel like they treat Kubernetes the same as everything else — some of them can run on Kubernetes, Docker, anything else — so they're not necessarily tightly integrated and leveraging the Kubernetes features the way Knative is doing, and I think that's a little bit unique right there. But the other thing I think Knative has going for it is the community around it. People are noticing, as you said, that there are a lot of other players out there, and it's hard for people to choose. I think Google did a great job of sort of bringing the community together and saying: look, can we stop bickering and develop a common infrastructure, like Kubernetes is, that we can all then base our serverless platforms on? That rallying cry to bring the community together across a common base is something a little bit unique for Knative when you compare it with the others, and I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, obviously. >> Okay, so will there be a bridge to those other cloud players soon? Is that on the roadmap? >> For Knative itself? I'm not sure I can answer that one, because I haven't heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms, obviously we do that through the eventing side of Knative. But from a serving perspective, I'm not sure, to be honest. >> All right. Well, Doug Davis, we're done for this one. Really appreciate all the updates, and I definitely look forward to seeing the progress that the Serverless Working Group continues to make. So thank you so much. >> Thank you for having me. >> Alright, for Corey Quinn, I'm Stu Miniman, and we'll be back with more coverage here on theCUBE. Thanks for watching.

Published Date : May 22 2019



Keynote | Red Hat Summit 2019 | DAY 2 Morning


 

>> Ladies and gentlemen, please welcome Red Hat President of Products and Technologies, Paul Cormier. >> Welcome back to Boston. Welcome back, and welcome back after a great night last night, with our opening with Jim, and talking to, certainly, Satya and Ginni, and especially our customers. It was so great last night to hear our customers, how they set their goals and how they met their goals — all possible, certainly, with a little help from Red Hat, but all possible because of open source. And sometimes we all have to set goals. I'm going to talk this morning about what we as a company, and with the community, have set for our goals along the way. Sometimes you have to set audacious goals; they can really change the perception of what's even possible. If I look back, I can't think of anything, at least in my lifetime, that was more important, or such a big goal, as John F. Kennedy setting the goal for the American people to go to the moon. Believe it or not, I was really only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country and the energy to make that goal a reality. So let's compare and contrast a little bit where we were technically at that time. To win the space race, and even to get into the space race, there were some really big technical challenges along the way. Not that long ago, mathematical calculations were being shifted from brilliant people, who we trusted and could look in the eye, to a computer that was programmed, with results that were mostly printed out. This was a time when the potential of computers was just coming onto the scene. At the time, the space race revolved around an IBM 7090, which was one of the first transistor-based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges. And while we had the goal, and in the beginning the technique and the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. While it may seem commonplace to us today to put our trust in machines, that wasn't the case back in 1969. The seven individuals that made up the Mercury space crew were putting their lives in the hands of those first computers. But on Sunday, July 20th, 1969, these things all came together — the goal, the technology, and the team — and a human being walked on the moon. If this was possible fifty years ago, just think about what can be accomplished today, where technology is part of our everyday lives. With technology advancing at an ever-increasing rate, it's hard to comprehend the potential sitting right at our fingertips every single day. Everything you know about computing is continuing to change. Let's look back a bit at computing: in 1969, the IBM 7090 could process one hundred thousand floating point operations per second. Today's Xbox One, sitting in most of your living rooms, can process six trillion flops. That's sixty million times more powerful than the original 7090 that helped put a human being on the moon.
And at the same time that computing has drastically changed, so have the boundaries of where that computing sits and where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. Then, with cloud, it extended all the way out to data centers that you didn't even own or have control of. But computing now reaches far beyond any data center. This is also referred to as the edge; you hear a lot about that. Apollo's version of the edge was the guidance system, a two-megahertz computer that weighed seventy pounds, embedded in the capsule. Today, the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's ten thousand times more powerful than that 7090 back in 1969. But even more impactful than computing advances, combined with the pervasive availability of computing, are the changes in who and what controls it, similar to the social changes that have happened along the way. Having shifted from mathematicians to computers, we're now facing the same type of change with regard to operational control of our computing power. In its first forms, operational control was your team, within your control; in some cases, a single person managed everything. But as complexity grew, our teams expanded, and just as with the computing boundaries, system integrators and public cloud providers have become an extension of our team. At the end of the day, though, it's still people making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend, and they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company, or even one single country, can solve alone. This is why open source is so important. This is why open source is so needed today in software, and this is why open source is so needed today, even beyond software, to solve other types of complex problems. And this is why open source has become the dominant development model driving the technology direction today: to bring together the best innovation from every corner of the planet and fundamentally change how we solve problems. This approach, and this access to innovation, is what has enabled open source to tackle big challenges, like building a truly open hybrid cloud. But even today, it's really difficult to bridge the gap between the innovation that's available at all of our fingertips through open source development and the production-level capabilities that are needed to actually deploy it in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very beginning, and to bringing it to solve enterprise-class problems, for the last seventeen-plus years.
But when we built that model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway. To harness the innovation, we had to fully embrace the model. We made a decision very early on: give everything back. And we live by that every single day. We didn't do the crazy things you hear so many do out there — "all this is open core," or "everything below the line is open and everything above the line is closed." We didn't do that. We gave everything back: everything we learned in the process of becoming an enterprise-class technology company, we gave back to the community to make better and better software. This is how it works, we've all seen the results of that, and it could only have been possible with the open source development model. We've been building on the foundation of open source's most successful project, Linux, and on the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company we are today, and this is Red Hat's journey. But we also had to set goals, and many of them seemed insurmountable at the time — the first of which was making Linux the enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, we started at 2.1, but we knew we couldn't release a 1.0 product, and we didn't. We didn't want to allow any reason why any customer should look past RHEL as an option to solve their problems. Back then, we had to fight every single flavor of Unix in every single account, but we were lucky to have a few initial partners, big ISV partners, that supported RHEL out of the gate. And while we had the determination, we knew we also had gaps in our ability to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build, because we were having a customer issue on an older release. I watched in horror as he rifled through his desk, through a mess of CDs, magically came up with one and said, "I found it, here it is," and told me not to worry — he thought this was the right build. At that point I knew that, despite the promise of Linux, we had a lot of work ahead of us, not only to convince the world that Linux was secure, stable, and enterprise-ready, but also to make that a reality. But we did, and today this is our reality — all of our reality. From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that many mission-critical customers run and bet their business on. And even bigger: today, Linux is the foundation upon which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard that innovation builds around, and that's the innovation driving the future as well. We started our story with RHEL 2.1, and here we are today, seventeen years later, announcing RHEL 8, as we did last night. It's specifically designed for applications to run across the open hybrid cloud.
RHEL has become the best operating system from on-premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat global director of developer experience, Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski, and Brent Midwood. >> All right, we have some amazing things to show you. In just a few short moments we actually have a lot of things to show you, and Tim and Brent will be with us momentarily; they're working out a few things in the back, because a lot of this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also, keep in mind that this is the basis, our core platform, for everything we do here at Red Hat, so it is an honor for me to be able to show it to you live on stage today. I recognize that many of you in the audience right now are hands-on systems administrators, systems architects, and engineers, and we know you're under ever-growing pressure to deliver needed infrastructure resources ever faster; that is a key element of what you're thinking about every day. Well, this has been a core theme in our design decisions behind Red Hat Enterprise Linux 8, an intelligent operating system that is making it fundamentally easier for you to manage machines at scale. So what you're about to see next feels like a new superpower, and that Red Hat is your force multiplier. So first, let me introduce you to Lars. He's totally my Linux guru. >> I wouldn't call myself a guru, but I guess you could say that I want to bring Linux enlightenment to more people. >> Okay, well, let's dive in. Why don't we look at RHEL 8? >> Sure, let me go ahead and log in. >> Wait a second. There's Windows. >> Yeah, we built the web console into RHEL. That means that for the first time you can log in from any device, including your phone or this standard Windows laptop. So I just go ahead and enter my standard Linux credentials here. >> Okay, so now you're putting in your Linux password over the web? >> Yeah, that might sound a bit scary at first, but of course we're using the latest security tech, TLS and CSP, and because it's the standard Linux login on the other side, you can use everything you're used to, like SSH keys, OTP tokens, and things like that. >> Okay, so now I see the console right here. I love the dashboard overview of the system, but what else can you tell us about this console? >> Right, like right here you see the load of the system and some of its properties, but you can also dive into logs, everything you're used to from the command line, or look at services: these are all the services I have running, and I can start and stop them and enable them. >> Okay, I love that feature right there. So what about if I have to add a whole new application to this environment? >> Good that you bring that up. We built a new feature into RHEL called Application Streams, which is the way for you to install different supported versions of your application stack. I'll show you with yum on the command line. But since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console, in the browser; I can even make this a bit bigger. Let's go, for example, and see the application streams that we have for Postgres. I just do a module list, and I see we have 10 and 9.6, both supported; 10 is the default. And if I enable 9.6, the next time I install Postgres it will pull the packages from the 9.6 stream.
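For readers following along at home, here is a hedged sketch of the Application Streams flow Lars just ran, wrapped in a small Python script. The yum module syntax is the RHEL 8 command set from the demo, the stream versions match what was shown, and the script assumes it is run as root on a RHEL 8 host.

```python
# A rough sketch of the Application Streams demo: list the available Postgres
# streams, enable the 9.6 stream instead of the default, then install the server.
# Run on a RHEL 8 host with root privileges.
import subprocess


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    run(["yum", "module", "list", "postgresql"])               # show available streams
    run(["yum", "module", "enable", "-y", "postgresql:9.6"])   # switch the stream
    run(["yum", "install", "-y", "postgresql-server"])         # installs from the 9.6 stream
```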
>> Okay, so this is very cool. I see two versions of Postgres right here, with 10 as the default. That is fantastic, and Application Streams make that happen. But I'm really kind of curious: I love using Node.js and Java, so what about multiple versions of those? >> Yeah, that's exactly the idea. We want to keep up with the fast-moving ecosystems of programming languages out there. >> Okay, but I have another key question, and I know some people are thinking it right now: what about Python? >> Yeah. In fact, on a minimal install like this, if you just type python you get "command not found." You have to type it correctly: you can install whichever one you want, two or three, whichever your application needs. >> Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all of you right here — you guys keep this amongst yourselves, don't let Paul know — I'm actually not a Linux systems administrator. I'm an application developer, an application architect, and I recently had to go figure out how to extend a file system. This is for real. I'm going to the Red Hat knowledge base and looking up things like pvcreate, vgextend, resize2fs, and I have to admit, that's hard. >> Right. I've opened the storage page for you right here, where you see an overview of your storage. The console is made for people like you as well, not only for people who have been doing Linux for years — and even if you're running some of those commands only some of the time, you don't remember them. So, for example, I have a file system here that's a little bit too small. Let me just grow it; it's like, you know, dragging this slider. It calls all the commands in the background for you. >> Oh, that is incredible. Is it that simple, just drag and drop? That is fantastic. Well, I actually have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal, like using one of those funky ergonomic keyboards — you know the ones I'm talking about, right? >> You know, a lot of people, including me and people in the audience, like that dark art, right? And this is not taking any of that away. It's an additional tool to bring Linux to more people. >> Okay, well, that is absolutely fantastic. Thank you so much for that, Lars. I really love how installing everything is so much easier, including PostgreSQL and, of course, the Python that we saw right there.
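As a companion to that storage demo, here is a sketch of roughly what "grow that file system" boils down to underneath, along the lines of the commands Burr was looking up in the knowledge base. The volume group, logical volume name, and size are placeholders; the web console drives equivalent steps for you.

```python
# A sketch of growing a logical volume and its ext4 filesystem from a script.
# The LV path and size are placeholders; run as root on a host that actually
# has that LVM layout and free space in the volume group.
import subprocess


def grow_ext4_lv(lv_path, extra):
    # Extend the logical volume by `extra` (e.g. "+5G"), then resize the ext4
    # filesystem to match. `lvextend -r` would combine both steps into one.
    subprocess.run(["lvextend", "-L", extra, lv_path], check=True)
    subprocess.run(["resize2fs", lv_path], check=True)


if __name__ == "__main__":
    grow_ext4_lv("/dev/rhel/data", "+5G")
```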
So now I want to change gears for a second, because I have another situation that I'm always dealing with: every time I want to build a new Linux system, I don't want to have to install those components again and again; it feels like I'm doing it over and over. So, Josh, how would I create a golden image — one VM image I can use, with everything pre-baked in? >> Yeah, absolutely, we get that question all the time. So RHEL includes Image Builder technology. Image Builder is actually all of our hybrid cloud operating system image tools — the ones we use to build our own images — rolled up into a nice, easy-to-integrate system. So if I come here in the web console and go to our Image Builder tab, it brings us to blueprints. Blueprints are what we use to control what goes into our golden image. And I heard you and Lars talking about Postgres and Python, so I went and started typing here. It brings us to this page, and if you go to the selected components, you can see I've created a blueprint that has all the Python and Postgres packages in it. The interesting thing about this is that it builds on our existing kickstart technology, but you can use it to deploy to whatever cloud you want. And it's saved, so you don't have to know all the various incantations from Amazon to Azure to Google, whatever — it's all baked in. When you do this, you can actually see the dependencies that get brought in as well. >> Okay. Should we create one live? >> Yes, please. >> All right, cool. So if we go back to the blueprints page and click Create Blueprint, let's make a developer blueprint here. We click Create, and you can see on the left-hand side I've got all of my content served up by Red Hat Satellite. We have a lot of great stuff, but we can go ahead and search. So we'll look for Postgres — it's a developer image, so the client for some local testing. We'll come in here and add the Python bits, probably the development package. We need a compiler if we're going to actually build anything, so we'll look for GCC here. And hey, what's your favorite editor? >> Emacs, of course. >> Emacs, all right. Hey, Lars, how about you? >> I'm more of a vi person. >> All right. Well, if you want to prevent a holy war in your shop, you can actually use Satellite to filter that out, but we're going to go ahead and add them both so we don't have a fight on stage. So we just point and click, let the graphical tool do the work, and when we're all done, we just commit our changes and our image is ready to build. >> Okay, so this VM image we just created from that blueprint — I can go out there and easily deploy it across multiple cloud providers, as well as on stage right where we are right now? >> Yeah, absolutely. We can deploy to Amazon, Azure, Google, any infrastructure you're looking for, so you can really build your hybrid cloud operating system images. >> Okay, all right. >> So we just go and click Create Image. We can select our different types here; I'm going to go ahead and create a local VM, because it's an image we can pass around or whatever, and I just need a few moments for it to build.
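For anyone who prefers scripting the same flow, here is a hedged sketch of driving Image Builder from the command line rather than the web console. The blueprint format and composer-cli commands follow the osbuild/lorax-composer tooling that backs Image Builder; the blueprint name and package list simply mirror the demo, so treat it as illustrative rather than an exact replay of what was shown on stage.

```python
# A hedged sketch: write an Image Builder blueprint (TOML) to disk, register it,
# and kick off a local qcow2 (VM image) build with composer-cli. Assumes the
# Image Builder service and composer-cli are installed on a RHEL 8 host.
import pathlib
import subprocess

BLUEPRINT = """\
name = "developer"
description = "Developer golden image: Postgres client, Python, compiler, editors"
version = "0.0.1"

[[packages]]
name = "postgresql"

[[packages]]
name = "python36"

[[packages]]
name = "gcc"

[[packages]]
name = "emacs"

[[packages]]
name = "vim-enhanced"
"""

if __name__ == "__main__":
    path = pathlib.Path("developer.toml")
    path.write_text(BLUEPRINT)
    # Register the blueprint, then start a local VM image build from it.
    subprocess.run(["composer-cli", "blueprints", "push", str(path)], check=True)
    subprocess.run(["composer-cli", "compose", "start", "developer", "qcow2"], check=True)
```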
>> Okay, so while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking: I love what I see with Red Hat Enterprise Linux 8, but what does it take to upgrade from seven to eight? So, Lars, can you show us and walk us through an upgrade? >> Sure. This is my little blog that I set up; it's powered by WordPress, but it's still running on 7.6. So let's upgrade that. I'll jump over to my hosts view in Satellite, and you see all my RHEL machines here, including the one I showed you the web console on before. There's the one with my blog, and there are a couple of others; let me select those as well, this one and that one. I just go up here, schedule a remote job, choose the upgrade, and hit Submit. I set it up so that it takes a snapshot before, so if anything goes wrong it can roll back. >> Okay, okay, so now it's progressing here. >> It's progressing. Looks like it's running. >> Doing a live upgrade on stage. Uh, it seems like one is failing. What's going on here? >> Okay, let's check the pre-upgrade check. Oh yeah, that's the one I was playing around with Btrfs on backstage. It detected that and doesn't run the upgrade, because we don't support upgrading that. >> Okay, so what I'm hearing is that the good news is we were protected from a possible failed upgrade there. So it sounds like these upgrades are perfectly safe: I can basically schedule this during a maintenance window and still get some sleep. >> Totally, that's the idea. >> Okay, fantastic. All right, so it looks like upgrades are easy and perfectly safe, and I really love what you showed us there: it's a point-and-click operation right from Satellite. Okay, so while we were checking out upgrades — I want to know, Josh, how are those VMs coming along? >> They went really well. You were away for so long, I got a little bored and took some liberties. >> What do you mean? >> Well, the image built, and I decided to go ahead and deploy it here to this Intel machine on stage, so I have that up and running in the web console. I built another one on the Arm box, which is actually pretty fast, and that's up and running on this Arm machine. That went so well that I decided to spin some up in Amazon, so I've got a few instances here running in Amazon with the web console accessible there as well, and even more of our pre-built images are up and running in Azure, with the web console there too. So the really cool thing about this, Burr, is that all of these images were built with Image Builder in a single location, controlling all the content that you want in your golden images, deployed across the hybrid cloud. >> Wow, that is fantastic. And you might think that's it, but we actually have more to show you. So thank you so much for that, Lars and Josh; that is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before, but we have more to talk to you about. There's one thing that many of the operations professionals in this room know: provisioning VMs is easy, but it's really day two, day three, down the road, that those VMs require day-to-day maintenance. As a matter of fact, several of you in this audience have to manage hundreds, if not thousands, of virtual machines; I recently spoke to a gentleman who has to manage thirteen hundred servers. So how do you manage those machines at that scale? And great, they have now joined us, so it looks like they worked things out in the back. So now I'm curious, Tim: how will we manage hundreds, if not thousands, of computers? >> Well, Burr, one human managing hundreds or even thousands of VMs? No problem, because we have Ansible automation. And by leveraging Ansible's integration into Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here.
I'm going to show you here a Satellite inventory, and as Red Hat publishes patches, we can, with that Ansible integration, easily apply those patches across our entire fleet of machines. >> Okay, that is fantastic. So all the machines can get updated in one fell swoop. >> They sure can. And there's one thing that I want to bring to your attention today, because it's brand new, and that's cloud.redhat.com. At cloud.redhat.com you can view and manage your entire inventory, no matter where it sits: Red Hat Enterprise Linux on premises, like on stage, in a private cloud, or in a public cloud. It's true hybrid cloud management. >> Okay, but one thing I know is in the minds of the audience right now: if you have to manage a large number of servers, this comes up again and again. What happens when you have those critical vulnerabilities? That next zero-day CVE could be tomorrow. >> Exactly. I've actually been waiting patiently for a while for you >> to get to the really good stuff. So >> there's one more thing that I wanted to let folks know about: Red Hat Enterprise Linux 8 and some features that we have there. >> Oh, yeah? What is that? >> So one of the key design principles of RHEL has been working with our customers over the last twenty years to integrate all the knowledge that we've gained and turn that into insights we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And what we have here are a few things we can take a look at to show folks what that is. >> Okay, so we basically have this new feature we're going to show people right now. And one thing I want to make sure of: is it absolutely included with the Red Hat Enterprise Linux 8 subscription? >> Yes. The announcement that we're making this week is that this is a brand-new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. >> I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. >> Absolutely. So let's take a quick look and try this out. What we have here is a list of about six hundred rules. They're configuration, security, and performance rules, and this list is growing every single day, so customers can opt in to the rules that are most applicable to their enterprises. What we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt in to sending us. Customers have opted in and are now sending us more data every single night than they have in total over the last twenty years via any other mechanism. >> Now I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature. >> Yeah, I'm betting those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we'll get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. >> Okay, let's do that real quick. >> So I've got to hop back over to Satellite here. This is the Satellite we saw before, and I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL, and via single sign-on I can jump right from Satellite over to the web console. So it's really, really easy.
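The fleet-wide patch run Tim describes boils down to a very small Ansible playbook. This is a generic sketch rather than the exact job Satellite generates, and the inventory group name is made up:

    cat > apply-security-errata.yml <<'EOF'
    ---
    - name: Apply available security errata across the RHEL fleet
      hosts: rhel_servers          # illustrative inventory group
      become: true
      tasks:
        - name: Update packages that have security errata
          yum:
            name: '*'
            security: true
            state: latest
          register: patch_result

        - name: Report whether this host changed
          debug:
            var: patch_result.changed
    EOF

    ansible-playbook -i inventory apply-security-errata.yml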
And I'll grab a terminal here, and registering with Insights is really, really easy. It's one command, and what's happening right now is the box is going to gather some data, send it up to the cloud, and within just a minute or two we're going to have some results that we can look at back on the web interface. >> I love it. So it's just a single command and you're ready to register this box right now. That is super easy. Well, that's fantastic, >> Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical, and smartest operating system on the planet, period. And while I think it's cute how you can opt in on a single machine, I'm going to show you one more thing. This is Ansible Tower. You can use Ansible Tower to manage and govern your Ansible playbook usage across your entire organization, and with this, what I can do is, on every single VM that was spun up here today, opt in and register Insights with a single click of a button. >> Okay, I want to see that right now. I know everyone's waiting for it as well. But hey, your VM is ready, Josh. Lars? >> Yeah, my blog is running on eight now. And Insights is a really cool feature >> of RHEL, and I've got it in all my images already. >> All right, I'm doing it now. And as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. >> Okay, so all those on-stage VMs, as well as the hybrid cloud VMs, should be popping in here as it completes. Fantastic. >> That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker going off script in the first live demo. Let's go back and take a look at some of those critical issues affecting a few of our systems here. You can see this is a particular dnsmasq issue. It's going to affect a couple of machines; we saw that in the overview, and I can go and get some more details about what this particular issue is. If you take a look at the right side of the screen, there's a critical likelihood and an impact associated with this particular issue. What that really translates to is that there's a high level of risk to our organization from this issue, but also a low risk of change, and what that means is that it's really, really safe for us to go ahead and use Ansible to remediate this. So I can grab the machines, we'll select those two, and we'll remediate with Ansible. I can create a new playbook. It's our maintenance window, but we'll name it something along the lines of "stuff Tim broke," and that'll be our cause; we can name it whatever we want. So we'll create that playbook and take a look at it, and it's actually going to give us some details about the machines, what type of reboots, if any, are going to be needed, and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. This is happening from the cloud, and we're affecting machines no matter where they are. They could be on premises, they could be in a hybrid cloud, a public cloud, or a private cloud, and these things are going to be remediated very, very easily with Ansible. So it's really, really awesome.
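The "one command" Brent runs to opt a box in, and the playbook form of it that Tower pushes to every VM, look roughly like this. The client is called insights-client on RHEL 8 (the RHEL 7 package has a different name), and the registration marker path is a version-specific detail:

    # On a single RHEL 8 host: register with Insights
    insights-client --register

    # The same step expressed as a playbook for the whole inventory
    cat > register-insights.yml <<'EOF'
    ---
    - name: Opt every VM in to Red Hat Insights
      hosts: all
      become: true
      tasks:
        - name: Make sure the Insights client is installed
          yum:
            name: insights-client
            state: present

        - name: Register the host with cloud.redhat.com
          command: insights-client --register
          args:
            creates: /etc/insights-client/.registered   # marker file; path may vary by version
    EOF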
Everybody here with a Red Hat Enterprise Linux subscription has access to this now, so I >> kind of want >> everybody to go try this. We really need to get this thing going, so try it out right now. But >> don't run out of the room just yet. You can stay here >> for now. Okay, Mr. Excitability. I think after this keynote, come back to the Red Hat booth, where there's an optimization section. You can come talk to our Insights engineers, and even though it's really easy to get going on your own, they can help you out and answer any questions you might have. So >> this is really the start of a new era, with an intelligent operating system, and you just saw right now what Insights and Ansible can do for you. Fantastic. So we're enabling systems administrators to manage more Red Hat Enterprise Linux at a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we went a little bit sideways here for a few moments, so we need to get off the stage. But there's one thing I want you to think about. Do come check out the booth, like Tim just said, and also get hands-on with Red Hat Enterprise Linux 8 in our labs as well. But really, I want you to think about this: one human and a multitude of servers. And if you remember that one thing I asked you up front: do you feel like you got a new superpower, and Red Hat is your force multiplier? All right, well, thank you so much, Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on stage. >> That was brilliant. As always, >> amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the summit, and what a great way to showcase it. Thanks so much to you, Burr. Thanks, Brent, Tim, Lars and Josh. Thanks again. So you've just seen this team demonstrate how impactful RHEL can be in your data center, and hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about those all the time, and as I just told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing. It's also the foundation for the fastest supercomputers in the world. Our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory HPC solution architect Robin Goldstone. >> Thank you so much, Robin. So welcome. Welcome to the summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high performance computing really works at this level? >> Sure. So Lawrence Livermore National >> Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology, and high performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a UNIVAC 1, was ordered by Edward Teller before our lab even opened back in nineteen fifty-two. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost our job is to ensure the safety, security and reliability of the nation's nuclear weapons stockpile. Since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science-based methods.
We rely on H P C to simulate the behavior of complex weapons systems to ensure that they can function as expected, well beyond their intended life spans. That's actually great. >> So are you really are still running on that on that Univac? >> No, Actually, we we've moved on since then. So Sierra is Lawrence Livermore. Its latest and greatest supercomputer is currently the Seconds spastic supercomputer in the world and for the geeks in the audience, I think there's a few of them out there. We put up some of the specs of Syrah on the screen behind me, a couple of things worth highlighting our Sierra's peak performance and its power utilisation. So one hundred twenty five Pata flops of performance is equivalent to about twenty thousand of those Xbox one excess that you mentioned earlier and eleven point six megawatts of power required Operate Sierra is enough to power around eleven thousand homes. Syria is a very large and complex system, but underneath it all, it starts out as a collection of servers running Lin IX and more specifically, rail. >> So did Lawrence. Did Lawrence Livermore National Lab National Lab used Yisrael before >> Sierra? Oh, yeah, most definitely. So we've been running rail for a very long time on what I'll call our mid range HPC systems. So these clusters, built from commodity components, are sort of the bread and butter of our computer center. And running rail on these systems provides us with a continuity of operations and a common user environment across multiple generations of hardware. Also between Lawrence Livermore in our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world class supercomputer like Sierra. Historically, these systems have been built for a sort of exotic proprietary hardware running entirely closed source operating systems. Anytime something broke, which was often the Vander would be on the hook to fix it. And you know, >> that sounds >> like a good model, except that what we found overtime is most the issues that we have on these systems were either due to the extreme scale or the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes. So their ability to reproduce our problem was was pretty limited. In some cases, they've even sent an engineer on site to try to reproduce our problems. But even then, sometimes we wouldn't get a fix for months or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, for many of us, the challenges is one of driving reasons for open source, you know, for even open source existing. How has how did Sierra change? Things are on open source for >> you. Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we want to run an open source operating system and a strong preference for rail. At the time, IBM was working with red hat toe add support Terrell for their new little Indian power architecture. So it was really just natural for them to bid a red. A rail bay system for Sierra running Raylan Cyril allows us to leverage the model that's worked so well for us for all this time on our commodity clusters any packages that we build for X eighty six, we can now build those packages for power as well as our market texture using our internal build infrastructure. 
And while we have a formal support relationship with IBM, we can also tap our in house colonel developers to help debug complex problems are sys. Admin is Khun now work on any of our systems, including Sierra, without having toe pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems. And if the security vulnerability comes out, we don't have to chase around getting fixes from Multan slo es fenders. >> You know, you've been able, you've been able to extend your foundation from all the way from X eighty six all all the way to the extract excess Excuse scale supercomputing. We talk about giving customers all we talked about it all the time. A standard operational foundation to build upon. This isn't This isn't exactly what we've envisioned. So So what's next for you >> guys? Right. So what's next? So Sierra's just now going into production. But even so, we're already working on the contract for our next supercomputer called El Capitan. That's scheduled to be delivered the Lawrence Livermore in the twenty twenty two twenty timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on a solid foundation. That relish provided us for well over a decade. >> Well, thank you so much for your support of realm over the years, Robin. And And thank you so much for coming and tell us about it today. And we can't wait to hear more about El Capitan. Thank you. Thank you very much. So now you know why we're so proud of realm. And while you saw confetti cannons and T shirt cannons last night, um, so you know, as as burned the team talked about the demo rail is the force multiplier for servers. We've made Lennox one of the most powerful platforms in the history of platforms. But just as Lennox has become a viable platform with access for everyone, and rail has become viable, more viable every day in the enterprise open source projects began to flourish around the operating system. And we needed to bring those projects to our enterprise customers in the form of products with the same trust models as we did with Ralph seeing the incredible progress of software development occurring around Lennox. Let's let's lead us to the next goal that we said tow, tow ourselves. That goal was to make hybrid cloud the default enterprise for the architecture. How many? How many of you out here in the audience or are Cesar are? HC sees how many out there a lot. A lot. You are the people that our building the next generation of computing the hybrid cloud, you know, again with like just like our goals around Lennox. This goals might seem a little daunting in the beginning, but as a community we've proved it time and time again. We are unstoppable. Let's talk a bit about what got us to the point we're at right right now and in the work that, as always, we still have in front of us. We've been on a decade long mission on this. Believe it or not, this mission was to build the capabilities needed around the Lenox operating system to really build and make the hybrid cloud. When we saw well, first taking hold in the enterprise, we knew that was just taking the first step. Because for a platform to really succeed, you need applications running on it. And to get those applications on your platform, you have to enable developers with the tools and run times for them to build, to build upon. 
Over the years, we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on premise, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon, and ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and re-architected. And given the fact that environments spanned multiple locations, solid management became even more important. Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. So we invested over the years in management right alongside the platform, from Satellite in the early days, to CloudForms, to Insights, and now Ansible. We focused on having management that supports the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data, and to have a common operating platform for your applications, you need storage solutions that are just as flexible as that platform, able to run on premise as well as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit and NooBaa, complementing our platform with Red Hat Storage. Even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when that wasn't otherwise available. By twenty fourteen, our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still depended on a significant amount of operating system configuration. And then containers happened. Containers, despite having been in existence for a very long time, exploded on the scene as a technology in twenty fourteen. Kubernetes followed shortly after in twenty fifteen, allowing containers to span multiple locations, and in one fell swoop containers became the killer technology to really enable the hybrid cloud. And here we are. Hybrid is really the only practical reality and way forward for customers, and at Red Hat we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments.
We've been constantly learning about what has caused problems and what has worked well in many cases. And while we've and while we've amassed a pretty big amount of expertise to solve most any challenge in in any area that stack, it takes more than just our own learning's to build the next generation platform. Today we're also introducing open shit for which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers and not simply just with our customers in mind. This is something that could only be possible in an open source development model and just like relish the force multiplier for servers. Open shift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them its scale. And we've also announced open shift, and we've also announced azure open shift. Last night. Satya on this stage talked about that in depth. This is all about extending our goals of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing operators, which is the central, which is the central feature here. We talked about this work last year with the operator framework, and today we're not going to just show you today. We're not going to just show you open shift for we're going to show you operators running at scale operators that will do updates and patches for you, letting you focus more of your time and running your infrastructure and running running your business. We want to make all this easier and intuitive. So let's have a quick look at how we're doing. Just that >> painting. I know all of you have heard we're talking to pretend to new >> customers about the travel out. So new plan. Just open it up as a service been launched by this summer. Look, I know this is a big quest for not very big team. I'm open to any and all ideas. >> Please welcome back to the stage. Red Hat Global director of developer Experience burst Sutter with Jessica Forrester and Daniel McPherson. All right, we're ready to do some more now. Now. Earlier we showed you read Enterprise Clinic St running on lots of different hardware like this hardware you see right now And we're also running across multiple cloud providers. But now we're going to move to another world of Lennox Containers. This is where you see open shift four on how you can manage large clusters of applications from eggs limits containers across the hybrid cloud. We're going to see this is where suffer operators fundamentally empower human operators and especially make ups and Deb work efficiently, more efficiently and effectively there together than ever before. Rights. We have to focus on the stage right now. They're represent ops in death, and we're gonna go see how they reeled in application together. Okay, so let me introduce you to Dan. Dan is totally representing all our ops folks in the audience here today, and he's telling my ops, comfort person Let's go to call him Mr Ops. So Dan, >> thanks for with open before, we had a much easier time setting up in maintaining our clusters. In large part, that's because open shit for has extended management of the clusters down to the infrastructure, the diversity kinds of parent. 
When you take a look at the OpenShift console, you can now see the machines that make up the cluster, where a machine represents the infrastructure underneath a Kubernetes node. OpenShift 4 now handles provisioning and deprovisioning of those machines. From there, you can dig into an OpenShift node, see how it's configured, and monitor how it's behaving. >> I'm curious, though: does this work on bare metal infrastructure as well as virtualized infrastructure? >> Yeah, that's right, Burr. Physical nodes, virtual machines, OpenShift 4 can now manage it all. Something else we found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, operators are responsible for updating the entire platform, including the nodes, the control plane, and even the operating system, Red Hat Enterprise Linux CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by technology called operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. >> All right, I love the fact that it's all in one console now. You can see the full stack, all the way down to the bare metal, right there in that one console. Fantastic. So I want to switch gears for a moment, though, and talk to the devs. Jessica here represents all our developers in the room. In fact, she manages a large team of developers here at Red Hat, but more importantly, she represents our vice presidents of development who have a large team they have to worry about on a regular basis. So Jessica, what can you show us? >> Well, Burr, my team has hundreds of developers, and we're constantly under pressure to deliver value to our business. And frankly, we can't really wait for Dan and his ops team to provision the infrastructure and the services that we need to do our job. So we've chosen OpenShift as our platform to run our applications on. But until recently, we really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan's actually going to let us install into the cluster. Now, with OperatorHub.io, we're really seeing that ecosystem be unlocked, and the technologies are there: things that my team needs, like databases and message queues, tracing and monitoring. And these operators are actually responsible for complex applications, like Prometheus here, and they're written in a variety of languages, including Ansible. >> That is awesome. So I do see a number of options there already, and Prometheus is a great example. But how do you know that one of these operators really is mature enough and robust enough for Dan and the ops side of the house? >> Well, Burr, here we have the operator maturity model, and this is going to tell me and my team whether a particular operator is going to do a basic install, whether it's going to upgrade that application over time through different versions, or go all the way out to full auto-pilot, where it's automatically scaling and tuning the application based on the current environment. And it's very cool. So coming over to the OpenShift console, we can actually see Dan has made the SQL Server operator available to me and my team. That's the database that we're using, SQL Server. >> That's a great example.
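The press-of-a-button platform update Dan just described can also be driven from the command line; a short sketch, assuming the standard oc client against an OpenShift 4 cluster, with the target version as a placeholder:

    # Show the current cluster version and whether an update is available
    oc get clusterversion

    # List the available update targets, then start the update
    oc adm upgrade
    oc adm upgrade --to=<target-version>

    # Watch the cluster operators and nodes converge on the new desired state
    oc get clusteroperators
    oc get nodes -w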
So SQL Server is running here in the cluster. But this is a great example for a developer: what if I want to create a new SQL Server instance? >> Sure. So it's as easy as provisioning any other service from the developer catalog. We come in, and I can search for SQL Server, and what this is actually creating is a native resource called SqlServer, and you can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its life cycle. And from this installed operators view, I can see the operators running in my project and which resources each one is managing. >> Okay, but I'm kind of missing something here. I see this custom resource here, the SqlServer, but where are the Kubernetes resources, like pods? >> Yeah, I think it's cool that we get this native resource now called SqlServer, but if I need to, I can still come in and see the native Kubernetes resources, like the StatefulSet and Service here. >> Okay, that is fantastic. Now, we did say earlier on, though, that like many of our customers in the audience right now, you have a large team of engineers, a large team of developers you've got to handle. You've got to have more than one SQL Server, right? >> We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat and our Jenkins pipelines and our Node.js app that is actually going to talk to that SQL Server database. >> Okay, so at this point we can kind of provision some of these? >> Yes. Since all of this is self-service for me and my teams, I'm actually going to go and create one of all of those things I just said, on all of our projects, right now, if you just give me a minute. >> Okay. Well, right, so basically you're going to knock out Node.js, Jenkins, SQL Server. That's hundreds of bits of application-level infrastructure, right now, live. So, Dan, are you not terrified? >> Well, I guess I should have done a little bit better job of managing Jessica's quota, and historically Jess and I might have had some conflict here, because creating all these new applications would mean my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, so I get an idea of what's happening across the entire cluster. And I can see now we have four hundred ninety-four pods already running, and there's a few more still starting up. And if I scroll through the list, we can see the different workloads Jessica just mentioned: the Tomcats, and Node.js's, and Jenkinses, and SQL Servers down here too. >> You know, I see it continues creating, and you have close to five hundred pods running there. >> So, yeah, I can filter the list down by SQL Server, so we can just see those. >> Okay. But aren't you running up against cluster capacity at some point? >> Actually, yeah, we definitely have a limited capacity in this cluster. Luckily, though, we already set up autoscalers, and because the additional workload was launching, we can see those autoscalers have kicked in and some new machines are being created that don't yet have nodes, because they're still starting up. And there's another good view of this as well, so you can see machine sets.
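The autoscaling Dan just pointed out is configured declaratively through the machine API; a minimal sketch follows, with the cluster-specific names and limits as placeholders, before the walkthrough of the machine sets themselves continues:

    cat <<'EOF' | oc apply -f -
    apiVersion: autoscaling.openshift.io/v1
    kind: ClusterAutoscaler
    metadata:
      name: default
    spec:
      resourceLimits:
        maxNodesTotal: 40                  # illustrative cap
    ---
    apiVersion: autoscaling.openshift.io/v1beta1
    kind: MachineAutoscaler
    metadata:
      name: worker-us-east-1a              # one per availability zone's machine set
      namespace: openshift-machine-api
    spec:
      minReplicas: 10
      maxReplicas: 12
      scaleTargetRef:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        name: mycluster-worker-us-east-1a  # placeholder machine set name
    EOF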
We have one machine set per availability zone, and you could see the each one is now scaling from ten to twelve machines. And the way they all those killers working is for each availability zone, they will. If capacities needed, they will add additional machines to that availability zone and then later effect fast. He's no longer needed. It will automatically take those machines away. >> That is incredible. So right now we're auto scaling across multiple available zones based on load. Okay, so looks like capacity planning and automation is fully, you know, handle this point. But I >> do have >> another question for year logged in. Is the cluster admin right now into the console? Can you show us your view of >> operator suffer operators? Actually, there's a couple of unique views here for operators, for Cluster admits. The first of those is operator Hub. This is where a cluster admin gets the ability to curate the experience of what operators are available to users of the cluster. And so obviously we already have the secret server operator installed, which which we've been using. The other unique view is operator management. This gives a cluster I've been the ability to maintain the operators they've already installed. And so if we dig in and see the secret server operator, well, see, we haven't set up for manual approval. And what that means is if a new update comes in for a single server, then a cluster and we would have the ability to approve or disapprove with that update before installs into the cluster, we'LL actually and there isn't upgrade that's available. Uh, I should probably wait to install this, though we're in the middle of scaling out this cluster. And I really don't want to disturb Jessica's application. Workflow. >> Yeah, so, actually, Dan, it's fine. My app is already up. It's running. Let me show it to you over here. So this is our products application that's talking to that sequel server instance. And for debugging purposes, we can see which version of sequel server we're currently talking to. Its two point two right now. And then which pod? Since this is a cluster, there's more than one secret server pod we could be connected to. Okay, I could see right there the bounder screeners they know to point to. That's the version we have right now. But, you know, >> this is kind of >> point of software operators at this point. So, you know, everyone in this room, you know, wants to see you hit that upgrade button. Let's do it. Live here on stage. Right, then. All >> right. All right. I could see where this is going. So whenever you updated operator, it's just like any other resource on communities. And so the first thing that happens is the operator pot itself gets updated so we actually see a new version of the operator is currently being created now, and what's that gets created, the overseer will be terminated. And that point, the new, softer operator will notice. It's now responsible for managing lots of existing Siegel servers already in the environment. And so it's then going Teo update each of those sickle servers to match to the new version of the single server operator and so we could see it's running. And so if we switch now to the all projects view and we filter that list down by sequel server, then we should be able to see us. So lots of these sickle servers are now being created and the old ones are being terminated. So is the rolling update across the cluster? Exactly a So the secret server operator Deploy single server and an H A configuration. 
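The manual approval gate Dan just described lives on the operator's Operator Lifecycle Manager subscription; roughly like this, with the package, channel, and catalog names as placeholders since the real ones are not shown on stage:

    cat <<'EOF' | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: sqlserver-operator             # placeholder package name
      namespace: openshift-operators
    spec:
      name: sqlserver-operator
      channel: stable                      # placeholder channel
      source: community-operators          # placeholder catalog source
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual          # switch to Automatic to apply updates unattended
    EOF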
And it's on ly updates a single instance of secret server at a time, which means single server always left in nature configuration, and Jessica doesn't really have to worry about downtime with their applications. >> Yeah, that's awesome dance. So glad the team doesn't have to worry about >> that anymore and just got I think enough of these might have run by Now, if you try your app again might be updated. >> Let's see Jessica's application up here. All right. On laptop three. >> Here we go. >> Fantastic. And yet look, we're We're into two before we're onto three. Now we're on to victory. Excellent on. >> You know, I actually works so well. I don't even see a reason for us to leave this on manual approval. So I'm going to switch this automatic approval. And then in the future, if a new single server comes in, then we don't have to do anything, and it'll be all automatically updated on the cluster. >> That is absolutely fantastic. And so I was glad you guys got a chance to see that rolling update across the cluster. That is so cool. The Secret Service database being automated and fully updated. That is fantastic. Alright, so I can see how a software operator doesn't able. You don't manage hundreds if not thousands of applications. I know a lot of folks or interest in the back in infrastructure. Could you give us an example of the infrastructure >> behind this console? Yeah, absolutely. So we all know that open shift is designed that run in lots of different environments. But our teams think that as your redhead over, Schiff provides one of the best experiences by deeply integrating the open chief Resource is into the azure console, and it's even integrated into the azure command line toll and the easy open ship man. And, as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show Everyone related to open shit, for this is all so new with a penchant for which is we now have multi cluster management. This gives you the ability to keep track of all your open shift environments, regardless of where they're running as well as you can create new clusters from here. And I'll dig into the azure cluster that we were just taking a look at. >> Okay, but is this user and face something have to install them one of my existing clusters? >> No, actually, this is the host of service that's provided by Red hat is part of cloud that redhead that calm and so all you have to do is log in with your red hair credentials to get access. >> That is incredible. So one console, one user experience to see across the entire hybrid cloud we saw earlier with Red update. Right and red embers. Thank Satan. Now we see it for multi cluster management. But home shift so you can fundamentally see. Now the suffer operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making Devon ops work more efficiently together than ever before. So we saw the rich ice vehicle system of those software operators. We can manage them across the Khyber Cloud with any, um, shift instance. And more importantly, I want to say Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here >> once again. Thanks >> so much to burn his team. Jessica and Dan. So you've just seen how open shift operators can help you manage hundreds, even thousands of applications. 
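To recap the operator pattern from that demo in a single artifact: the SqlServer "promise" Jessica created, and that the operator then rolled through an update across the cluster, is just a small custom resource. The SQL Server operator's real API group and fields are not shown on stage, so everything below is illustrative of the shape rather than the actual CRD:

    # A purely illustrative SqlServer custom resource
    cat <<'EOF' | oc apply -f -
    apiVersion: example.com/v1alpha1      # hypothetical group/version
    kind: SqlServer
    metadata:
      name: team-a-db
      namespace: team-a
    spec:
      replicas: 1                         # hypothetical fields
      storageSize: 10Gi
    EOF

    # The operator reconciles it into ordinary Kubernetes objects,
    # which remain visible alongside the custom resource
    oc get statefulset,service,pods -n team-a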
Install, upgrade, remove nodes, control everything about your application environment, virtual physical, all the way out to the cloud making, making things happen when the business demands it even at scale, because that's where it's going to get. Our next guest has lots of experience with demand at scale. and they're using open source container management to do it. Their work, their their their work building a successful cloud, First platform and there, the twenty nineteen Innovation Award winner. >> Please welcome twenty nineteen Innovation Award winner. Cole's senior vice president of technology, Rich Hodak. >> How you doing? Thanks. >> Thanks so much for coming out. We really appreciate it. So I guess you guys set some big goals, too. So can you baby tell us about the bold goal? Helped you personally help set for Cole's. And what inspired you to take that on? Yes. So it was twenty seventeen and life was pretty good. I had no gray hair and our business was, well, our tech was working well, and but we knew we'd have to do better into the future if we wanted to compete. Retails being disrupted. Our customers are asking for new experiences, So we set out on a goal to become an open hybrid cloud platform, and we chose Red had to partner with us on a lot of that. We set off on a three year journey. We're currently in Year two, and so far all KP eyes are on track, so it's been a great journey thus far. That's awesome. That's awesome. So So you Obviously, Obviously you think open source is the way to do cloud computing. So way absolutely agree with you on that point. So So what? What is it that's convinced you even more along? Yeah, So I think first and foremost wait, do we have a lot of traditional IAS fees? But we found that the open source partners actually are outpacing them with innovation. So I think that's where it starts for us. Um, secondly, we think there's maybe some financial upside to going more open source. We think we can maybe take some cost out unwind from these big fellas were in and thirdly, a CZ. We go to universities. We started hearing. Is we interviewed? Hey, what is Cole's doing with open source and way? Wanted to use that as a lever to help recruit talent. So I'm kind of excited, you know, we partner with Red Hat on open shift in in Rail and Gloucester and active M Q and answerable and lots of things. But we've also now launched our first open source projects. So it's really great to see this journey. We've been on. That's awesome, Rich. So you're in. You're in a high touch beta with with open shift for So what? What features and components or capabilities are you most excited about and looking forward to what? The launch and you know, and what? You know what? What are the something maybe some new goals that you might be able to accomplish with with the new features. And yeah, So I will tell you we're off to a great start with open shift. We've been on the platform for over a year now. We want an innovation award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform. It calls, and we're excited about open shift, for I think there's probably three things that were really looking forward to. One is we're looking forward to, ah, better upgrade process. And I think we saw, you know, some of that in the last demo. So upgrades have been kind of painful up until now. So we think that that that will help us. Um, number two, A lot of our open shift workloads today or the workloads. 
We run an open shifts are the stateless apse. Right? And we're really looking forward to moving more of our state full lapse into the platform. And then thirdly, I think that we've done a great job of automating a lot of the day. One stuff, you know, the provisioning of, of things. There's great opportunity o out there to do mohr automation for day two things. So to integrate mohr with our messaging systems in our database systems and so forth. So we, uh we're excited. Teo, get on board with the version for wear too. So, you know, I hope you, Khun, we can help you get to the next goals and we're going to continue to do that. Thank you. Thank you so much rich, you know, all the way from from rail toe open shift. It's really exciting for us, frankly, to see our products helping you solve World War were problems. What's you know what? Which is. Really? Why way do this and and getting into both of our goals. So thank you. Thank you very much. And thanks for your support. We really appreciate it. Thanks. It has all been amazing so far and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center with your own infrastructure. We've been helping our customers do that in these environments. For almost twenty years now, we've been running the most complex work loads in the world. But you know, while the public cloud has opened up tremendous possibilities, it also brings in another type of another layer of infrastructure complexity. So what's our next goal? Extend your extend your data center all the way to the edge while being as effective as you have been over the last twenty twenty years, when it's all at your own fingertips. First from a practical sense, Enterprises air going to have to have their own data centers in their own environment for a very long time. But there are advantages of being able to manage your own infrastructure that expand even beyond the public cloud all the way out to the edge. In fact, we talked about that very early on how technology advances in computer networking is storage are changing the physical boundaries of the data center every single day. The need, the need to process data at the source is becoming more and more critical. New use cases Air coming up every day. Self driving cars need to make the decisions on the fly. In the car factory processes are using a I need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a of A car's paint job as it comes off the line, where a massive amount of data is on ly needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late. The damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store or in eight or nine PM, usually involving massive amounts of data that just can't easily be moved. Just like these use cases couldn't be solved in private cloud alone because of things like blatant see on data movement, toe address, real time and requirements. They also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed in the only model forward. So how do you address this class of workload that requires all of the above running at the edge? With the latest technology all its scale, let me give you a bit of a preview of what we're working on. 
We are taking our open hybrid cloud technologies to the edge, Integrated with integrated with Aro AM Hardware Partners. This is a preview of a solution that will contain red had open shift self storage in K V M virtual ization with Red Hat Enterprise Lennox at the core, all running on pre configured hardware. The first hardware out of the out of the gate will be with our long time. Oh, am partner Del Technologies. So let's bring back burn the team to see what's right around the corner. >> Please welcome back to the stage. Red Hat. Global director of developer Experience burst Sutter with Kareema Sharma. Okay, We just how was your Foreign operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things. Okay, so just be ready for that. But I know many of our customers in this audience right now, as well as the customers who aren't even here today. You're running tens of thousands of applications on open chef clusters. We know that disappearing right now, but we also know that >> you're not >> actually in the business of running terminators clusters. You're in the business of oil and gas from the business retail. You're in a business transportation, you're in some other business and you don't really want to manage those things at all. We also know though you have lo latest requirements like Polish is talking about. And you also dated gravity concerns where you >> need to keep >> that on your premises. So what you're about to see right now in this demonstration is where we've taken open ship for and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hyper visor below this platform. It's open ship running on bare metal. And this is your crew vanities. Native infrastructure, where we brought together via mes containers networking and storage with me right now is green mush arma. She's one of her engineering leaders responsible for infrastructure technologies. Please welcome to the stage, Karima. >> Thank you. My pleasure to be here, whether it had summit. So let's start a cloud. Rid her dot com and here we can see the classroom Dannon Jessica working on just a few moments ago From here we have a bird's eye view ofthe all of our open ship plasters across the hybrid cloud from multiple cloud providers to on premises and noticed the spare medal last year. Well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that last year. Now, in this demo, we'LL take a look at three things. A multi plaster inventory for the open Harbor cloud at cloud redhead dot com. Second open shift container storage, providing convert storage for virtual machines and containers and the same functionality for cloud vert and bare metal. And third, everything we see here is scuba unit is native, so by plugging directly into communities, orchestration begin common storage. Let working on monitoring facilities now. Last year, we saw how continue native actualization and Q Bert allow you to run virtual machines on Cabinet is an open shift, allowing for a single converge platform to manage both containers and virtual machines. So here I have this dark net project now from last year behead of induced virtual machine running it S P darknet application, and we had started to modernize and continue. Arise it by moving. Parts of the application from the windows began to the next containers. So let's take a look at it here. I have it again. 
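As background for the virtual machine about to appear on screen: with container-native virtualization, a VM is declared to the cluster like any other resource. A minimal sketch follows, using the KubeVirt API of that era; the names, namespace, sizes, and disk claim are all illustrative:

    cat <<'EOF' | oc apply -f -
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: windows-app-vm                    # illustrative name
      namespace: darknet
    spec:
      running: true
      template:
        spec:
          domain:
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
          volumes:
            - name: rootdisk
              persistentVolumeClaim:
                claimName: windows-rootdisk   # backed by OpenShift Container Storage
    EOF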
>> Oh, look, you're in Windows. Earlier on, I was playing this game backstage, so it's just playing a little Solitaire. Sorry about that. >> So we don't really have time for that right now, Burr. But as I was saying, over here I have Visual Studio. Now, the Windows virtual machine is just another container in OpenShift, and the service for accessing that virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this year we have broadened Kubernetes-native infrastructure. It is our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads, using OpenShift on metal running all the way from the data center to the edge, or maybe right by your desk. There are two main benefits: one, to help reduce operational costs, and second, to help bring advanced Kubernetes orchestration concepts to your infrastructure. So next, let's take a look at storage. OpenShift Container Storage is software-defined storage, providing the same functionality for both the public and the private cloud. By leveraging the operator framework, OpenShift Container Storage automatically detects the available hardware configuration and utilizes the disks in the most optimal way. So when you add a node, you don't have to think about how to balance the storage. Storage is just another service running on OpenShift. >> And I really love this dashboard, quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though, Karima: what kind of applications would you use with this storage? >> Yeah, so this is persistent storage to be used by databases, your files, and any data from applications such as Apache Kafka. Now, the Apache Kafka operator uses Kubernetes for scheduling and high availability, and it uses OpenShift Container Storage to store the messages. Here, our on-premises system is running a Kafka workload streaming sensor data, and we want to store it and act on it locally, in a place where maybe we need low latency, or maybe in a data-lake-like situation. So we don't want to send the data to the cloud; instead, we want to act on it locally. Let's look at the Grafana dashboard and see how our system is doing. With an incoming message rate of about four hundred messages per second, the system seems to be performing well. I want to emphasize that this is a fully integrated system. We're doing the testing and optimization so that the system can auto-tune itself based on the applications. >> Okay, I love the automated operations. Now I am curious, because I know other folks in the audience want to know this too: can you tell us more about how this is truly integrated with Kubernetes? Can you give us an example of that? >> Yes. Again, I want to emphasize that everything here is managed purely by Kubernetes on OpenShift, so you can really use the latest tooling to manage it all. All right, next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live migration event. >> Okay, Knative is a great example. Actually, if you were part of my breakout session yesterday, you saw me demonstrate Knative, and if you want to get hands-on with it tonight, you can come to our guru night at five PM and actually get hands-on with Knative.
I have really enjoyed using Knative myself as a software developer, but I am curious about the Azure Functions component. >> Yeah, so Azure Functions is a functions-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes, so it works really well with our on-premises OpenShift here. Right now I have a simple Azure function here, and this function will send out a tweet every time we live-migrate our Windows virtual machine. So I have it integrated with OpenShift, and let's move a node to maintenance to see what happens. >> So basically, as that VM moves, we're going to see the event trigger the function. >> Yeah. An important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We're investing heavily in automation through the use of the operator framework, and also providing integration with the hardware. Right, so next, let's move that node to maintenance. >> But let's be very clear here. I want to make sure you understand one thing, and that is that there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare metal hosts. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this node to maintenance. I'll start the maintenance now. What will happen at this point is that storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will live-migrate, and this will create Kubernetes events. So we can see the events in the event stream; changes have started to happen. And as a result of this migration, the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM. Right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source node report: the migration has succeeded. It's pretty cool stuff right here. So we want to bring you a cloud-like experience, and what this means is that we're making operational ease of use a top goal. We're investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads on our auto-tuning system.
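For the curious, the "move a node into maintenance" step that triggers the live migration (and the tweet) maps to ordinary cluster operations; a rough sketch, assuming the oc client and KubeVirt's virtctl tool, with node and VM names illustrative and flag names varying slightly across versions:

    # Cordon and drain the node entering maintenance; with a live-migration
    # eviction policy, the VM is migrated rather than shut down
    oc adm cordon worker-2
    oc adm drain worker-2 --ignore-daemonsets --delete-local-data

    # Or trigger a live migration of one VM explicitly
    virtctl migrate windows-app-vm

    # Watch the migration-related events that the Knative-triggered function reacts to
    oc get events --sort-by=.lastTimestamp | grep -i migrat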
So, to close the loop, you can have your plaster connected to cloud redhead dot com for our insights inside reliability engineering services so that we can proactively provide you with the guidance through automated analyses of telemetry in logs and help flag a problem even before you notice you have it Beat software, hardware, performance, our security. And one more thing. I want to congratulate the engineers behind the school technology. >> Absolutely. There's a lot of engineers here that worked on this cluster and worked on the stack. Absolutely. Thank you. Really awesome stuff. And again do go check out our partner Dale. They're just out that door I can see them from here. They have one. These clusters get a chance to talk to them about how to run your open shift for on a bare metal cluster as well. Right, Kareema, Thank you so much. That was totally awesome. We're at a time, and we got to turn this back over to Paul. >> Thank you. Right. >> Okay. Okay. Thanks >> again. Burned, Kareema. Awesome. You know, So even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the to the first platform tenant that we learned with rail, that the platform has to be developer friendly. Our next guest knows something about connecting a technology like open shift to their developers and part of their company. Wide transformation and their ability to shift the business that helped them helped them make take advantage of the innovation. Their Innovation award winner this year. Please, Let's welcome Ed to the stage. >> Please welcome. Twenty nineteen. Innovation Award winner. BP Vice President, Digital transformation. Ed Alford. >> Thanks, Ed. How your fake Good. So was full. Get right into it. What we go you guys trying to accomplish at BP and and How is the goal really important in mandatory within your organization? Support on everyone else were global energy >> business, with operations and over seventy countries. Andi. We've embraced what we call the jewel challenge, which is increasing the mind for energy that we have as individuals in the world. But we need to produce the energy with fuel emissions. It's part of that. One of our strategic priorities that we >> have is to modernize the whole group on. That means simplifying our processes and enhancing >> productivity through digital solutions. So we're using chlo based technologies >> on, more importantly, open source technologies to clear a community and say, the whole group that collaborates effectively and efficiently and uses our data and expertise to embrace the jewel challenge and actually try and help solve that problem. That's great. So So how did these heart of these new ways of working benefit your team and really the entire organ, maybe even the company as a whole? So we've been given the Innovation Award for Digital conveyor both in the way it was created and also in water is delivering a couple of guys in the audience poll costal and brewskies as he they they're in the team. Their teams developed that convey here, using our jail and Dev ops and some things. We talk about this stuff a lot, but actually the they did it in a truly our jail and develops we, um that enabled them to experiment and walking with different ways. And highlight in the skill set is that we, as a group required in order to transform using these approaches, we can no move things from ideation to scale and weeks and days sometimes rather than months. 
And I think that if we can take what they've done, use more open source technology, and apply that technology across the whole group to tackle this dual challenge — I think we can now use technology, and open source technology, to solve some of these big challenges that we have and actually preserve the planet in a better way. >> So what's the next step for you guys at BP? >> Moving forward, we are embracing becoming a cloud-first organization. We need to continue to deliver on our strategy, build out the technology across the entire group to address the dual challenge, and continue to make some of these bold changes — and really use our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves, our children, and our children's children. >> That's a big goal. But thank you so much, Ed. Thanks for your support, and thanks for coming today. >> Thank you very much. Thank you. >> Now comes the part that, frankly, I think is the best part of this presentation. We're going to meet the type of person who makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them reach the kinds of bold goals you've heard about today and the ones you'll hear about throughout the week. >> I think the thing I like most about it is that you feel that reward of just helping people — and helping people with stuff you enjoy, right, with computers. My dad was the math and science teacher at the local high school, so in the early eighties that kind of made him the default computer person. He was always bringing computer stuff home, and I started at a pretty young age. >> What Jason's been able to do here is really evangelize a lot of the technologies between different teams. I think a lot of it comes from the training and the certifications that he's got. He's always concerned about the developer experience: how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. >> We're a loan company, you know, so we lean on a company like Red Hat; that's where we get our support from, and that's why we decided to go with a product like OpenShift. I really, really like the product, so I went down the certification route and the training route to learn more about OpenShift itself. >> My daughter's teachers were doing a day of coding, and they asked me if I wanted to come and talk about what I do, and then spend the day helping the kids with their coding class. >> The people we have on our teams, like Jason, are what make us better than our competitors. Anybody can buy something off the shelf. It's people like him who are able to take that and mold it into something that becomes a great offering for our partners and for customers. >> Please welcome Red Hat Certified Professional of the Year, Jason Hyatt. >> Jason, congratulations. Congratulations. What a big day, huh? What a really big day. It's great to see the work you've done here. But what's really great, and what shows in your video — it's especially rewarding to us, and I'm sure to you as well — is to see how skills can open doors, for one, for young women like your daughters who already love technology.
So I'd like to present this to you right now. Congratulations. Congratulations. And I know you're going to bring this passion — I know you bring it to everything you do. So, congratulations again. >> Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to share the experience. >> It's really great. It's really great to see them all here as well — maybe you guys could stand up. So before we leave the stage, I just wanted to ask: what's the most important skill that you'll pass on from all your training to the future generations? >> I think the most important thing is that you have to be a continuous learner. You can't really settle; you can't be comfortable with only what you already know. You have to really be a continuous learner. >> I don't even have to ask you the follow-up question. Of course. Right. Of course. That's awesome. And thank you, thank you for everything that you're doing. So thanks again. >> Thank you. >> You know, what makes open source work is passion, and people who apply their considerable talents and that passion, like Jason here, to making it work and to contributing their ideas back. And believe me, it's really an impressive group of people. You should know — you, your family, and especially Berkeley in the video — that the Red Hat Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that. So you should be very, very proud of that. And I also can't wait to come back here on this stage ten years from now and present that same award to you, Berkeley. So, great — you should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've set, and realized, some bold goals over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, build a company based solely on open source software. It seems so logical now, but it had never been done before. Next, build the operating system of the future that's going to run and power the enterprise — making a Linux-based operating system the standard platform in the enterprise. And after that, make hybrid cloud the architecture of the future, make hybrid the new data center — all leading to the largest software acquisition in history. Think about it: built around a company with one hundred percent open source DNA throughout. Despite all the FUD we encountered over those last seventeen years, I have to ask: is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world is what we set out to do from the first day Red Hat was born. But we only got to that goal because of you — many of you contributors, many of you new to open source software and willing to take the risk alongside us, and many of you partners on that journey, both inside and outside of Red Hat. Going forward, with the reach of IBM, Red Hat will accelerate even more. This will bring open source innovation to the next-generation hybrid data center, continuing our original mission and goal: to bring open source technology to every corner of the planet.
What I just went through in the last hour, while mind-boggling to many of us in the room who have had a front-row seat to this over the last seventeen-plus years, has only been Red Hat's first step. Think about it: we have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multibillion-dollar enterprise software world, and even the next-generation hybrid architecture would not be possible without Linux at the core and the open innovation it feeds to build around it. This is not just a step forward for software; it's a huge leap in the technology world, beyond even what the original pioneers of open source could have imagined. We have witnessed open source accomplish in the last seventeen years more than most people will see in a career, or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say, to everybody in this room and everyone outside: continue the mission. Thanks, and have a great Summit.
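The keynote leans repeatedly on the Operator Framework and the idea that operators keep a cluster aligned to a declared, desired state. For readers who want a feel for what that means in code, here is a deliberately minimal sketch of the reconcile idea using the Kubernetes Python client. The "DemoApp" custom resource and its fields are invented for illustration; real operators are normally built with the Operator SDK or controller-runtime rather than a loop like this.

```python
# Minimal sketch of the operator/reconcile pattern: read the declared state
# from a (hypothetical) custom resource and converge a Deployment toward it.
from kubernetes import client, config

GROUP, VERSION, PLURAL, NAMESPACE = "demo.example.com", "v1", "demoapps", "default"

def desired_replicas(custom_resource):
    return custom_resource.get("spec", {}).get("replicas", 1)

def reconcile_once():
    apps = client.AppsV1Api()
    crds = client.CustomObjectsApi()
    items = crds.list_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL)["items"]
    for cr in items:
        name = cr["metadata"]["name"]
        want = desired_replicas(cr)
        deployment = apps.read_namespaced_deployment(name, NAMESPACE)
        if deployment.spec.replicas != want:
            # Converge: patch the Deployment so actual state matches the CR.
            apps.patch_namespaced_deployment(name, NAMESPACE, {"spec": {"replicas": want}})

if __name__ == "__main__":
    config.load_kube_config()
    reconcile_once()  # a production operator would run this continuously from a watch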

Published Date : May 11 2019

SUMMARY :

The keynote opens with Red Hat Enterprise Linux 8: the web console, Application Streams for fast-moving language ecosystems, Image Builder for composing images for the hybrid cloud, in-place upgrades, and Red Hat Insights with Satellite for proactive, rules-based management of registered systems. Lawrence Livermore National Laboratory joins to discuss running Linux from commodity clusters up to exascale supercomputing. The OpenShift 4 segment shows operators keeping clusters in a desired state, a SQL Server operator made available to application teams, automated rolling upgrades across clusters, and a single console spanning on-premises and Azure clusters. A bare-metal, Kubernetes-native infrastructure demo combines OpenShift, OpenShift Container Storage, Windows virtual machines, Kafka, and an Azure Function that tweets when a node is drained and the workload live-migrates, with Dell partner hardware on the show floor. BP's Ed Alford accepts the Innovation Award for the company's agile, open source digital conveyor, Jason Hyatt is named Red Hat Certified Professional of the Year, and the keynote closes by recapping Red Hat's bold goals and the IBM acquisition.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Adam Ball | PERSON | 0.99+
Jessica | PERSON | 0.99+
Josh Boyer | PERSON | 0.99+
Paul | PERSON | 0.99+
Timothy Kramer | PERSON | 0.99+
Dan | PERSON | 0.99+
Josh | PERSON | 0.99+
Jim | PERSON | 0.99+
Tim | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Jason | PERSON | 0.99+
Lars Carl | PERSON | 0.99+
Kareema Sharma | PERSON | 0.99+
Wilbert | PERSON | 0.99+
Jason Hyatt | PERSON | 0.99+
Brent | PERSON | 0.99+
Lenox | ORGANIZATION | 0.99+
Rich Hodak | PERSON | 0.99+
Ed Alford | PERSON | 0.99+
ten | QUANTITY | 0.99+
Brent Midwood | PERSON | 0.99+
Daniel McPherson | PERSON | 0.99+
Jessica Forrester | PERSON | 0.99+
Lennox | ORGANIZATION | 0.99+
Lars | PERSON | 0.99+
Last year | DATE | 0.99+
Robin | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
Karima | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
seventy pounds | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
John F. Kennedy | PERSON | 0.99+
Ansel | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
Edward Teller | PERSON | 0.99+
last year | DATE | 0.99+
Teo | PERSON | 0.99+
Kareema | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
today | DATE | 0.99+
Python | TITLE | 0.99+
seven individuals | QUANTITY | 0.99+
BP | ORGANIZATION | 0.99+
ten ten thousand times | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Chris | PERSON | 0.99+
Del Technologies | ORGANIZATION | 0.99+
python | TITLE | 0.99+
Today | DATE | 0.99+
thousands | QUANTITY | 0.99+
Robin Goldstone | PERSON | 0.99+

Steve Speicher, Red Hat | Red Hat Summit 2019


 

>> Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019, brought to you by Red Hat. >> Welcome back to theCUBE and our continuing coverage of the Red Hat Summit. This is the sixth time around for us, the fifth time for Stu Miniman — so he still gets almost perfect attendance — and the first time for me, so I still have a lot of catching up to do. Stu Miniman and John Walls here, and Steve Speicher now joins us. He is a senior principal product manager for developer tools at Red Hat. Steve, good afternoon. Thanks for joining us. >> Thanks for having me. >> Let's just talk first about development in general. There's a lot of give and take there, right? You're trying to listen: what are the needs, where are the deficiencies, where can the improvements be made? But how much do you drive that on your side, and how much do you listen and respond to what the community is asking for? >> Yeah, we do a little bit of both. A lot of it is responding to the community, and that's one of the areas where Red Hat has really excelled: taking what's popular and working upstream, and helping move it along to make it a stable product or stable solution that developers can use. But we also have a certain agenda, certain platforms we want to present — from various runtimes to actual container platforms — and so we want to drive some of those initiatives on our own to help fill the gaps, because we hear it from customers a lot: the things you're doing are great, but all these projects need to come together into a product, a unified experience. So we spend a lot of our time trying to bring those things together as a way to help developers do those different tasks, and we also focus across more than just the Java runtimes, of which we have a lot. >> So you might have an end product in mind, and you realize there might be a gap in terms of development, so you encourage, or try to bridge, that gap a little bit to get to that end product — is that what you're saying? >> Yeah, so we do a lot of things to help build the pieces so that people can sometimes build the experiences they want. In the end, developers control their own destiny, their own set of tools, and a lot of customers have their own unique requirements — even tools they develop in-house for, say, regulatory reasons and other things. So we have to both build the pieces and also stitch the pieces together to help them have that out-of-the-box experience, because some customers really don't want to do that themselves; they just want a turnkey solution. But then we may need to make some adjustments here and there. >> Steve, you know, it's funny — it rhymes for me with what I saw fifteen, twenty years ago with Linux: a lot of changes, a lot of pieces, I want to take advantage of it, but boy, can somebody help me with this? And of course Red Hat rode that wave pretty well. Today, Kubernetes is even more sprawling. There are so many different projects, so many pieces — it is complicated. So how do we take advantage of that? What do I need to know? What can my platform vendor do for me so that I don't have to manage all of that? I'd love you to expand on that and give us a bit of compare and contrast: what's the same, what's different? >> Yeah, so there are different aspects.
I think the developer experience is one thing we've talked about: it should just work. If it's Kubernetes, we've spent a lot of time making sure it's hardened and works well, so you're not debugging it and spending time on things that waste development time; instead, folks build on top of it. We also look at how we can build abstraction layers on top of that. So we built a CLI tool called odo, which is a streamlined developer experience for OpenShift — it's really focused on OpenShift — so that the developer can just focus on their application. They can deploy it, iterate quickly, work on changes before they commit to Git, and then they can have a similar experience in the browser with things like Eclipse Che and CodeReady Workspaces, our commercial offering behind that, which actually uses the platform itself to do development. That's really super cool: you can have your IDE in the browser, and the workspace — all your dependencies, everything you would normally have on your laptop — is something you no longer need to worry about; it's containerized and quickly spun up as a way to do development. Enterprises really enjoy that, because they get quick satisfaction: they get the proprietary code off the desktop, they're using their container platform, and it's building the same way it will build when it's deployed. >> My background is on the infrastructure side, and the whole reason we have infrastructure is to be able to run our apps. The holy grail we've wanted is that my developers shouldn't need to think about the stuff underneath. We looked at virtualization, we looked at containerization, and the nirvana of serverless, as they call it, is that I shouldn't have to think about it at all. How are we doing? Because at the end of the day, when I talk to users, it's: oh jeez, I still need to worry — what if something breaks, and I need to understand the security of my environment. What are you seeing and hearing from customers doing development on top of this? >> Yes, so there are different stories, like twelve-factor apps: if you stay within certain parameters, you can have a lot of success, and that's still largely true today. Serverless takes that to the next level, where you have a predefined function spec to build to, and then things are really easy and you don't have to worry about various aspects. But when you look at the various vendors, working with different functions is still complex: I need to provide the security, I need to make sure the pieces are wired together, how do I log these things, how do I debug when things across this mesh go wrong? So it's getting better, but there's still a lot of work to do to continue to improve it, and you'll see a lot of innovation happening in that area, especially in the work we're doing. >> What kind of give and take do you have there? Not only what is the community learning from you and the tools you're providing, but what are you getting back from it, other than advancing a project — in terms of expertise, in terms of understanding, maybe a new way to build a better mousetrap, where someone comes up with an interesting idea and you think, wow, I hadn't thought of that? >> Yeah, I think that's where the partnerships we've had with various companies come in — like we saw with Kubernetes before, or with the Knative project last year. That really took a different way of looking at serverless, moving it forward to say, yes, this is a different way to do this on Kubernetes: you even abstract that API away, so it's just Knative for serverless, and Kubernetes becomes kind of an implementation detail behind it. So it's really interesting to see things like that, and then also the recent announcements with Microsoft and Azure Functions, wiring in the event sources there and making sure the functions they're building run on Kubernetes — and our Kubernetes is OpenShift. So it's really kind of completing the lifecycle. >> So if we could just step back — you talk about Kubernetes and OpenShift specifically; you've got the partnership with Google and they've got GKE and Anthos, you've got the partnership with Amazon and they've got EKS. These things aren't fully seamless and interoperable. I usually hear some confusion in the marketplace: Kubernetes can run in lots of places, but if you choose an implementation, that's your implementation, and you can't just swap all the various implementations for one another. So maybe you could expand on that a little: what's the goal, where are we with the maturity, and where does it need to get to? Because it definitely looks a little complicated from the seat I sit in. >> Yeah, it's somewhat complex. I think it goes back to the early days of Linux: you'd say you have an application that can run anywhere there's Linux, and that's kind of true, but there are always certain security settings or packages that need to be enabled. That holds true for the Kubernetes world as well. You can lock it down a certain way, you can open it up a certain way, and so you see a lot of content delivered assuming certain privileges on the system, while other systems don't allow it. So I think, more and more, through standardization and the conformance testing we do, it really helps people know that what they're getting their hands on is a full-fledged Kubernetes, and that the part they care about most works well. So I see that continuing to evolve, and I also see tools that abstract even more — like Knative, as I mentioned, for serverless workloads or functions, and then tooling built on top of that which natively understands the platform and its requirements, to move applications across different systems. We have a lot of customers who run OpenShift as well as other Kubernetes instances, so we have a requirement to stay conformant and keep workloads portable; it's an important part of moving forward. I still think there's a lot of work to be done to make these things a smoother process, but there are a lot of interesting things going on. >> So, any interesting trends with workloads? That's one of the things we always look at: am I just taking the old workloads and running them in a new place, or are there new workloads? Anything jumping out at you from the customers you talk to? >> Yes. As I mentioned, serverless comes up multiple times — the whole idea of auto-scaling and only using resources when you need them is a big deal, so we see a lot more of those small, single-purpose, event-driven functions. Then there's machine learning and big data, which just continues, and GPU resources. And running VMs on Kubernetes: when I first heard that, four years ago, I laughed out loud, and now I understand the seriousness of it — it's something that's happening and becoming mainstream. So now pretty much everything fits within the same orchestrator for those workloads. >> You're not laughing anymore, right? >> No. >> No, because there are some areas in which the concerns are certainly understandable — security is one of those. A lot of attention is being paid to automation these days, and a lot of opportunity there. Is there one area, or a couple of areas, where you'd say there are greener pastures in terms of providing developers with more sophisticated or more effective tools — areas that could really use that kind of a boost? >> Yeah, I think there are a lot of things, but one thing I see in this area is still a lot of fragmentation; I'm not sure I see a single way that things work. I'm seeing a lot of great work — like the Microsoft VS Code tooling, just as an abstraction that brings certain things together, and the nice Kubernetes plugin for it; we're collaborating with Microsoft to extend it for some of the OpenShift use cases. But that mostly comes down to meeting developers where they're at, and we'll continue to invest across the different sets of tools. The more you keep up with the list of tools in this ecosystem — every time I present it, someone says, I didn't know about those, and here are more you didn't know about. It just continues to grow, people continue to innovate, and I think that's exciting, because we continue to evolve it. So I don't think there's much narrowing down to a smaller set of things; I think it's going to continue to expand. >> Speaking of expansion: at Microsoft Build yesterday there was the announcement of KEDA — Azure Functions with OpenShift. Help us parse a little bit what that is. >> Yeah. What that's about is really taking Azure Functions and allowing those workloads to run on OpenShift, because they're targeted at Kubernetes, and of course OpenShift is a Kubernetes distribution. So it allows that to happen. There's also a unique autoscaler that lets the workload run in a more serverless fashion, and it ties into some of the Azure event sources, like the message queues and event buses, and Kafka as well. So now you can wire in your Azure pieces and run them either hosted on Azure or on OpenShift with those Azure Functions. >> Okay, just to clarify: this is today separate from the Knative initiative you were talking about earlier? >> Yes, that's right. This touches on some of those points, and the idea behind the project — it was an early preview announcement showing some progress — is to wire in some of the pieces, starting with the Knative serving pieces, to allow running those applications on OpenShift, and also the Knative event sources, so you can take a combination of events, trigger your functions, and do some of these exciting things. >> Can I ask you — you're doing sessions here at this show — how many of the people here are talking about serverless and looking at that bleeding edge, versus other technologies where you find them spending more time in the tooling? >> It's a wide range. I'm really shocked by what some of the customers at the bleeding edge are doing with Knative: we saw whatever 0.3 release out there with this, and we'd really like this auto-scaling capability, because we're spending a lot of money running applications that aren't doing anything, so we like the better autoscaler that's out there. Others are really just trying to understand more about container technology. I was just talking with someone after a session: this is what we're trying to do, we need to containerize applications, how do I build a CI pipeline around it? So it's a wide range of things you see here. >> Well, you're certainly at the center of the inspiration and the innovation of the industry. I know you're in an exciting place, and it's something new every day for you, probably, right? >> Oh, it is. Yeah. Especially when these big conferences and announcements come out. >> Gear up, right? Yeah, exactly. Good job, Steve. Thank you for joining us here. We appreciate the time and wish you well down the road. >> Thanks so much. Enjoyed being on. >> Steve Speicher from Red Hat, joining us here for the first time on theCUBE. Good to have you, Steve. And good to have you with us as we continue our coverage from Boston of the Red Hat Summit.

Published Date : May 7 2019

SUMMARY :

Steve Speicher, senior principal product manager for developer tools at Red Hat, discusses how Red Hat balances responding to the upstream community with driving its own platform initiatives, and how tools like the odo CLI, Eclipse Che, and CodeReady Workspaces streamline cloud-native development on OpenShift. He covers the state of serverless, Kubernetes conformance and workload portability across distributions, emerging workloads such as machine learning and VMs on Kubernetes, fragmentation in developer tooling, and the KEDA announcement with Microsoft that brings Azure Functions and their event sources to OpenShift.
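The KEDA announcement described above pairs Azure Functions' event sources with an event-driven autoscaler on Kubernetes and OpenShift. As a rough illustration of that model, the sketch below registers a KEDA ScaledObject that scales a deployment (including down to zero) based on Kafka consumer lag. The API group and version, the field names, the deployment name, and the Kafka connection details are assumptions based on early KEDA previews, not anything stated in the interview, and the schema has changed across later KEDA releases.

```python
# Illustrative sketch: create a KEDA ScaledObject via the Kubernetes API so a
# deployment scales on Kafka queue depth. Names and schema details are assumed.
from kubernetes import client, config

def create_scaled_object():
    config.load_kube_config()
    scaled_object = {
        "apiVersion": "keda.k8s.io/v1alpha1",     # early KEDA API group (assumed)
        "kind": "ScaledObject",
        "metadata": {"name": "orders-scaler", "namespace": "demo"},
        "spec": {
            "scaleTargetRef": {"deploymentName": "orders-function"},  # hypothetical app
            "minReplicaCount": 0,                  # allow scale-to-zero
            "maxReplicaCount": 20,
            "triggers": [{
                "type": "kafka",
                "metadata": {
                    "bootstrapServers": "my-cluster-kafka-bootstrap:9092",
                    "consumerGroup": "orders",
                    "topic": "orders",
                    "lagThreshold": "50",
                },
            }],
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="keda.k8s.io", version="v1alpha1",
        namespace="demo", plural="scaledobjects", body=scaled_object,
    )

if __name__ == "__main__":
    create_scaled_object()
```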

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Steve | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Steve Spiker | PERSON | 0.99+
two | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
John Walls | PERSON | 0.99+
Karina | PERSON | 0.99+
CIA | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
last year | DATE | 0.99+
First time | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Fifth time | QUANTITY | 0.99+
Steve Speicher | PERSON | 0.99+
today | DATE | 0.99+
first time | QUANTITY | 0.98+
Cuba | LOCATION | 0.98+
Today | DATE | 0.98+
one thing | QUANTITY | 0.98+
K Native Initiative | ORGANIZATION | 0.98+
four years ago | DATE | 0.98+
Rodeo | TITLE | 0.98+
first | QUANTITY | 0.97+
six time | QUANTITY | 0.97+
Eclipse Jr Code | TITLE | 0.97+
Goldmark | ORGANIZATION | 0.97+
both | QUANTITY | 0.97+
Red Hat Summit | EVENT | 0.97+
Red | ORGANIZATION | 0.96+
Kate | PERSON | 0.95+
Cooper Netease | ORGANIZATION | 0.93+
Taquito K | PERSON | 0.93+
Red Hat Summit 2019 | EVENT | 0.92+
Phil | PERSON | 0.91+
Maura | PERSON | 0.91+
One | QUANTITY | 0.91+
zero dot | ORGANIZATION | 0.9+
twenty nineteen | QUANTITY | 0.9+
Cave | LOCATION | 0.9+
single way | QUANTITY | 0.88+
Warren | PERSON | 0.87+
Dr | PERSON | 0.86+
Dick | TITLE | 0.84+
Cooper | ORGANIZATION | 0.84+
couple | QUANTITY | 0.81+
fifteen, twenty years ago | DATE | 0.8+
Jen one | PERSON | 0.77+
Kafka | PERSON | 0.75+
stew | PERSON | 0.74+
single purpose | QUANTITY | 0.71+
Lenox | ORGANIZATION | 0.71+
Factor | TITLE | 0.7+
China | LOCATION | 0.69+
Seelye | TITLE | 0.61+
Mohr | PERSON | 0.56+
Lennox | PERSON | 0.5+
many | QUANTITY | 0.47+
Kuban | ORGANIZATION | 0.46+
Cube | ORGANIZATION | 0.39+
three | QUANTITY | 0.38+
Thea | ORGANIZATION | 0.33+

Pali Bhat, Google Cloud | Google Cloud Next 2019


 

live from San Francisco it's the cube covering Google cloud next 19 taught to you by Google cloud and its ecosystem partners hello everyone welcome back to the cubes live coverage here in San Francisco the Moscone Center for the Google clouds conference is called Google next 2019 I'm Chevrolet my costume in omim de Ville ante is also here doing interviews our next guest is probably Bob who's the VP of product and design for server lists at Google probably great to see you thanks for coming on thank you for having me so you'd be a you're the VP of Product you got the keys to the kingdom on the roadmap you're seeing all the announcements obviously server lists cloud run was announced cloud code was mentioned on stage that's going to come out tomorrow so code build run this is DevOps this is actually happening yeah you know what super exciting is that we've we're finally solving the problem for customers and taking a customer centric view of this I'll start off with a little bit of the journey we took to get here right as we were talking to customers they kept coming back to three things that they wanted from us the first thing they wanted was agility they understand that you know cloud could give them great cost savings but they also wanted to be able to move faster and innovate right the second bit they wanted was having the flexibility to be hybrid and multi-cloud super important especially to our largest customers and then the third piece was they've really struggled with his journey to cloud and they wanted our partnership to make it a much more seamless and non-deceptive journey so as we talk to them about these three things right we came back to the drawing board and said hey what are the products that we can build to make their journey to be more cloud native and more agile much more seamless and future-proofed that much better right so we came back to the drawing board and came up with three products that you talked about this now the first was we looked at developers and their journeys and we said look they're building in traditional ideas like IntelliJ or vs code optimized for local development right and they're not writing a lick of Yama they're right for kubernetes and we said okay how can we take those environments and help those development teams build cloud native apps really really easily so really just turbocharging their cloud native development so bill cloud code which extends their local ids and lets them deploy to remote clusters so they can get full debugging full deployment building its integrated in the cloud build and they get the full kubernetes a development environment right in place so cloud build was released earlier you got enhancements of that so news the hard news here is enhancements to cloud build cloud code as new announce here yeah cloud run announced today that's right so this is the new this is the new hard news that's right so bottom line what does it mean for a developer so like I didn't enterprise so I'm a cio I'm a site C so I'm gonna be putting all my eggs in the cloud basket I've still gonna run the on Prem day is gonna be critical to my strategy it's this early day set up time or are you guys thinking it's more about the setup or more the life cycle of CI CD pipelining all the way to application deployment a great question John so I think where we are in this journey is that enterprises have started off with something that's the most basic cloud ready workloads that have been lifted and shifted we now see the next wave of workloads this is the 
80% of workloads that are still on premise we see them start to get cloud ready and cloud native and the way that their enterprises are gonna do that is by building on top of the standards we've created like kubernetes and sto and key native and what cloud cold and build and run and of course Anthes that we talked off this morning as well these are great managed solutions from Google fully managed solutions from Google that let you get cloud native fast all right Polly wonder if you can help us you know spin through I see a disconnect in the market so you know Google showed great leadership in the container space and of course kubernetes we came out of Google and when I look at like cloud run okay it's helping to connect that and Kay native to kubernetes in service when I talk to a lot of the developers and service it's not the infrastructure moving up the stack it's they didn't want to even think about it it's right built in the cloud that's right I focus on the application I don't even think about that so I've got this big gap as to you know on premises forget it I don't never want to touch it or think about it and you know the one of the reasons you know there's the term server list would put it to the side but now if I need one is this environment I don't want to think about it and we know hybrid is a reality but there's this big disconnect as to what kind of developer are you or you a DevOps person that came from an infrastructure background or are you just building apps today yeah yeah yeah we're definitely seeing that from our customers right so one thing that we hear all the time is developers don't want to just not think about infrastructure they actually want the managed service and the platform they're building on to think about the infrastructure and optimize it for them so it's not this program will infrastructure it it's cloud run programming the infrastructure for you so you don't have to do it and I think increasingly you're gonna see products like cloud run and anthos and cloud code let developers focus just on code because that's what they want to do right I don't ever seen a developer say I really want to write a Yama file or I want to set up more configuration parameters right so I think we're gonna get to the place where you have developers being able to focus on cold and all of the rest of this being taken care of by platforms like code and run and anthos automation becomes key I mean Jennifer Lynn's demo I thought was very game-changing because she made the comment developers can focus on their code and agility not access permissions and all the configuration management that goes on under the you guys gonna provide that in an automatic programmable way we're gonna believe he is and she kind of teased out service missions so service missions kind of point in the future which is app developers are gonna still need to be aware of maybe not aware of what cloud run how to manage those sirs as they come stand up and get pulled down dynamically yeah how do you view that because this has become a gonna become complex is that gonna be automated is that where cloud run comes in you expand on this whole impact of service meshes because that's the next level that's right that's right so if you think about key native it's built on kubernetes and it forms the kind of triad with sto as well right and what a product like cloud run does is it lets you not have to think about that because at the end of the day we don't want developers to have to think about K native what cloud run is 
it takes care of the K native portability and compatibility for you and all you do is focus on the code itself right so ultimately we want developers to focus on their applications but I will say this right we do care about another important constituent which is all of those folks who've already got an apps built out there can those workloads be serviced as well and that's part of the problem we're trying to solve it that's an operational thing all right so let's take a step back here so server list actually fanfare has been great we're seeing a lot of traction people are enamored by it because functions as a service has been very compelling whether it's retail managing you know that spiked loads and becomes we see some some use cases where it's like you know really an amazing thing where is it limiting what is the next level growth for server list where do you see you mention workloads and we see people deploying functions and being happy with it are there limitations with serverless how does it go to the next level can you take a minute to describe the current state of server lists and what's coming around the corner now so great question the first thing I'll say is that there's a ton of developers who come up to us every day and tell us cloud functions is awesome right and they really like functions as a service they like the event-driven approach to it they like the service full approach but several is provides love the programming model that's great but there's an another large contingent of developers who tell us look this is super constraining for what I want to do I don't get to choose the libraries I want you're forcing me into a particular programming model can you give me more flexibility and what they see every day is the flexibility that containers provide especially on kubernetes right and what we've tried to do with cloud run is try to bridge those worlds where you get all of the flexibility that you want right that you get with containers but then combine it with what what you really want with the operational model which is service right so you pay only for what you use and of course you get the agility of service as well now one thing that we've noticed heard some great stories about this is a customer of ours Veolia which is one of the early adopters of cloud run and they've been partnering with us we thank them for it they are running a complex workload you talked about retail what Veolia does is they're large French multinational they do energy water and environmental services these are things that need to be highly reliable very complex and these are workloads that have existed for ages right and what viola is doing is using cloud run to run that complex workload but in a service in a service full way running in a service fashion all right take a minute explain what's a complex workload for your definition what is a simple workload because guys again we love functions Stu and I always talk about how great it is but what's that what's the D mark line when when does something become complex by your standards where you guys are addressing they could think describe the characteristics of a complex workload so the first thing is does the workload require flexibility right meaning are their custom workloads sometimes even legacies C++ or C applications do they need to pull that functionality in as well right do they need to pull random artifacts from across the enterprise to combine it and sometimes these are things that have been built over 20 years ago they're really 
critical mission critical pieces of software that need to be able to trigger and run right and can we actually take that flexibility but also combine in with a highly reliable environment right so were close like New Orleans there is no downtime right they need to be up 24 by 7 for 365 days of the year right so that flexibility plus that level of reliability is what we look at when we look at complexes so you're getting into complex systems where you got some code may be written in a mainframe COBOL in C++ we mentioned that was my jamm what kind of old dating myself but that was state-of-the-art back in the 90s so I'm running an agile job maybe of standing up cloud native but I need a use software and data from a system that's where is that where the container piece comes that ku burning it on either kubernetes but cloud run also supports docker so let's say you're running it in a docker container all you need is a docker container image and we can host that workload on program yeah Polly help us understand where where Google kind of what what's the same one what's different compared to the other service offerings out there just what I've heard feedback the last year or two is you know the great thing about server list is it's really easy to get started I've talked to marketing people that have no coding background that you know can get off and running it but doing complex mission-critical stuff yeah like we understand you know there is no magic wand NIT no silver bullet to make it easy but you know what do you see as Google's role in in this broader marketplace and you know where does open-source fit into that too yeah yeah so first I'll start off by saying there's a whole host of functions that are running on cloud functions which are relatively lightweight simple targeted event-driven functions those work great where we see us really making a difference for our customers is in two ways the first is get these more complex workloads that are currently running in a container whether it's a docker container our and or on gke for that matter and bring the agility of service to those workloads so it's the first thing it's something that we think is very unique because combining containers with serverless the second bit really is the open approach we've taken right built on top of K native key native as you know has a number of partners so one of the cool demos that you'll see during during Google Cloud next is you'll see a workload being shifted from cloud run on gke to the IBM cloud IBM is one of our partners 4k native without a single line of code and that flexibility is something that I think customers really decided talk about the business pen and some of the benefits at the business level in a developer level at the operations level can you hit those three points yeah of serverless silikal server less on those three sectors what's the benefits yep so we talked about the benefits for developers for developers it's simply about agility focus on your own code don't worry about Gamal don't worry about ki native you don't have to worry about any of that we'll take care of it for you the second benefit that I'll talk about is again this is just a benefit for the CIO which is hey we're gonna give you the flexibility and the openness so you can have portability of your workloads across whatever and why are you environment you want whether it's on tram or in a cloud whether it's Google or another cloud that's the second benefit the third bit is all of the operational benefits of service one of 
the things you'll see us do and continue to commit to do is we'll bill you to the hundredth of a millisecond right and so you'll continue to get that with all of the resiliency you expect of Google infrastructure security also pretty much baked in as well security is big then there's a fully managed offering from Google and so you'll get security compliance policies all Big Data of course we watched the keynote and we watch every word from Koreans giving Diane green a little tip of the hat which was nice signal a lot of class a great respect for that but jennifer lynn said something i want to get your reaction to she was kind of talking about her thing doing a great demo he changing and when she said this would allow you to negotiate better contracts okay that might have been a slip of the tongue your reaction that that implied to me I took that and say whoa that means leverage shifts to the customer your thoughts and that kind of maybe a slip of the tongue but if you're saying that I couldn't have options and choice yes Janice pardon this is what customers want and at Google what we're focused on is giving customers what they want and one of the things that customers are worried about today is lock-in and especially in the server this area because the current offerings are so proprietary customers are worried about it because they want server lists for all the benefits offers that we talked about here but they do want that flexibility and that's what we negotiate actually we know Oracle is very strict on their cloud this is going to give customers the choice is the saying that's whoa you want a license renewal yeah that's what you're getting out here so Polly you talked about choice and flexibility you know kubernetes gives some of that concern with serverless is if I look at a sure if I look at AWS if I look at Kay native you know those three aren't the same I talked there there's a small start-up called trigger mesh that's getting Kay native to work with AWS lambda but do you see a future is there you know I've talked to the CMC F I've looked at some of the various pieces that you know serverless isn't just something that I'm baked into a cloud yeah look I think we've seen extraordinary momentum around Kay native it's very similar to what we had seen when in the early days of kubernetes this huge amount of ecosystem interest and so we'll see continued innovation where you'll see work load portability come to service and I'm confident in that because of all of the momentum we were seeing around Canada so we're committed at Google to K native and its success so you'll see us continue to innovate yeah talk about open source open source becomes a very strategic part you can Shin kubernetes which you guys were the that have the DNA the founding fathers of kubernetes now teams on the team went to vmware someone have Microsoft some stay within Google containers certainly we see what you guys have done when four against four J but open source still this fear of open source I mean I don't mean it in a way that it's going to be inhibited and primitive but support making sure s LA's work latency microservice is going to be involved you mentioned k- yeah so as open source accelerates the time then value for the code that also triggers this op side of the serviceability and reliability and support what's your thoughts on that how are you guys how do you see the industry supporting that that critical piece of the puzzle yeah could not be more critical right for customers to be able to adopt this 
because the number one thing that we need to do for customers is give them a managed offering that lets them not have to worry about security lets them not have to worry about compliance lets them not have to worry about policies or identity etc right bake all of that into the managed service and then the second operational bit is which is as important this goes to what Thomas talked about at the very end of his keynote which is the open source announcement is we want to make it simple for customers to adopt it will be supported by Google and the partner you'll get unified billing unified support and one person to call when you have a problem yeah Polly we're at an interesting point in open source today because they're they want to get your opinion as a product person and your relationship with open source because you know there's a certain cloud out there it's they're gonna give you open source as a managed service but you have some of the companies that are making like open source databases changing their policies to try to fight against just being you know taken over by somehow the big players how does Google react to that yeah for us the approach is all about partnership because we think together we can better serve customers needs and best serve them and so our approach has always been about partnership so whether it's kubernetes or key native or the larger manage store manager open source offerings that we talked about earlier in the keynote we want to bring all of these together so we can serve customers so you're gonna see us continue to like support the open source equals because we believe that innovation is absolutely critical to helping our customers really start innovated in be agile final question I know we're tight on time I want to get this in because you know I see a lot of positive I've come out of the show there's been some critical analysis around you've got to build up salespeople and all the field stuff which is you guys are well aware of but one of the things that was kind of teased out in the open source announcement was the role of Google having their own ecosystem Asli the C & C has been a big tailwind for Google you guys been a big part of that ecosystem as a cloud commercial provider and with these kinds of server list you're going to have an ecosystem starting to develop kind of a thousand flowers blooming pun intended so how do you see that in your area because this is going to be super important partnering ecosystem support yeah which is you know developer traction distribution of software integration opportunities that's why in monetization all kind of come together your thoughts huge hugely critical for us and that's something that we've been focused on we have a rich ecosystem of partners for service we're gonna continue to build it out across all of the different pieces you need one of the things we didn't talk much about was our entire operational stack monitoring logging all of those pieces right we need to bring all of those together along with all of our partners we have a big partnership with the likes of data dog right number of others so we're gonna continue to partner with the entire ecosystem so we can go solve the problems that they have are you guys gonna show them the white space where they can play is gonna be part of the strategy yeah so it's gonna be across the board you'll see us continue to support the key native ecosystem tremendously and like lean into that and we're already excited to see all the different offerings that are exist on 
Same thing with Kubernetes: we're going to continue to press hard there. On the operational side we've got an offering called OpenCensus, which has lots of traction, and again it's just open monitoring of applications, so we're going to continue to do that across the board. Pali, great to have you on. The vice president of product and design has the keys to the kingdom right here; he's the one running the show for serverless, really the key part of how Kubernetes intersects old and new to create the next generation of applications. Thanks for joining us and sharing the insight. I'm Jeff, here with Stu Miniman, with live coverage of Google Next. More coverage after this short break.
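The OpenCensus point above is about open, vendor-neutral monitoring of applications. As a rough illustration only, and without using the actual OpenCensus API, the following Go middleware sketches the kind of request-latency measurement such a library automates and exports to monitoring backends, for example through integrations like the Datadog partnership mentioned earlier.

```go
// Illustrative request-latency middleware. Instrumentation libraries such as
// OpenCensus (and its successor OpenTelemetry) automate this measurement and
// export it to monitoring backends; this sketch simply logs each request.
package main

import (
	"log"
	"net/http"
	"time"
)

func withLatency(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		// A metrics library would aggregate these timings into
		// distributions rather than logging every call.
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withLatency(mux)))
}
```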

Published Date : Apr 9 2019


Jason McGee, IBM | IBM Think 2019


 

>> Live from San Francisco, it's theCUBE, covering IBM Think twenty nineteen. Brought to you by IBM. >> Welcome back to theCUBE here in Moscone North at IBM Think twenty nineteen. I'm Stu Miniman, and my co-host for the segment is Dave Vellante. We have four days of wall-to-wall coverage of this big show. Happy to welcome back to the program Jason McGee, who is an IBM Fellow and the vice president and CTO of Cloud Platform at IBM. Jason, great to see you. >> Great to be here. >> All right, so, Jason, we spoke with you at KubeCon. We were saying it's a slightly different audience: a little bit bigger here, not as many hoodies and jeans and t-shirts, a little bit more of a business crowd, but we're still talking about cloud. So let's talk about your role here at the show. What's going to keep you busy all week? >> So, I mean, obviously cloud is a huge part of what's going on. I'm talking a lot about both public and private, about hybrid, and about our multicloud management capabilities. You know, in my role as the leader of Cloud Platform, I'm talking a lot about platform as a service and Kubernetes and containers and Istio, and kind of all the new technologies that people are using to help build the next generation of applications. >> All right, so we've had a few interviews today that already talked about some of the multicloud pieces. We had Sandberg on earlier talking about it, so first you're going to help correct anything he got wrong. >> And service meshes have been a really hot conversation the last year or so, Istio and Envoy and the like. Talk to us about where IBM fits into this discussion of service meshes. >> Yeah, so, you know, I think we've been on this kind of journey as an industry over the last few years to build a new app platform, and service meshes fit the part of the problem which is: how does everything talk to each other, how do you actually control that and get visibility into it? You know, IBM has had a founding role in that project; my team at IBM and Google got together with the guys at Lyft to create it. What I'm most excited about, I think, in twenty nineteen is that that technology is really transitioning into something people are using in production in their applications. It's becoming more of the default stack that people are using, really helping them do security and visibility and control over their applications. >> Yeah. One thing that I heard from the community, and I wonder if you could tell me: you know, Istio itself, the governance model, is still not fully in the CNCF. Some of the pieces, as in Envoy, of course, are out there and the like. So where are we, and what needs to happen to move forward? >> Yeah, you're right, we're not there quite yet, and we're pushing hard to make that happen. Certainly from an IBM perspective, we absolutely believe that the CNCF is the right home for Istio, and as you mentioned, some of the pieces like Envoy are already there. You know, the CNCF has done such a tremendous job over the last eighteen months really rallying all the core technologies that make up this new cloud-native platform that we're building. Istio is now out at one-dot-oh and people are using it. That last step needs to happen to get it into the community. >> So I have to ask you, things move so fast in this world. You go back to the OpenStack days, and that was going to change the world. And then Docker containers.
And then Kubernetes, Istio. I can't help but think, okay, this isn't the end of the line. Jason, what's the underlying trend here that's going on in the coding world? >> Yeah, sure, I'll put it maybe in my own lens, given my history. You know, I'm the old WebSphere app server guy, and in the first half of my career I built that, and I think the fundamental problem we're solving is actually exactly the same: how do you build a platform that lets app developers focus on building their apps, while the platform handles all the plumbing and the infrastructure for running those apps? We did that twenty years ago in Java with app servers, and we're doing it now with cloud, and we're doing it on top of containers. Things like Istio, while they're important in their own right, are really actually more important because they're just part of this bigger puzzle we're putting together. And I think the average software developer shouldn't really have to care about what part of this is Istio, which part is Kubernetes, and which part is Knative; all of that needs to come together into a single platform they can use to build their apps and run them securely, right? And I think Istio is just recognizing that next piece. You know, I think we've all agreed on containers and Kubernetes, we all talk about them all the time, and Istio is that next layer for connecting, securing, and controlling things. >> Yeah, so you teed it up nicely, because we want developers to just be able to worry about the application. So you mentioned Knative. The whole serverless trend is one where, you know, the idea of course is I shouldn't have to worry about the infrastructure layer; it should just be taken care of for me. We've talked about it with PaaS for a number of years, and there are various ways to do it. So at theCUBE we've been looking, for about the last year now, at where Kubernetes and serverless fit together, and Knative looks to be a piece to bridge some of those worlds. >> Absolutely. >> Where are we, and what's IBM doing there? >> So I think you rightly say that they should fit together; they're all part of this continuum of how developers build apps. And if you look at serverless applications, the serverless you mention, I'm personally not a big fan of the serverless terminology. I think they're more about event-oriented computing, and how do you have a good model for event-oriented systems today? With Kubernetes and Istio, I think we've built the base platform. With Knative, what we're doing is bringing serverless and also just kind of twelve-factor applications into the fold in a more formal way, and when we get all those pieces together and integrate them, I think developers are really unleashed to just build their application whatever way makes the most sense for what they're doing. For some things serverless is going to be easier, and for some problems straight containers will be an easier way to do it. >> You know, you say you don't like "serverless"; you like "event," or better, "function." So explain that to the audience. Why should we care, and why is that different? How is that different? >> Yeah, I think, for a couple of things. First off, the idea of serverless applies much more broadly than just what we think of as this kind of function-based programming. You know, any system that does a good job of managing and masking the infrastructure below me, you could consider a serverless system, right? So when you just say serverless, it's kind of shorthand for functions. I'd rather we just say functions, because that's actually a different programming model, where you trigger off of events and you write a functional piece of code and the system takes care of those details. You could argue that Cloud Foundry is a serverless system, in the sense that as a developer you just cf push your code and it just runs and it scales and it does whatever you need, right? So part of my mission, part of what I look at a lot, is how do we bring all these things together in a way that is easy for the developer to stay focused? Istio is a great example: one of the things we're announcing this week is managed Istio support as part of our Kubernetes service. What does that really mean? It means the developer can use the capability of Istio without worrying about how to install and run it, which they don't really care about; they just care about how they get value out of its capability. >> Yeah, that's one of the things that, having watched this whole Kubernetes ecosystem and the like: how many companies really need to understand how to build and run this themselves, versus just getting it delivered as a service? What I want out of cloud is a simple model to consume, and to build only the stuff that's important to me, not the rest of it. >> And I think if you look at the industry, there are really two dominant consumption models that have emerged for people actually using these things. There are public cloud platforms delivering things as a service, and then there are platform software stacks, like OpenShift, like IBM Cloud Private, which take all of these pieces and bring them together. I think most developers will consume in one of those two ways, because they don't really want the task of assembling all these pieces themselves. >> To go back to the serverless piece, one distinction I heard made is, okay, if I can really scale it down to zero when I don't need it, then that can be serverless. But there are alternatives coming out, like what Knative has: if I want to run this in my own environment, it's not serverless, because I do need to manage that environment. It might be functions, but the infrastructure is my responsibility, not some service provider's. >> Right. And when you get to serverless, I personally always think of it in two scenarios: there's serverless as a programming model and a technology, and serverless as a business model, a consumption model for payment. I think the programming model part is applicable in lots of cases, including private clouds and your own clusters. The business model part is, I think, frankly, unique to public cloud: the thing that says I can just pay for the milliseconds of CPU and compute that I'm using and nothing more. >> That's a good thing for consumers. >> For the consumer, and it's actually a good thing for cloud providers, because it gives us a way to reuse our infrastructure in creative ways, right?
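As a quick, hypothetical illustration of the trigger-off-an-event model described above, and not IBM's implementation, a function can be as simple as an HTTP handler that the platform (Knative Eventing, for example) invokes with an event payload; the Event fields below are made up for the sketch rather than a real CloudEvents schema.

```go
// Sketch of an event-triggered function: the platform delivers an event as an
// HTTP POST and the "function" is just the handler body. The Event fields are
// hypothetical, not a real CloudEvents schema.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Event struct {
	Type string `json:"type"` // e.g. "object.created" (illustrative)
	Data string `json:"data"` // payload the function acts on
}

func handleEvent(w http.ResponseWriter, r *http.Request) {
	var e Event
	if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
		http.Error(w, "bad event", http.StatusBadRequest)
		return
	}
	// The functional piece of code: react to the event, then return.
	log.Printf("handled %s event: %s", e.Type, e.Data)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/", handleEvent)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```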
But I think first and foremost, we have to get more adoption of it as a programming model that developers use to build their applications, and use it combined with other things, because I think most realistic apps aren't going to be all serverless or all Kubernetes; they're going to be some mix. >> Yeah, right, it's like everything else: what percentage of applications will this take over? We had this discussion with virtualization, we've been having this discussion with cloud, and serverless, of course, is pretty early in that environment. Knative, did I hear there's some announcement this week from IBM? >> So Knative, obviously, is a project that's much earlier in its maturation than something like Istio, but we're making it available as part of our public and private clouds as well, really so people can get started with the ideas of Knative. They get an easy way to get that environment stood up and can start building those applications, and so that's now something we're bringing out as we work in the community to actually mature the project itself. >> Excellent. One of the things everybody is, of course, keeping an eye on, and I saw Arvind Krishna talking about the cloud strategy, is how Red Hat fits into all this. We know you can't talk about much post-acquisition, but Red Hat is involved in Knative, they're involved in a lot of these services, and for developers that's got to be exciting. >> Yeah, it is. And obviously, look, we've been partners for many years on the open source side of things. We've worked closely with Red Hat for a long time, and we actually view the world in very similar ways. Like you said, we're working on Knative together, we've worked together across other open source projects, and we obviously work on Kubernetes together. So personally, I'm pretty excited about them coming in to IBM. Assuming that acquisition goes through, they fit into our strategy really well, and I think it will just enhance what we've all been working to build. >> All right, Jason, what else should we be watching? You talk about the maturity of these solutions; give us some guideposts, for the people watching the industry, as twenty nineteen rolls through. >> So I think there are a couple of things. I think this unified application platform notion that we've been touching on here will really come into its own in twenty nineteen, and I would really love to see people embrace the idea that we don't need three container stacks; we're not trying to build seven of these things. One of the things I'm excited about with Knative is that by bringing serverless and twelve-factor into Kubernetes, it allows each of those frameworks to be the best they can be at their part of the problem space and not solve unrelated problems. I look at the serverless-versus-Kube camps, and the purists in both think all problems will be solved in their camp, which means they try to solve all problems: how do I do stateful systems in serverless, how do I bring in storage, and all these things that maybe containers are better at. So I think this unification that I see happening will allow us to have really high-efficiency twelve-factor and serverless in the context of Kube, and it will change how people are able to use these platforms.
I think twenty nineteen is really about adoption of all of this stuff. You know, we're still really early, frankly, in the container adoption landscape, and I think most people in the broader industry are just getting their feet wet. They all agree, they're all trying, but they're just starting, and there's a lot of interesting work ahead. >> Jason, is there anything holding people back? What do you see as some of the things that might help accelerate this adoption? >> Yeah, I think one of the things holding people back is just the diversity of options that exists in the cloud-native space. I mean, you've all probably seen the CNCF landscape chart; I've never seen so many icons on something in my life. That's really frightening for the average enterprise, to look at a picture like that and go: which of these things are going to be useful, which are going to exist in a year, how do I make those sorts of bets? So I think that's actually held people back a lot. I think the agreement around Kubernetes that happened in the last eighteen months or so was really liberating for a lot of people and helped them move forward. If we can all agree on a few more pieces, around Istio and Knative, it will really help unlock people and get them actually doing it. And I don't think it's anything more than picking a project and starting. A lot of enterprises over-analyze everything; they just need to pick something, go, and learn. >> So pick some narrow use case, pick an app. >> Pick a use case and go do it. You'll learn, and you'll figure out how it works for you, and then you do the second, and the fourth, and the tenth, and before you know it you're on your way. That's what we did at IBM ourselves, and now we're running our entire public cloud on top of Kubernetes. >> Jason, any warnings from that experience that you'd share with users as they look forward? >> Yeah, we had a lot of learnings from it. One is that we could run a heck of a lot more diverse workloads than we thought when we started. We're running databases, we're running data warehouses, we're running machine learning, we're running blockchain; we're running every kind of application you didn't think could ever work on containers, on containers. So one of the lessons was that it's much more flexible than you think it is. The other thing is that you really have to rethink everything: the way you do compliance, the way you do security, the way you monitor the system. All of those things need to change, because the underlying container system enables you to solve them in such a powerful way. So if you go into it thinking you're just going to change this one part of how you do apps and the rest will stay the same, I think you'll find in a year that you're changing the whole operating model around your environment. >> Well, Jason, rethink everything. We're here at IBM Think twenty nineteen. Thanks as always for catching up with us and sharing what's going on. For Dave Vellante, I'm Stu Miniman. We've got three more days of live coverage here from Moscone North. If you're here, stop by and say hi, or reach out to us on the interwebs. Thanks so much for watching theCUBE.

Published Date : Feb 12 2019

