
Search Results for Linkerd:

William Morgan, Buoyant | KubeCon + CloudNativeCon Europe 2022


 

>> theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation.
>> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, alongside Enrico Signoretti, senior IT analyst at GigaOm. Welcome back to the show, Enrico.
>> Thank you again for having me here.
>> First impressions of KubeCon?
>> Well, great show. As I mentioned before, I think we are really in this very positive mode of talking with each other, and people want to see the projects and the people who build the projects. It's amazing. A lot of interesting conversations on the show floor and in the various sessions. A very positive mood.
>> So this is going to be a fun one. We have some amazing builders on the show this week, and none other than William Morgan, CEO of Buoyant. What's your role in the Linkerd project?
>> So I was one of the original creators of Linkerd, but at this point I'm just the beautiful face of the project.
>> Speaking of the beautiful face of the project, Linkerd just graduated as a CNCF project.
>> Yeah, that's right. Last year we became the first service mesh to graduate in the CNCF. We're very proud of that, and that's thanks largely to the incredible community around Linkerd that is excited about the project, wants to talk about it, and wants to be involved.
>> So let's talk about the significance of that. Linkerd is not the only service mesh project out there, so talk to me about the level of effort to get it to the point that it graduated. You don't see too many projects graduating in the CNCF in general, so let's talk about the work needed to get Linkerd to this point.
>> Yeah. So the bar is high, and it's mostly a measure not of the project being technically good or bad, but of the maturity of the community around it. Is it being adopted by organizations that really rely on it in a critical way? Is it being adopted across industries? Is it having a significant impact on the cloud native community? For us, the work involved in that was really no different from the work involved in maintaining Linkerd and growing the community in the first place: you try to make it really useful, you try to make it really easy to get started with, and you try to be supportive and have a friendly and welcoming community. If you do those things, you naturally get to the point where it's a really strong community full of people who are excited about it.
>> So from the point of view of users adopting this technology, are we talking about everybody, or do you see mostly large organizations with large Kubernetes clusters and infrastructure adopting it?
>> The answer to that has changed a little bit over time, but at this point we see Linkerd adoption across industries, across verticals, and from very small companies to very large ones. One of the talks I'm really excited about at this conference is from the folks at Xbox Cloud Gaming, who are going to talk about how they deployed Linkerd across 22,000 pods around the world to serve, basically, on-demand video games. Never a use case I would have imagined for Linkerd. And at the previous KubeCon, the virtual KubeCon EU, we had a whole keynote about how Linkerd was used to combat COVID-19. So all sorts of uses, and whether it's a small cluster or a large cluster, it's equally applicable.
>> Wow. So as we talk about the Linkerd service mesh, we're obviously going to talk about security, application control, et cetera. But in this climate, the software supply chain is critical, right? As we think about the open source software supply chain, talk to us about the recent security audit of Linkerd.
>> Yeah. So one of the things we do as part of being a CNCF project, and also as part of our relationship with our community, is regular security audits, where we engage security professionals who are very thorough and dig into all the details. Of course the source code is all out there, so anyone can read through the code, but they'll build threat model analyses and things like that, and then we take their report and we publish it. We say, hey, look, here's the situation. So we have earlier reports online, and this newest one was done by a company called Trail of Bits. They built a whole threat model and looked through all the different ways that Linkerd could go wrong. And they always find issues. Of course, it would be very scary, I think, to get a report that said, no, we didn't find anything, everything's fine, should be okay. But they did not find anything critical. They found some issues that we rapidly addressed, and then everything gets written up in the report and we publish it as part of an open source artifact.
>> Let's say, do they give you a heads-up? So if something happens, you can act on the code before somebody else discovers it.
>> Yeah, they'll give you a preview of what they found. And often it's not like you go before the judge, the judge makes a judgment, and then you're off to jail. It's a dialogue, because they don't necessarily understand the project, and they definitely don't understand it as well as you do. So you are helping them understand which parts are interesting to look at from a security perspective and which parts are not that interesting. They do their own investigation, of course, but it's a dialogue the entire time. So you do have an opportunity to say, you told me that was a minor issue, I actually think it's larger, or vice versa: you think that's a big problem, but actually we thought about it and it's not a big problem because of whatever. So it's a collaborative process.
>> So Linkerd has been around a long time. When I first learned about service mesh, Linkerd was the project I learned about. But you just mentioned 22,000 pods, which is mind-boggling.
>> Clusters would be...
>> Clusters would be great too, but 22,000 pods is a big deployment. That's the big deployment of Linkerd, but it goes all the way down to the smallest set of pods as well. What are some of the recent project updates, the learnings you brought back from the community and used to update the project?
>> Yeah. So a big one for us, on the topic of security: a big driver of Linkerd adoption is security, less on the supply chain side and more on live traffic security. So things like mutual TLS, so you can encrypt the communication between pods and make sure it's authenticated. One of the recent feature additions is authorization policy, so you can lock down connections between services and say that service A is only allowed to talk to service B. And I want to do that not based on network identity, not based on IP addresses, because those are spoofable, and as an industry we've gotten a little more advanced than that, but based on the workload identity, as captured by the mutual TLS certificate exchange. So we now give you the ability to restrict the types of communication that are allowed to happen on your cluster.
>> So, okay, that's what has happened. What about the future? Can you give us a suggestion of what is going to happen in the medium and long term?
>> I think we're done. We graduated, so we're just going to...
>> Stop? There's...
>> What else is there to do? There's no grad school. No, no. For us there's a clear roadmap ahead, continuing down the security realm for sure. We've given you the very first building block, which is at the service level, but coming up in the 2.12 release we'll have route-based policy as well, so you can say this service is only allowed to call these three routes on this endpoint. And we'll be working later on things like mesh expansion, so we can run the data plane outside of Kubernetes: the control plane will stay in Kubernetes, but you'll be able to run the data plane on VMs and things like that. And then, of course, I like to make fun of Wasm a lot, but we are actually starting to look at Wasm and the ways it might actually be useful for Linkerd users.
>> So we talk a lot about the flexibility of a project like Linkerd. You can do amazing things with it from a security perspective, but we're still talking to a DevOps crowd of developers who are spread thin across their skill set. How do you balance the need for flexibility, which usually means more nerd knobs, with serving a crowd that wants even higher levels of abstraction and simplicity?
>> Yeah, that's a great question, and this is what makes Linkerd so unique in the service mesh space. We have a laser focus on simplicity, and especially on operational simplicity. We can make it easy to install Linkerd, but what we really care about is when you're running it, you're on call for it, and it's sitting in this critical, vulnerable part of your infrastructure: do you feel confident in it? Do you feel like you understand it? Do you feel like you can observe it? Do you feel like you can predict what it's going to do? Every aspect of Linkerd is designed to be as operationally simple as possible. So when we deliver features, that's always our primary consideration, and we have to reject the urge. We have an urge as engineers to want to build everything, the ultimate platform to solve all problems, and we have to be disciplined and say, we're not going to do that. We're going to look at solving the minimum possible problem with the minimum set of features, because we need to keep things simple. And then we need to look at the human aspect of that. I think that's been a part of Linkerd's success. And then on the Buoyant side, of course, I don't just work on Linkerd; I also work on Buoyant, which helps organizations adopt Linkerd, and increasingly those are large organizations that are not service mesh experts and don't want to be service mesh experts. They want to spend their time and energy developing their business and building the business logic that powers their company. So for them we recently introduced fully managed Linkerd. Even though Linkerd has to run on your cluster, right, the sidecar proxies have to be alongside your application, we can take on the operational burden of upgrades, trust anchor rotation, and installation, and you can effectively treat it as a utility and have a hosted-like experience, even though the actual bits, at least most of them, have to live on your cluster.
>> I love the focus of most CNCF projects. It's peanut butter or jelly, not peanut butter trying to become jelly, right? What's the peanut butter to Linkerd's jelly? Where does Linkerd stop, and what are some of the things customers should really consider when looking at service mesh?
>> Yeah, that's a great way of looking at it, and I actually think that philosophy comes from Kubernetes. One of the reasons Kubernetes itself was so successful is that it clearly delineated what it was going to do and what it was not going to do. It does layer three and four networking, but it stops there; it doesn't do anything with layer seven. And that allowed the service mesh to exist. So if I go down this path, the bread of the sandwich is Kubernetes, Linkerd is the peanut butter, and the jelly is every other aspect of building a platform. If you are the audience for Linkerd, most of the time you're a platform owner: you're building an internal platform for your developers to write code. As part of that, of course, you've got Kubernetes and you've got Linkerd, but you've also got a CI/CD system, a code repository, whether it's GitLab or GitHub or wherever, and other tools that are enforcing various other constraints. All of that is the jelly in the platform sandwich that you're serving, and this analogy is getting complicated now.
>> So talk to us about trends in service mesh as we think about the macro picture.
>> Yeah. It's been an interesting space, because, as we were talking about a little before the show, there was so much buzz, and then what we saw was that it took basically two years for that buzz to become actual adoption. Now a lot of the buzz is off on other exciting things, and the people who remain in the Linkerd space are very focused on, I actually have a real problem that I need to solve, and I need to solve it now. So that's been great. In terms of broader trends, one thing we've seen for sure is that the service mesh space is kind of notorious for complexity, and a lot of what we've been doing on the Linkerd side has been trying to reverse that idea, because it doesn't actually have to be complex. There's interesting stuff you can do, especially when you get into the way we handle the sidecar model. It's actually a wonderful model operationally. It feels weird at first, and then you're like, oh, actually this makes my operations a lot easier. So a lot of the trends I see, at least for Linkerd, are doubling down on the sidecar model, trying to make sidecars as small and as thin as possible, and trying to make them transparent to the rest of the application.
>> Well, William Morgan, one of the coolest Twitter handles I've seen, @wm on Twitter, that's actually a really cool Twitter handle, CEO of Buoyant, thank you for joining theCUBE again, Cube alum. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti, and you're watching theCUBE, the leader in high tech coverage.
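
A rough sketch of the authorization idea William describes above: allow or deny a call based on the caller's workload identity, the name bound to its verified mTLS certificate, rather than its IP address. The identity strings and the policy table are invented examples, and this is only a conceptual Python sketch, not Linkerd's actual policy resources or enforcement path.

    # Hypothetical policy table: which client identities may call which server.
    # In a real mesh the server-side proxy enforces this; here it is just a dict.
    ALLOWED_CALLERS = {
        "service-b.default": {"service-a.default"},
    }

    def authorize(server_identity: str, client_identity: str) -> bool:
        """Allow the call only if this client workload may talk to this server."""
        return client_identity in ALLOWED_CALLERS.get(server_identity, set())

    # service-a may call service-b; an unknown workload may not, regardless of IP.
    assert authorize("service-b.default", "service-a.default")
    assert not authorize("service-b.default", "service-c.default")

Doing this at the mesh layer means the identity comes from the certificate exchange, so the rule still holds when pod IP addresses change or are spoofed.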

Published Date : May 18 2022

SUMMARY :

William Morgan, CEO of Buoyant, joins Keith Townsend and Enrico Signoretti at KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain. They discuss Linkerd becoming the first service mesh to graduate in the CNCF, adoption ranging from small clusters to Xbox Cloud Gaming's 22,000-pod deployment, the recent Trail of Bits security audit, mutual TLS and the new authorization policy based on workload identity, the roadmap toward route-based policy in the 2.12 release, mesh expansion, and WebAssembly, and Linkerd's focus on operational simplicity, including Buoyant's fully managed Linkerd offering.


William Oliveira & Brian "Redbeard" Harrington, Red Hat | KubeCon 2018


 

>> Announcer: Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. (techno music)
>> Okay, welcome back everyone. We are live in Seattle for KubeCon and CloudNativeCon 2018, theCUBE's live coverage, three days. Day one of a full house event here, about 8,000 people, doubled from last year. I'm John Furrier with Stu Miniman. Our next two guests are from Red Hat. Great to have these guys as our guests, and also thanks to Red Hat for being great sponsors: Brian "Redbeard" Harrington, Cube alumni, back again, product manager of Service Mesh at Red Hat, and William Oliveira, product manager for Serverless at Red Hat; we'll hear a lot about that. You guys, first of all, thanks for coming on, and thanks to your company, Red Hat, for being a great supporter of theCUBE and the community. The contributions you guys have helped make, we really appreciate that. Thank you.
>> Absolutely delighted to be here.
>> Happy to be here.
>> John Furrier: All right, so let's get into it. Service meshes are hot because Kubernetes, we're seeing, has totally stabilized, and now you start to see the engineering and the value creation happening in layers, shim layers as they call them here, stateful applications. So you're starting to see service meshes conceptually adopted. Give us a quick update on where that is: how real is it, what's the progress, and what are some of the state-of-the-art activities around it?
>> Brian "Redbeard" Harrington: Well, the beautiful thing is, using a service mesh is not anything new at all. I mean, it was really built on top of the Netflix OSS ideas. They've been around for seven, eight years now. It's really just decomposing what were a bunch of individual libraries that you had to implement into more infrastructure services, so that, regardless of the language, environment, et cetera, you've always got a certain base platform ready to go.
>> John Furrier: Is service mesh going to be a standard thing? Is it going to be service meshes of your flavor, are there going to be certain custom service instances? How do you see that coming out with Istio, Knative? There are things evolving.
>> Brian "Redbeard" Harrington: Mmhm, yeah.
>> What's the state there? Is that going to be the new normal, or is it going to see some settling? What's your view on that?
>> Brian "Redbeard" Harrington: I think to some extent it depends on the scale that you're at. If you are at the scale of Yelp or Stripe, one of those, and using Envoy, you already have a good idea of what that mesh is going to look like, so you're building that control plane in the way that you need it. Where Istio and Linkerd and some of the other ones come in is when you are at a smaller scale and you need to figure out what your control plane is going to look like. That's where it really shines, because it gives you something you can just start using, with some training wheels on it, to make sure you've got a stable platform to use from day one.
>> Stu Miniman: So one of the other news items today I wanted to get your opinion on: etcd has been handed over to the Linux Foundation and the CNCF. etcd came out of CoreOS, of course, which was acquired by Red Hat. Give us a little bit of the update as to why that happened and why it's a good thing for the community.
>> So I think for any stable platform, and it's really been the theme of what I've been talking about, you've got to know that it's safe to use the software, that there's going to be a longer-term vision, and a lot of community guidance around it, and that's why Red Hat made the contribution. When we were at CoreOS we really wanted to, and it was ultimately a goal, but it kind of became a little bit of a race condition: do we go ahead and contribute it, and then hope that other folks will join us in building it? Just by open sourcing it, we saw contributions from IBM around the PowerPC architecture and Mesos, and other groups coming in, but putting it full-bore into the CNCF really guarantees that there will be ongoing community collaboration.
>> John Furrier: Just to give a shout-out to you guys at CoreOS, you did an amazing job, and I think this is a benefit of the Red Hat relationship, because that's the startup dilemma you have: do we get it in there, how do we support it, how do we make it better, is it competitive, was our focus what we optimized it for? But now with the Red Hat piece you guys can lean back, do the right thing, and get it in there with the right resource push. Is that kind of how it's evolving? Because that seems like what's--
>> It absolutely is. And this goes beyond just etcd. The really rad thing is that I think it's safe to say there is no part of the CoreOS portfolio that isn't getting open sourced. You can read into that what you will, but it meant that there was no technology getting left behind, and that our users who felt passionately about pieces of software are, again, going to be able to have that utility.
>> Stu Miniman: I think it goes back, we've been at Red Hat Summits for many years, and Red Hat is a hundred percent open source, it must be, and even going back to Polvi and yourself and Brandon, all of the tools CoreOS was creating were going to be open source tools that you would be involved in. I guess, William, a good point to bring you into the conversation: Serverless has been fully open source, or at least something you've been thinking about for the last couple of years. So before we get into Knative, give us the Red Hat positioning: where does Serverless fit into the architecture? And then we'd love to tease out all of the Knative discussion.
>> Absolutely. For us, Serverless is a lot about the user experience and how we can simplify how developers leverage technology such as Istio and service meshes and everything around the developer experience on top of Kubernetes. Serverless can deliver that, and a lot of what we believe is that it should not be tied too much to functions, because we can do that for functions, but we can do it for any class of application actually running on top of the platform. That's a lot of why we believe Knative is this powerful, interesting project going on out there right now. We already have all these different players collaborating, which is fantastic for interoperability; we make sure we can leverage that implementation on different platforms, we can run it pretty much anywhere on top of Kubernetes, and that's a big goal: to make sure you can plug all these different parts together as part of a consistent user experience.
>> Stu Miniman: Okay, so we had theCUBE at the Google event this summer when it was announced, and I was at the Serverless conference this year, and to be honest, a lot of people were kind of scratching their heads trying to understand: okay, Serverless and Kubernetes are going together, but I'm not sure I quite get it. Give us the update: where are we, when does this get baked into platforms, what can I do today, where do I learn more?
>> Today, the three big modules that are part of Knative are build, eventing, and serving. Those are the basic capabilities for you to build a serverless platform that can, again, work on any kind of application, not only functions, and we are at that stage. The project is very new, we are still on the 0.2 release at this point, so there are a lot of missing parts around user experience and whatnot, but we are getting there, and that's where most of the focus is going right now. But with something like eventing, that's a perfect opportunity, for example, to integrate with all the different services we have available, let's say in the Service Catalog or through the Operator Framework, and connect them to the applications that you are building on top of Kubernetes. That was part of what was missing to connect the dots when you're implementing those applications: how are you going to consume events, how are you going to consume services, how are those applications going to scale? That's a lot of what we're addressing with Knative right now.
>> What's the big takeaway from the current event here at KubeCon? We hear maturity, great, check. A lot of people are fine in their swim lanes, or whatever their value layer is, check. But a lot more gaps and white space start to appear when that visibility lifts. What do you guys see as the opportunities for the community, and for you guys, certainly one of the big players, Red Hat, leading the way, as this ecosystem grows? I mean, companies I've never heard of are coming out of the woodwork. This is vibrant! There are opportunities for people to play in these white spaces. Do you have any thoughts on where you could give guidance on where people could jump in and create value?
>> Well, there are two areas that are really fascinating to me. One is the fact that now that Kubernetes has gotten to the level of boring infrastructure, there are a lot more companies that are comfortable saying, "We're building on top of that; we don't care what the compute layer is, because we just know." So you see a lot of organizations coming in because they want to collaborate with other organizations and see how they're using it, to cross-pollinate and get new ideas. That's why you've got full retail companies like Nordstrom here, the local brand in town, happy to come and show off. And, to the second piece of that, you've also got a lot of emerging companies finding areas of white space that we, the incumbents in the space, didn't consider, and they're providing direct value. As we've seen a lot more acquisitions coming through the space, there's going to be a lot of opportunity for the organization that has that five, ten, fifty million dollar idea to come in, build it quickly, know that it works on top of Kubernetes, and then be able to port it to enterprise software that runs on a local cluster or across clouds.
>> John Furrier: So new business model innovations are coming out of it as well, hence opportunities. It's okay to have a fifty million dollar business.
>> Yes.
>> Not bad, and it could be acquired as well, so there's other value there. Okay, microservices are hard to manage. Guys, talk about this dynamic. This is one of the things you guys really work hard to address, I know; we hear a lot about it. Porting to microservices: "Hey, I'm in the enterprise! We should move from our Red Hat Linux implementation to full cloud, and then it's going to go all the way to microservices." Well, what the hell are microservices? So again, I'm not saying they're thinking that way, but this is not that easy. How do you guys make it easier? What are some of the speed bumps that customers hit, and what are the things to overcome them? What's your view on that?
>> William Oliveira: I'll talk first about how Knative is contributing to that. Again, the whole point of not being tied to functions is that I want to leverage the serverless capabilities available in the platform for microservices as well. And whenever you're talking about monitoring, tracing, and observability, Istio comes into play, solves that problem, and connects all those different microservices in a very nice way. With Knative, we can improve the user experience so you can do that in a very easy way. When you are coming from brownfield applications, when you are migrating to the cloud, when you are trying to port those applications, it's a big learning curve; you've got to learn about all these different technologies. So if we can improve that user experience, so you can do what you do best, which is focus on your code, then we can take care of a lot of the complexity of building and wiring together all these different parts of the platform. That's a lot of what we are doing with Serverless.
>> That's where the managed piece comes in, right?
>> William Oliveira: Right.
>> And then the monitoring, that's part of it too?
>> Yeah. Well, to build on top of that, there are organizations that still want to design things the way they've been doing it. And we've had a big focus with a project called Red Hat OpenShift Application Runtimes, or RHOAR, which goes more in the direction of the PaaS concept, which is a big difference between OpenShift and Tectonic, for example. Through that, a lot of the RHOAR bundles for Python and Java and Node.js integrate the concepts of distributed tracing and Prometheus monitoring and things like that, to make sure that you focus, again, to William's point, on building the thing that brings your business value, standing on the shoulders of software at the infrastructure level.
>> That's great stuff, and there's a lot more work to do.
>> Yeah, just the last thing. I know Red Hat's been working on trying to, I don't know if you call it "templatize", but how do I make it easier for people? I'm trying to remember the name of the term for it.
>> Yeah, so it's the OpenShift Application Runtimes, having what used to be the gear in the old OpenShift realm. It's just: here is a great template, a package to start from, so that you can go in and implement the things you care about, and really step into, "Okay, we know the code's going to work okay, because we built that; we know the application platform is going to be predictable; we know we have all of these additional hooks to manage it." So hopefully it lowers the bar and makes it trivial to get started.
>> That's awesome. Well, Redbeard and William, thanks for coming on theCUBE, really appreciate it. Just a quick plug: what's up next for you guys? What's on the horizon? What itch are you scratching these days? What's getting you motivated?
>> The big thing that's exciting for me is the forthcoming release of OpenShift 4.0, which gives me the room to shine on the GA release of all the service mesh stuff. And then, kind of in parallel, a lot of the vector packet processing, FD.io, high-scale networking stuff just sends a tingle up my spine. I love keeping an eye on that.
>> For me, we just announced a preview of Knative on OpenShift as an add-on, so you can just install and run it on OpenShift, and, like Redbeard said, I'm looking forward to 4.0 as well, to make sure I can plug that user experience on top of 4.0. We're already doing a lot for the ops side, and I'd like to do that now for developers as well.
>> Well, when you're ready, we'll pop a digital cork on Twitter. Let us know, and we'll certainly cover it. Thanks for coming on, appreciate the insight.
>> We'll bring you the insights and all the data here at KubeCon and CloudNativeCon. Of course we're theCUBE, not to be confused with KubeCon, one of our conferences coming; only kidding, we're not going to do that. Thanks for watching day one, live coverage. Stay with us for more coverage after this short break. (techno music)

Published Date : Dec 11 2018

SUMMARY :

Brian "Redbeard" Harrington and William Oliveira of Red Hat join theCUBE at KubeCon + CloudNativeCon North America 2018 in Seattle. They discuss the state of service mesh and Envoy-based control planes such as Istio and Linkerd, Red Hat's contribution of etcd to the CNCF following the CoreOS acquisition, how Knative's build, eventing, and serving components bring serverless capabilities to any application on Kubernetes, and how OpenShift, the RHOAR runtimes, and the upcoming OpenShift 4.0 release aim to simplify the developer experience.


Matt Klein, Lyft | KubeCon 2017


 

>> Narrator: Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners.
>> Welcome back everyone, live here in Austin, Texas, for theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Stu Miniman, our analyst. And next is Matt Klein, a software engineer at Lyft, the ride-hailing service, car sharing, social network, great company; everyone knows and loves Lyft. Thanks for coming on.
>> Thanks very much for having me.
>> All right, so you're a customer of all this technology. You guys built, and I think this is like the shiny use case of our generation, entrepreneurs and techies building their own stuff because they can't get product from the general market. You had large-scale demand for the service, you had to go out and build your own with open source and all those tools. You had a problem you had to solve, you built it, used some open source, and then gave it back to open source and became part of the community, and everybody wins; you donated it back. This is the future, this is what it's going to be like, great community work. What problem were you solving? Obviously Lyft, everyone knows it's hard: they see their car, there's a lot of real time going on, a lot of stuff happening.
>> Matt: Yeah, sure.
>> Magic's happening behind the scenes; you had to build that. Talk about the problem you solved.
>> Well, I think when people look at Lyft, like you were saying, they look at the app and the car, and I think many people think it's a relatively simple thing. Like, how hard could it be to bring up your app and say, I want a ride, and get that car from here to there? But it turns out that it's really complicated. There are a lot of real-time systems involved in actually finding all the cars that are near you, and the fastest route, all of that stuff. So I think what people don't realize is that Lyft is a very large real-time system that, at current scale, operates at millions of requests per second and has a lot of different use cases around databases and caching, all those technologies. So Lyft was built on open source, as you say, and Lyft grew from what I think most companies do, which is a very simple monolithic stack: it starts with a PHP application, we're a big user of MongoDB, and some load balancer, and then, you know--
>> John: That breaks. (laughs)
>> Well, no, but people do that because that's what's very quick to do. And I think what happened, like at most companies that become very successful, is that Lyft grew a lot, and like the few companies that can become very successful, it started to outgrow some of that basic software, the basic pieces it was actually using. So as Lyft started to grow a lot, things just stopped working, so then we had to start fixing and building different things.
>> Yeah, Matt, scale is one of those things that gets talked about a lot, but Lyft really does operate at a significant scale.
>> Matt: Yeah, sure.
>> Maybe you can talk a little bit about what kinds of things were breaking,
>> Matt: Absolutely, yeah.
>> and then what led to Envoy and why that happened.
>> Yeah, sure. I think there are two different types of scale, and this is something people don't talk about enough. There's scale in terms of the things people usually talk about, like data throughput or requests per second. But there's also people scale, right? As organizations grow, we go from 10 developers to 50 developers to 100, and Lyft is now many hundreds of developers and continuing to grow. What I think people don't talk about enough is that human scale: we have a lot of people trying to edit code, and at a certain size that number of people can't all be editing the same code base. That's the biggest reason people start moving towards a microservice or service-oriented architecture: you split things apart to get people scale. People scale usually comes along with requests-per-second scale and data scale, but these problems come hand in hand: as you grow the number of people, you start going to microservices, and then suddenly you have actual scale problems. The database is not working, or the network is not actually reliable. So, from the Envoy perspective: Envoy is an open source proxy we built at Lyft. It's now part of the CNCF, and it's having tremendous uptake across the industry, which is fantastic. The reason we built Envoy is that what we're seeing in the industry is people moving towards polyglot architectures, architectures with many different applications and many different languages. It used to be that you could use Java and have one particular library that would do all of your networking and service discovery and load balancing, and now you might have six different languages. So how, as an organization, do you actually deal with that? What we decided to do was build an out-of-process proxy, which lets people put a lot of functionality in one place: load balancing, service discovery, rate limiting, buffering, all those kinds of things, and, most importantly, observability, so things like tracing and stats and logging. That allowed us to actually understand what was going on in the network, so that when problems were happening, we could debug what was going on. What we saw at Lyft about three years ago is that we had started our microservices journey, but it had almost stopped, because people had started to build services, since supposedly that was faster than the monolith, but then we started having problems with tail latency and other things, and they didn't know how to debug them. So they didn't trust those services, and at that point they'd say, not surprisingly, we're just going to go back and build it into the monolith. So we were almost in that situation where things were kind of at that split.
>> So, Matt, I have to think that's the natural path that led you to service mesh, and Istio specifically, with Lyft, Google, and IBM all working on that. Talk a little bit more about Istio. It was really the buzz coming in with service mesh, and there are also some competing offerings out there; Conduit, a new one, was announced this week. Maybe give us the landscape, kind of where we are, and what you're seeing.
>> So service mesh, it's incredible to look around this conference: I think there are 15 or more talks on service mesh between all of the Buoyant talks on Linkerd and Conduit, and Istio and Envoy. It's super fantastic. I think the reason service mesh is so compelling to people is that we have these problems where people want to build in five or six languages, they have some common problems around load balancing and other types of things, and this is a great solution for offloading some of those problems into a common place. The confusion I see right now around the industry is that service mesh is really split into two pieces: the data plane, so the proxy, and the control plane. The proxy is the thing that actually moves the bytes, moves the requests, and the control plane is the thing that tells all the proxies what to do: tells them the topology, the configurations, all the settings. So the landscape right now is essentially that Envoy is a proxy, a data plane, and Envoy has been built into a bunch of control planes: Istio is a control plane whose reference proxy is Envoy, though other companies have shown that they can integrate with Istio; Linkerd has shown that, NGINX has shown that. Buoyant just came out with a new combined control plane and data plane service mesh called Conduit, which was brand new a couple of days ago, and I think we're going to see other companies get in there, because this is a very popular paradigm, so having the competition is good. I think it's going to push everyone to be better.
>> How do companies make sense of this? I mean, if I'm just a boring enterprise with complexity and legacy, I have a lot of stuff, maybe not the kind of scale in terms of transactions per second, because they're not Lyft, but they still have a lot of stuff. They've got servers, they've got a data center, they've got stuff in the cloud, and they're trying to put this cloud native package in, because the developer movement is clearly pushing the legacy guys, the old guard, into cloud. So how does your stuff translate into the mainstream? How would you categorize it?
>> Well, what I counsel people is, and I think this is actually a problem we have within the industry, that I think we sometimes push people towards complexity they don't necessarily need yet. And I'm not saying all of these cloud native technologies aren't great; people here are doing fantastic things.
>> You know how to drive the car, so to speak; you don't need to know how the tech works.
>> Right. And I advise companies and organizations to use the technology and the complexity that they need. Service mesh and microservices and tracing and a lot of the stuff being talked about at this conference are very important if you have the scale to have a service-oriented microservice architecture. And some enterprises are segmented enough that they may not actually need a full microservice real-time architecture. So the thing to actually decide is, number one, do you need a microservice architecture? And it's okay if you don't; that's just fine. Take the complexity that you need. If you do need a microservice architecture, then you're going to have a set of common problems around things like networking and databases, and then yes, you are probably going to need to bring in more complicated technologies to deal with that. But the key takeaway is that as you bring on more complexity, the complexity is a snowballing effect: more complexity yields more complexity.
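
To make the data plane and control plane split Matt describes a little more concrete, here is a toy Python sketch: the control plane owns the topology and pushes a snapshot to every registered proxy, and each proxy consults only its last-received snapshot when it forwards a request. The class names and the push mechanism are invented for illustration; this is not Envoy's xDS API or Istio's implementation.

    import itertools

    class Proxy:
        """Data plane: forwards requests using whatever config was last pushed."""
        def __init__(self):
            self._cycles = {}            # service name -> round-robin iterator

        def apply_config(self, endpoints):
            # Replace the routing snapshot wholesale, as pushed by the control plane.
            self._cycles = {svc: itertools.cycle(addrs)
                            for svc, addrs in endpoints.items()}

        def route(self, service):
            # Pick the next known endpoint for this logical service.
            return next(self._cycles[service])

    class ControlPlane:
        """Control plane: holds desired state and tells every proxy about it."""
        def __init__(self):
            self._proxies = []
            self._endpoints = {}         # service name -> list of addresses

        def register(self, proxy):
            self._proxies.append(proxy)
            proxy.apply_config(self._endpoints)

        def set_endpoints(self, service, addrs):
            self._endpoints[service] = list(addrs)
            for proxy in self._proxies:  # push the new snapshot everywhere
                proxy.apply_config(self._endpoints)

    cp = ControlPlane()
    sidecar = Proxy()
    cp.register(sidecar)
    cp.set_endpoints("payments", ["10.0.0.5:8080", "10.0.0.6:8080"])
    assert sidecar.route("payments") in {"10.0.0.5:8080", "10.0.0.6:8080"}

The value of the split is that the two halves can evolve independently: one data plane such as Envoy can sit underneath several different control planes.
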
>> So, Matt, this might be a little bit out of bounds for what we're talking about, but when I think about autonomous vehicles, that's just going to put even more strain on these kinds of distributed systems, things that have to live at the edge. Are we laying the groundwork at a conference like this? How's Lyft looking at this?
>> For sure. We're obviously starting to look into autonomous a lot, and obviously Uber's doing that a fair amount, and if you start looking at the sheer amount of data generated by these cars when they're moving around, it's terabytes and terabytes of data. When you start thinking through the complexity of ingesting that data from the cars into a cloud and actually analyzing it and doing things with it, either offline or in real time, it's pretty incredible. So yes, I think these are just more massive-scale real-time systems that require more data, more hard drives, more networks, and as you manage more things with more people, it becomes more complicated for sure.
>> What are you doing inside Lyft, your job? Obviously you're involved in open source. What are you coding specifically these days? What's the current assignment?
>> Yeah, so I'm a software engineer at Lyft, and I lead our networking team. Our networking team owns, obviously, all the stuff we do with Envoy; we own our edge system, so basically how internet traffic comes into Lyft; all of our service discovery systems; rate limiting; auth between services. We're increasingly owning our gRPC communications, so how people define their APIs, moving from a more polling-based API to a more push-based API. Our team essentially owns the end-to-end pipe from all of our back-end services to the client, so that's APIs, analytics, stats, logging.
>> So, to the app.
>> Yeah, right, to the app, on the phone. So that's my job. I also help a lot with general infrastructure architecture; we're increasingly moving towards Kubernetes, which is a big thing we're doing at Lyft. Like many companies in Lyft's age range, we started on VMs in AWS, and we used SaltStack, and it's the standard story from companies that are probably six or eight years old.
>> Classic devops.
>> Right, and--
>> Gen one devops.
>> And now we're trying to move into, as you say, the gen two world, which is pretty fantastic. So this is probably becoming the most applicable conference for us, because we're obviously doing a lot with service mesh, and we're leading the way with Envoy. But as we integrate with technologies like Istio and increasingly use Kubernetes and all of the related technologies, we are trying to get rid of all of the bespoke stuff that many companies like Lyft had, and we're trying to get on that general train.
>> I mean, you guys, this is going to be written in the history books; you look at this time in a generation, this is going to define open source for a long, long time. I say gen one, which kind of sounds pejorative, but it's not. You really needed to build your own; you couldn't just buy an Oracle database. You probably have some Oracle in there, but you build your own. Facebook did it, you guys are doing it. Why? Because you're badass, you had to. Otherwise you don't get customers.
>> Right, and I absolutely agree about that. I think we are in a very unique time right now, and I actually think that if you look out 10 years, and you look at some of the services that are coming online, like Amazon just did Fargate, that whole container scheduling system, and Azure has one, and I think Google has one, the idea there is that in 10 years' time people are really going to be writing business logic, and they're going to insert that business logic...
>> They may do PowerPoint slides.
>> That would be nice.
>> I mean, it should be easy, like PowerPoint: it's so easy. I'm not going to say that's coding, but that's the way it should be.
>> I absolutely agree, and we'll keep moving towards that, but the way that's going to happen is that more and more plumbing, if you will, will get built into these clouds, so that people don't have to worry about all this stuff. But we're in this intermediate time where people are building these massive-scale systems, and the pieces they need are not necessarily all there.
>> I've been saying on theCUBE now for multiple events, all through this last year, and it kind of crystallized when we were talking with Kelsey Hightower yesterday: craft is coming back to programming. So you've got software engineering, and you've got craftsmanship. There's real software engineering being done; it's engineering. Application development is going to go back to the old school of real craft. I mean, Agile, all it did was create a treadmill of de-risking rapid build at scale by listening to data and constantly iterating, but it kind of took the craft out of it.
>> I agree.
>> But that turned into engineering. Now you have developers working on, say, business logic, or just solving a problem, building a healthcare app. That's just awesome software. Do you agree with this craft idea?
>> I absolutely agree, and actually what we say about Envoy, kind of the catchword buzz phrase of Envoy, is to make the network transparent to applications. I think most of what's happening in infrastructure right now is about getting back to a time where application developers can focus on business logic and not have to worry about how some of this plumbing actually works. What you see around the industry right now is that it is just too painful for people to operate some of these large systems. I think we're heading in the right direction, all of the trends are there, but it's going to take a lot more time to actually make that happen.
>> I remember when I was graduating college in the 80s, it sounds old, but, not to date myself, the jobs were for software engineering. That is what they called it, and now we're back to it: devops brought it, cloud, the systems kind of engineering, really at a large scale, because you've got to think about these things.
>> Yeah, and I think what's also kind of interesting is that companies have moved toward this devops culture, expecting developers to operate their systems and be on call for them, and I think that's fantastic, but what we're not doing as an industry is actually teaching and helping people how to do this. We have this expectation that people know how to be on call, how to make dashboards, how to do all this work, but they don't learn it in school, and they come into organizations where we may not help them learn these skills.
>> Every company has different cultures; that complicates things.
>> So I think, as an industry, we're also figuring out how to train people and how to help them actually do this in a way that makes sense.
>> Well, fascinating conversation, Matt. Congratulations on all your success. Obviously a big fan of Lyft; one of the board members gave a keynote, she's from Palo Alto, from Floodgate. Great investors, great fans of the company. Congratulations, great success story, and again, open source: this is the new playbook, community scale, contribution, innovation. TheCUBE is doing its share here, live in Austin, Texas, for KubeCon, the Kubernetes conference, and CloudNativeCon. I'm John Furrier, with Stu Miniman; we'll be back with more after this short break. (futuristic music)
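
As a closing illustration of the out-of-process proxy idea Matt describes earlier in the conversation, the sketch below shows the application side of that contract: the service never discovers endpoints, load-balances, retries, or emits stats itself; it hands every outbound call to a proxy listening on localhost and names the logical destination. The port number and the use of the Host header are assumptions for illustration, not Envoy's or Lyft's actual configuration.

    import http.client

    SIDECAR_HOST = "127.0.0.1"   # hypothetical local proxy listener
    SIDECAR_PORT = 15001

    def call_service(service: str, path: str) -> bytes:
        """Ask the local proxy to reach `service`; the proxy handles the rest.

        Discovery, load balancing, retries, rate limiting, and tracing/stats
        all live in the proxy, so this code looks the same in every language.
        """
        conn = http.client.HTTPConnection(SIDECAR_HOST, SIDECAR_PORT, timeout=2.0)
        try:
            conn.request("GET", path, headers={"Host": service})
            return conn.getresponse().read()
        finally:
            conn.close()

    # Example (assumes a proxy is actually listening on the port above):
    # body = call_service("rides.internal", "/v1/nearby-drivers")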

Published Date : Dec 7 2017

SUMMARY :

Matt Klein, software engineer at Lyft and creator of Envoy, joins theCUBE at KubeCon + CloudNativeCon 2017 in Austin, Texas. He explains how Lyft outgrew its simple monolithic stack, why "people scale" pushes organizations toward microservices, how the out-of-process Envoy proxy centralizes load balancing, service discovery, rate limiting, and observability for polyglot architectures, how the service mesh landscape splits into data planes and control planes such as Istio, Linkerd, and Conduit, and why organizations should take on only as much complexity as they actually need.
