Matt Klein, Lyft | KubeCon 2017


 

>> Narrator: Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Welcome back everyone, live here in Austin, Texas, for theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Stu Miniman, our analyst. And next is Matt Klein, a software engineer at Lyft, the ride-hailing service, car sharing, social network, great company, everyone knows and everyone loves Lyft. Thanks for coming on. >> Thanks very much for having me. >> All right, so you're a customer of all this technology. What you guys built is, I think, one of the shiny use cases of our generation: entrepreneurs and techies building their own stuff because they can't get the product from the general market. You had large-scale demand for the service, so you had to go out and build your own with open source and all those tools. You had a problem to solve, you built it using some open source, and then you gave it back to open source and became part of the community, and everybody wins, you donated it back. This is the future, this is what it's going to be like, great community work. What problem were you solving? Obviously Lyft, everyone knows it's hard, they see their car, a lot of real time going on, a lot of stuff happening >> Matt: Yeah, sure. >> magic's happening behind the scenes, you had to build that. Talk about the problem you solved. >> Well, I think, you know, when people look at Lyft, like you were saying, they look at the app and the car, and I think many people think that it's a relatively simple thing. Like, how hard could it be to bring up your app and say, I want a ride, and get that car from here to there? But it turns out that it's really complicated. There are a lot of real-time systems involved in actually finding all the cars that are near you, and the fastest route, all of that stuff. So I think what people don't realize is that Lyft is a very large real-time system that, at current scale, operates at millions of requests per second, and has a lot of different use cases around databases, and caching, you know, all those technologies. So Lyft was built on open source, as you say, and Lyft grew from what I think most companies do, which is a very simple, monolithic stack: it starts with a PHP application, we're a big user of MongoDB, and some load balancer, and then, you know-- >> John: That breaks (laughs) >> Well, no, but people do that because that's what's very quick to do. And I think what happened, like at most companies that become very successful, is that Lyft grew a lot, and like the few companies that get to that size, we started to outgrow some of that basic software, the basic pieces we were actually using. So as Lyft started to grow a lot, things just stopped working, and then we had to start fixing and building different things. >> Yeah, Matt, scale is one of those things that gets talked about a lot. But, I mean, Lyft really does operate at a significant scale. >> Matt: Yeah, sure. >> Maybe you can talk a little bit about what kinds of things were breaking, >> Matt: Absolutely, yeah. >> and then what led to Envoy and why that happened. >> Yeah, sure. I mean, I think there are two different types of scale, and I think this is something that people don't talk about enough.
There's scale in terms of the things people usually talk about, like data throughput or requests per second, or stuff like that. But there's also people scale, right? So as organizations grow, you go from 10 developers to 50 developers to 100; Lyft is now many hundreds of developers and we're continuing to grow. What I think people don't talk about enough is that human scale: we have a lot of people trying to edit code, and at a certain size you can't all be editing that same code base. That's, I think, the biggest reason people start moving towards a microservice or service-oriented architecture: you start splitting things apart to get people-scale. People-scale usually comes with requests-per-second scale and data scale and that kind of stuff, and these problems come hand in hand: as you grow the number of people, you start going into microservices, and then suddenly you have actual scale problems. The database is not working, or the network is not actually reliable. So from the Envoy perspective: Envoy is an open source proxy we built at Lyft, it's now part of the CNCF, and it's having tremendous uptake across the industry, which is fantastic. The reason we built Envoy is that what we're seeing now in the industry is people moving towards polyglot architectures, so architectures with many different applications in many different languages. It used to be that you could use Java and have one particular library that would do all of your networking and service discovery and load balancing; now you might have six different languages. So how, as an organization, do you actually deal with that? What we decided to do was build an out-of-process proxy, which allows people to build a lot of functionality into one place, around load balancing, service discovery, rate limiting, buffering, all those kinds of things, and also, most importantly, observability: things like tracing and stats and logging. That allowed us to actually understand what was going on in the network, so that when problems were happening, we could actually debug what was going on. And what we saw at Lyft, about three years ago, is that we had started our microservices journey, but it had almost stopped, because what people found is they had started to build services because supposedly it was faster than the monolith, but then we would start having problems with tail latency and other things, and they didn't know how to debug them. So they didn't trust those services, and at that point they say, not surprisingly, we're just going to go back and build it into the monolith. So we were almost in that situation where things were kind of stuck in that split. >> So Matt, I have to think that's the natural path that led to service mesh, and Istio specifically, with Lyft, Google, and IBM all working on that. Talk a little bit more about Istio; it was really the buzz coming in, with service mesh, and there are also some competing offerings out there, Conduit, a new one announced this week. Maybe give us the landscape, kind of where we are, and what you're seeing. >> So I think service mesh is, it's incredible to look around this conference; I think there are 15 or more talks on service mesh between all of the Buoyant talks on Linkerd and Conduit, and Istio and Envoy. It's super fantastic.
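Klein's out-of-process proxy idea is worth making concrete. The sketch below is not Envoy itself (Envoy is a C++ binary driven by configuration), just a minimal Go illustration of the sidecar pattern he describes: a separate process sits in front of a local application and provides logging and latency measurement in one place, so every service gets the same observability regardless of the language it's written in. The addresses and log format are illustrative assumptions, not Lyft's setup.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The local application this sidecar fronts. The address is
	// hypothetical; in a real mesh the control plane would supply it.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// Observability lives in the proxy, not the application, so a
	// PHP, Python, or Java service behind it gets it for free.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		log.Printf("method=%s path=%s duration=%s",
			r.Method, r.URL.Path, time.Since(start))
	})

	// All traffic to the service enters through the sidecar's port.
	log.Fatal(http.ListenAndServe("127.0.0.1:9000", handler))
}
```

Because the proxy is its own process, the same binary can front services in all six languages Klein mentions, which is exactly why the out-of-process approach beats maintaining a networking library per language.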
I think the reason that service mesh is so compelling to people is that we have these problems where people want to build in five or six languages, they have some common problems around load balancing and other types of things, and this is a great solution for offloading some of those problems into a common place. So, the confusion that I see right now around the industry is that service mesh is really split into two pieces: the data plane, so the proxy, and the control plane. The proxy is the thing that actually moves the bytes, moves the requests, and the control plane is the thing that tells all the proxies what to do, tells them the topology, all the configurations, all the settings. So the landscape right now is essentially that Envoy is a proxy, a data plane. Envoy has been built into a bunch of control planes: Istio is a control plane, and its reference proxy is Envoy, though other companies have shown that they can integrate with Istio; Linkerd has shown that, NGINX has shown that. Buoyant just came out with a new combined control-plane-and-data-plane service mesh called Conduit, brand new a couple of days ago, and I think we're going to see other companies get in there, because this is a very popular paradigm, so having the competition is good. I think it's going to push everyone to be better. >> How do companies make sense of this? I mean, if I'm just a boring enterprise with complexity and legacy, you know, I have a lot of stuff, maybe not the kind of scale in terms of transactions per second, because they're not Lyft, but they still have a lot of stuff. They've got servers, they've got data centers, they've got stuff in the cloud, and they're trying to put this cloud native package in because the developer movement is clearly pushing the legacy guy, the old guard, into cloud. So how does your stuff translate into the mainstream? How would you categorize it? >> Well, what I counsel people is, and I think this is actually a problem that we have within the industry, I think sometimes we push people towards complexity that they don't necessarily need yet. And I'm not saying that all of these cloud native technologies aren't great, right, I mean, people here are doing fantastic things. >> You know how to drive a car, so to speak; you don't need to know how the tech works. >> Right, and I advise companies and organizations to use the technology and the complexity that they need. So I think that service mesh and microservices and tracing and a lot of the stuff being talked about at this conference are very important if you have the scale to warrant a service-oriented microservice architecture. And, you know, some enterprises are segmented enough that they may not actually need a full microservice real-time architecture. So I think the thing to actually decide is, number one, do you need a microservice architecture? And it's okay if you don't, that's just fine; take the complexity that you need. If you do need a microservice architecture, then I think you're going to have a set of common problems around things like networking and databases and those types of things, and then yes, you are probably going to need to bring in more complicated technologies to actually deal with that. But the key takeaway is that as you bring on more complexity, the complexity has a snowballing effect. More complexity yields more complexity.
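The data-plane/control-plane split Klein lays out can also be sketched in a few lines. The Go example below uses invented names (real control planes such as Istio configure Envoy through its xDS APIs instead): the control plane's only job is to push a new routing table, and the proxy's only job is to consult it on each request.

```go
package main

import (
	"fmt"
	"sync"
)

// RouteTable is the state a control plane pushes down to every proxy:
// which upstream cluster handles which path prefix. The field names
// are illustrative, not Envoy's actual configuration schema.
type RouteTable struct {
	mu     sync.RWMutex
	routes map[string]string // path prefix -> upstream cluster
}

// Update is the control-plane side: swap in new topology and settings
// without touching requests in flight.
func (t *RouteTable) Update(routes map[string]string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.routes = routes
}

// Resolve is the data-plane side: a per-request lookup that neither
// knows nor cares where the configuration came from.
func (t *RouteTable) Resolve(prefix string) (string, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	cluster, ok := t.routes[prefix]
	return cluster, ok
}

func main() {
	table := &RouteTable{}
	table.Update(map[string]string{"/rides": "rides-service-v2"})
	if cluster, ok := table.Resolve("/rides"); ok {
		fmt.Println("route /rides ->", cluster)
	}
}
```

The value of the split is exactly this narrow interface: as Klein notes, different proxies (Envoy, Linkerd, NGINX) can plug into the same control plane as long as they accept its configuration.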
>> So Matt, this might be a little bit out of bounds for what we're talking about, but when I think about autonomous vehicles, that's just going to put even more strain on these kinds of distributed systems, you know, things that have to run at the edge. Are we laying the groundwork at a conference like this? How's Lyft looking at this? >> For sure, and I mean, we're obviously starting to look into autonomous a lot, obviously Uber's doing that a fair amount, and if you actually start looking at the sheer amount of data that is generated by these cars when they're moving around, it's terabytes and terabytes of data. You start thinking through the complexity of ingesting that data from the cars into a cloud and actually analyzing it and doing things with it, either offline or in real time, and it's pretty incredible. So, yes, I think these are just more massive-scale real-time systems that require more data, more hard drives, more networks, and as you manage more things with more people, it becomes more complicated for sure. >> What are you doing inside Lyft? What's your job? I mean, obviously you're involved in open source. Like, what are you coding specifically these days, what's the current assignment? >> Yeah, so I'm a software engineer at Lyft, and I lead our networking team. Our networking team owns obviously all the stuff that we do with Envoy; we own our edge system, so basically how internet traffic comes into Lyft; all of our service discovery systems; rate limiting; auth between services. We're increasingly owning our gRPC communications, so how people define their APIs, moving from a more polling-based API to a more push-based API. So our team essentially owns the end-to-end pipe from all of our back-end services to the client, so that's APIs, analytics, stats, logging, >> So to the app. >> Yeah, right, right, to the app, on the phone. So that's my job. I also help a lot with general infrastructure architecture, so we're increasingly moving towards Kubernetes; that's a big thing that we're doing at Lyft. Like many companies of Lyft's age range, we started on VMs in AWS and we used SaltStack, and you know, it's the standard story at companies that are probably six or eight years old. >> Classic devops. >> Right, and >> Gen One devops. >> And now we're trying to move into, as you say, the Gen Two world, which is pretty fantastic. So this is probably becoming the most applicable conference for us, because we're obviously doing a lot with service mesh, and we're leading the way with Envoy. But as we integrate with technologies like Istio and increasingly use Kubernetes and all of the different related technologies, we are trying to get rid of all the bespoke stuff that many companies like Lyft had, and get on that general train. >> I mean, you guys, this is going to be written in the history books; you look at this time in a generation, I mean, this is going to define open source for a long, long time, because, and I say Gen One, which kind of sounds pejorative, but it's not, it's really that you needed to build your own. You couldn't just buy an Oracle database, because, you probably have some Oracle in there, but like, you build your own. Facebook did it, you guys are doing it. Why? Because you're badass, you had to. Otherwise you can't serve your customers. >> Right, and I absolutely agree about that.
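The polling-to-push move Klein mentions for Lyft's client APIs is easy to illustrate. The sketch below uses plain Go channels to stand in for a gRPC server-streaming RPC; DriverLocation and its values are hypothetical, not Lyft's schema. Instead of the client asking "anything new?" on a timer, the server delivers each update the moment it happens.

```go
package main

import (
	"fmt"
	"time"
)

// DriverLocation is a hypothetical payload; in a real system this
// would be a protobuf message on a gRPC server-streaming RPC.
type DriverLocation struct {
	DriverID string
	Lat, Lng float64
}

// receiveUpdates is the push model: the client blocks on a stream and
// handles each update as soon as the server produces it, rather than
// polling on a timer and usually finding nothing new.
func receiveUpdates(updates <-chan DriverLocation) {
	for loc := range updates {
		fmt.Printf("update: driver %s at (%.4f, %.4f)\n",
			loc.DriverID, loc.Lat, loc.Lng)
	}
}

func main() {
	updates := make(chan DriverLocation)

	// Server side: emit a few location updates as they "happen".
	go func() {
		defer close(updates)
		for i := 0; i < 3; i++ {
			updates <- DriverLocation{
				DriverID: "d-42",
				Lat:      30.2672,
				Lng:      -97.7431 + float64(i)/1000,
			}
			time.Sleep(100 * time.Millisecond)
		}
	}()

	receiveUpdates(updates)
}
```

A ride in progress produces a steady stream of events, so push keeps the client current without the wasted round trips that polling spends between updates.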
I think we are in a very unique time right now, and I actually think that if you look out 10 years, and you look at some of the services that are coming online, like Amazon just did Fargate, that whole container scheduling system, and Azure has one, and I think Google has one, the idea there is that in 10 years' time, people are really going to be writing business logic; they're going to insert that business logic-- >> They may do it in PowerPoint slides. >> That would be nice. >> I mean, to me PowerPoint is so easy; I'm not going to say that's coding, but that's the way it should be. >> I absolutely agree, and we'll keep moving towards that, but the way that's going to happen is that more and more plumbing, if you will, will get built into these clouds, so that people don't have to worry about all this stuff. But we're in this intermediate time, where people are building these massive-scale systems, and the pieces that they need are not necessarily there. >> I've been saying on theCUBE now for multiple events, all through this last year, and it kind of crystallized when we were talking with Kelsey Hightower about this yesterday: craft is coming back to programming. So you've got software engineering, and you've got craftsmanship. And so there's real software engineering being done; it's engineering. Application development is going to go back to the old school of real craft. I mean, Agile, all it did was create a treadmill of de-risking rapid builds at scale, by listening to data and constantly iterating, but it kind of took the craft out of it. >> I agree. >> But that turned into engineering. Now you have developers working on, say, business logic, or just solving a problem, building a healthcare app. That's just awesome software. Do you agree with this craft? >> I absolutely agree, and actually what we say about Envoy, kind of the catchphrase of Envoy, is to make the network transparent to applications. And I think most of what's happening in infrastructure right now is about getting back to a time where application developers can focus on business logic, and not have to worry about how some of this plumbing actually works. And what you see around the industry right now is that it is just too painful for people to operate some of these large systems. And I think we're heading in the right direction, all of the trends are there, but it's going to take a lot more time to actually make that happen. >> I remember when I was graduating college in the 80s, sounds old, but not to date myself, the jobs were for software engineering. I mean, that is what they called it, and now we're back to it: devops brought it, cloud, this systems kind of engineering, really at a large scale, because you've got to think about these things. >> Yeah, and I think what's also kind of interesting is that companies have moved toward this devops culture, expecting developers to operate their systems, to be on call for them, and I think that's fantastic, but what we're not doing as an industry is actually teaching and helping people how to do this. So we have this expectation that people know how to be on call and know how to make dashboards and know how to do all this work, but they don't learn it in school, and then they come into organizations where we may not help them learn these skills. >> Every company has different cultures; that complicates things.
>> So I think we're also, as an industry, figuring out how to train people and how to help them actually do this in a way that makes sense. >> Well, fascinating conversation, Matt. Congratulations on all your success. Obviously a big fan of Lyft; one of the board members gave a keynote, she's from Palo Alto, from Floodgate. Great investors, great fans of the company. Congratulations, great success story, and again, open source, this is the new playbook: community-scale contribution, innovation. TheCUBE's doing its share here live in Austin, Texas, at KubeCon, the Kubernetes conference, and CloudNativeCon. I'm John Furrier, with Stu Miniman; we'll be back with more after this short break. (futuristic music)

Published Date: Dec 7, 2017

