Pat Conte, Opsani | AWS Startup Showcase
(upbeat music) >> Hello and welcome to this CUBE conversation here presenting the "AWS Startup Showcase: New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools," featuring Opsani for the cloud management and migration track here today. I'm your host, John Furrier. Today, we're joined by Patrick Conte, Chief Commercial Officer, Opsani. Thanks for coming on. Appreciate you coming on. Future of AI operations. >> Thanks, John. Great to be here. Appreciate being with you. >> So congratulations on all your success being showcased here as part of the Startups Showcase, the future of AI operations. You've got cloud scale happening. A lot of new transitions in this, quote, digital transformation as cloud scale goes next generation. A DevOps revolution, as Emily Freeman pointed out in her keynote. What's the problem statement that you guys are focused on? Obviously, AI involves a lot of automation. I can imagine there's a data problem in there somewhere. What's the core problem that you guys are focused on? >> Yeah, it's interesting because there are a lot of companies that focus on trying to help other companies optimize what they're doing in the cloud, whether it's cost or whether it's performance or something else. We felt very strongly that AI was the way to do that. I've got a slide prepared, and maybe we can take a quick look at that, and that'll talk about the three elements or dimensions of the problem. So we think about cloud services and the challenge of delivering cloud services. You've really got three things that customers are trying to solve for. They're trying to solve for performance, the best performance, and, ultimately, scalability. I mean, applications are growing really quickly, especially in this current timeframe with cloud services and whatnot. They're trying to keep costs under control, because certainly costs can get way out of control in the cloud since you don't own the infrastructure. And more importantly than anything else, which is why it's at the bottom, sort of at the foundation of all this, they want their applications to be a really good experience for their customers. So our customer's customer is actually who we're trying to solve this problem for. So what we've done is we've built a platform that uses AI and machine learning to optimize, meaning tune, all of the key parameters of a cloud application. So those are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things. It seems like it would be simple just to grab some values and plug 'em in, but it's not. It's actually that the combination of them has to be right. Otherwise, you get delays or faults or other problems with the application. >> Andrew, if you can bring that slide back up for a second. I want to just ask one quick question on the problem statement. You've got expenditures, performance, customer experience kind of on the sides there. Do you see this tip a certain way depending upon use cases? I mean, is there one thing that jumps out at you, Patrick, from your customer's customer's standpoint? Obviously, customer experience is the outcome. That's the app, whatever. That's whatever we've got going on there. >> Sure. >> But are there patterns? 'Cause you can have good performance, but then budget overruns. Or all of them could be failing. Talk about the dynamic with this triangle. >> Well, without AI, without machine learning, you can solve for one of these, only one, right?
So if you want to solve for performance like you said, your costs may overrun, and you're probably not going to have control of the customer experience. If you want to solve for one of the others, you're going to have to sacrifice the other two. With machine learning, though, we can actually balance that, and it isn't a perfect balance, and the question you asked is really a great one. Sometimes, you want to over-correct on something. Sometimes, scalability is more important than cost, but because of our machine learning capability, we're going to always make sure that you're never spending more than you should spend, so we're always going to make sure that you have the best cost for whatever the performance and reliability factors that you want to have are. >> Yeah, I can imagine. Some people leave services on. Happened to us one time. An intern left one of the services on, and it was like, where did that bill come from? So we kind of looked back and had to fix that. There's a ton of action, but I've got to ask you, what are customers looking for with you guys? I mean, as they look at Opsani and what you guys are offering, what's different than what other people might be proposing with optimization solutions? >> Sure. Well, why don't we bring up the second slide, and this'll illustrate some of the differences, and we can talk through some of this stuff as well. So really, the area that we play in is called AIOps, and that's sort of a new area, if you will, over the last few years, and really what it means is applying intelligence to your cloud operations, and those cloud operations could be development operations, or they could be production operations. And what this slide is really representing is, in the upper half, the way customers experience their DevOps model today. Somebody says we need an application or we need a feature, the developers pull down something from Git. They hack an early version of it. They run through some tests. They size it whatever way they know it won't fail, and then they throw it over to the SREs to try to tune it before they shove it out into production, but nobody really sizes it properly. It's not optimized, and so it's not tuned either. When it goes into production, it's just the first combination of settings that works. So what happens is, undoubtedly, there's some type of a problem, a fault or a delay, or you push new code, or there's a change in traffic. Something happens, and then you've got to figure out what the heck. So what happens then is you use your tools. First thing you do is you over-provision everything. That's what everybody does; they over-provision and try to soak up the problem. But that doesn't solve it, because now your costs are going crazy. You've got to go back and try as best you can to get root cause. You go back to the tests, and you're trying to find something in the test phase that might be an indicator. Eventually, your developers have to hack a hot fix, and the conveyor belt sort of keeps on going. We've tested this model on every single customer that we've spoken to, and they've all said this is what they experience on a day-to-day basis. Now, if we can go back to the slide, let's talk about the second part, which is what we do and what makes us different. So on the bottom of this slide, you'll see it's really a shift-left model. What we do is we plug in at the production phase, and as I mentioned earlier, what we're doing is we're tuning all those cloud parameters.
We're tuning the CPU, the memory, the replicas, all those kinds of things. We're tuning them all in concert, and we're doing it at machine speed, so that's how the customer gets the best performance and the best reliability at the best cost. The way we're able to achieve that is because we're iterating this thing at machine speed, but there's one other place where we plug in and help the whole concept of AIOps and DevOps, and that is we can plug in at the test phase as well. And so if you think about it, the DevOps guy doesn't actually have to over-provision before he throws it over to the SREs. He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapse the timeframe, because it means the SREs don't have to hunt for a working set of parameters. They get one from the DevOps guys when they send it over, and this is how the future of AIOps is really being affected by optimization and what we call autonomous optimization, which means that it's happening without humans having to press a button on it. >> John: Andrew, bring that slide back up. I want to just ask another question. The tuning-in-concert thing is very interesting to me. So how does that work? Are you telegraphing information to the developer from the autonomous workload tuning engine piece? I mean, how does the developer know the right knobs, or where does it get that provisioning information? I see the performance lag. I see where you're solving that problem. >> Sure. >> How does that work? >> Yeah, so actually, if we go to the next slide, I'll show you exactly how it works. Okay, so this slide represents the architecture of a typical application environment that we would find ourselves in, and inside the dotted line is the customer's application namespace. That's where the app is. And so, it's got a bunch of pods. It's got something for replication, probably an HPA, a horizontal pod autoscaler. And so, what we do is we install inside that namespace two small instances. One is a tuning pod, which some people call a canary, and that tuning pod joins the rest of the pods, but it's not part of the application. It's actually separate, but it gets the same traffic. We also install something we call Servo, which is basically an action engine. What Servo does is take the metrics from whatever metrics system is collecting all those different settings and whatnot from the working application. It could be something like Prometheus. It could be an Envoy sidecar, or more likely, it's something like AppDynamics, or we can even collect metrics off of NGINX, which is at the front of the service. We can plug in anywhere those metrics are. We can pull the metrics forward. Once we see the metrics, we send them to our backend. The Opsani SaaS service is our machine learning backend. That's where all the magic happens, and what happens then is that service sees the settings, sends a recommendation to Servo, Servo sends it to the tuning pod, and we tune until we find optimal. And so, that iteration typically takes about 20 steps. It depends on how big the application is and whatnot, how fast those steps take.
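To make the tuning-pod arrangement concrete, here is a minimal sketch in plain Kubernetes terms of the namespace layout just described. This is not Opsani's actual manifest; the names, namespace and image are hypothetical. The key idea is that the standalone tuning pod carries the same label the Service selects on, so it receives a share of the same live traffic without belonging to the main Deployment.

```yaml
# Hypothetical sketch of the canary/tuning-pod arrangement; not Opsani's
# actual manifests. Names, namespace and image are illustrative only.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: app-namespace
spec:
  selector:
    app: web            # traffic is load-balanced across every pod with this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: app-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      role: main
  template:
    metadata:
      labels:
        app: web        # matches the Service selector, so these pods serve traffic
        role: main
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
---
apiVersion: v1
kind: Pod
metadata:
  name: web-tuning      # the tuning pod: separate from the Deployment...
  namespace: app-namespace
  labels:
    app: web            # ...but it matches the Service selector, so it receives
    role: tuning        # the same live traffic while its settings are varied
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0
```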
Each step could take anywhere from seconds to minutes, even 10 to 20 minutes per step, but typically within about 20 steps, we can find optimal, and then we'll come back and we'll say, "Here's optimal. Do you want to promote this to production?" And the customer says, "Yes, I want to promote it to production, because I'm saving a lot of money, or because I've gotten better performance or better reliability." Then all he has to do is press a button, and all that stuff gets sent right to the production pods, and all of those settings get put into production, and now he's actually saving the money. So that's basically how it works. >> It's kind of like when I want to go to the beach, I look at weather.com, I check the forecast, and I decide whether I want to go or not. You're getting the data, so you're getting a good look at the information, and then putting that into a policy standpoint. I get that; makes total sense. Can I ask you, if you don't mind, to expand on the performance and reliability and the cost advantage? You mentioned cost. How is that impacting? Give us an example of some performance impacts, reliability, and cost impacts. >> Well, let's talk about what those things mean, because a lot of people might have different ideas about what they think those mean. So from a cost standpoint, we're talking about cloud spend ultimately, but it's represented by the settings themselves, so I'm not talking about what deal you cut with AWS or Azure or Google. I'm talking about whatever deal you cut, we're going to save you 30, 50, 70% off of that. So it doesn't really matter what cost you negotiated. What we're talking about is right-sizing the settings for CPU and memory and replicas. It could be Java: garbage collection, time ratios, or heap sizes, things like that. Those are all the kinds of things that we can tune. The thing is, most of those settings have an unlimited number of values, and this is why machine learning is important, because if you think about it, even if you only had eight settings with eight values per setting, you're already talking about nearly 17 million combinations, and real settings have far more possible values than that, so the space quickly runs into the billions. So to find optimal, you've got to have machine speed to be able to do it, and you have to iterate very, very quickly to make it happen. So that's basically the thing, and that's really one of the things that makes us different from anybody else, and if you put that last slide back up, the architecture slide, for just a second, there are a couple of key words at the bottom of it that I want to focus on. The first is continuous. So continuous really means that we're on all the time. We're not plug us in one time, make a change, and then walk away. We're actually always measuring and adjusting, and the reason why this is important is, in the modern DevOps world, your traffic level is going to change. You're going to push new code. Things are going to happen that are going to change the basic nature of the software, and you have to be able to tune for those changes. So continuous is very important. The second thing is autonomous. This is designed to take pressure off of the SREs. It's not designed to replace them, but to take the pressure off of them having to check the pager all the time and run in and make adjustments, or try to divine an adjustment that might be very, very difficult for them to find.
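For readers who want to see the knobs themselves, here is a minimal Kubernetes Deployment sketch showing the kinds of parameters being tuned in concert; every value is a placeholder, and the JAVA_OPTS variable stands in for the Java heap and garbage-collection settings mentioned above. This is an illustration, not Opsani configuration.

```yaml
# The kinds of settings an optimizer varies together; all values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # replica count: one tunable dimension
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
          resources:
            requests:
              cpu: "500m"          # CPU request: another tunable dimension
              memory: "512Mi"      # memory request: another
            limits:
              cpu: "1"
              memory: "1Gi"
          env:
            - name: JAVA_OPTS      # JVM heap size and GC behavior, for Java workloads
              value: "-Xmx512m -XX:MaxGCPauseMillis=200"
```

Because each of these values ranges over many possibilities and they interact (a smaller heap inside a smaller memory limit changes garbage-collection behavior, which changes latency), the combination has to be searched rather than set one knob at a time.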
So we're doing that tuning for them, and the scale means that we can solve this for, let's say, one big monolithic application, or we can solve it for literally hundreds of applications and thousands of microservices that make up those applications, and tune them all at the same time. So the same platform can be used for all of those. You originally asked about the parameters and the settings. Did I answer the question there? >> You totally did. I mean, the tuning in concert. You mentioned it early as a key point. I mean, you're basically tuning the engine. It's not so much negotiating a purchase SaaS discount. It's essentially cost overruns by the engine, either over-burning or heating or whatever you want to call it. I mean, basically inefficiency. You're tuning the core engine. >> Exactly so. The cost piece, as I mentioned, is due to right-sizing the settings and the number of replicas. The performance is typically measured via latency, and the reliability is typically measured via error rates. And there are some other measures as well. We have a whole list of them in the application itself, but those are the kinds of things that we look for as results. When we do our tuning, we look for reducing error rates, or we look for holding error rates at zero, for example, even as we improve the performance or improve the cost. So we're looking for the best result, the best combination result, and then a customer can decide if they want to actually over-correct on something. We have the whole concept of guardrails, so if performance is the most important thing, or for some customers cost is the most important thing, they can actually say, "Well, give us the best cost, and give us the best performance and the best reliability, but at this cost," and we can then use that as a service-level objective and tune around it. >> Yeah, it reminds me back in the old days when you had filtering, whitelists and blacklists of addresses that could go through, say, a firewall or a device. You have billions of combinations now with machine learning. It's essentially scaling the same concept to an unbelievable degree. These guardrails are now in place, and that's super cool and, I think, a really relevant call-out point, Patrick. At this kind of scale, you need machine learning, you need the AI, to essentially identify quickly the patterns or combinations that are actually happening, so a human doesn't have to waste their time on work that can be handled by basically a bot at that point. >> So John, there's just one other thing I want to mention around this, and that is one of the things that makes us different from other companies that do optimization. Basically, every other company in the optimization space creates a static recommendation; basically, they're recommendation engines, and what you get out of that is, let's say, a manifest of changes, and you hand that to the SREs, and they put it into effect. Well, the fact of the matter is that the traffic could have changed by then. It could have spiked up, or it could have dropped below normal. You could have introduced a new feature or some other code change, and at that point in time, you've already instituted these changes. They may be completely out of date. That's why the continuous nature of what we do is important and different. >> It's funny, even the language that we're using here: network, garbage collection. I mean, you're talking about tuning an engine, an operating system.
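To make the guardrail idea concrete, here is a purely hypothetical configuration sketch. Opsani expresses service-level objectives through its own product interface; this schema is invented for illustration only.

```yaml
# Hypothetical guardrail/SLO sketch; this schema is invented for illustration
# and is not Opsani's actual configuration format.
optimization:
  goal: minimize-cost              # optimize spend first...
  guardrails:
    - metric: p99_latency_ms
      max: 250                     # ...but never let p99 latency exceed 250 ms
    - metric: error_rate
      max: 0                       # hold error rates at zero
    - metric: monthly_spend_usd
      max: 12000                   # and keep spend under a hard ceiling
```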
You're talking about stuff that's moving up the stack to the application layer, hence this new kind of elimination of those siloed, waterfall approaches, as you pointed out in your second slide, into one integrated operating environment. So when you have that and think about the data coming in, you have to think about the automation: self-correcting, error-correcting, tuning, garbage collection. These are words that we've been kicking around, but at the end of the day, it's an operating system. >> Well, in the old days of automobiles, which I remember 'cause I'm an old guy, if you wanted to tune your engine, you would probably rebuild your carburetor and turn some dials to get the air-oxygen-gas mix right. You'd re-gap your spark plugs. You'd probably make sure your points were right. There'd be four or five key things that you would do. You couldn't do them at the same time unless you had a magic wand. So we're the magic wand, basically; or in the modern world, we're sort of that thing you plug in that tunes everything at once within that engine, which is all now electronically controlled. So that's the big difference as you think about what we used to do manually and what now can be done with automation. It can be done much, much faster without humans having to get their fingernails greasy, let's say. >> And I think the dynamic-versus-static point is an interesting one. I want to bring up the SRE, which has become a very prominent role in this DevOps-plus world that's happening. You're seeing this new revolution. The role of the SRE is not just to be there to hold things down and do the manual configuration. They have to scale. They're developers, too. So I think this notion of offloading the SRE from doing manual tasks is another big, important point. Can you just react to that and share more about why the SRE role is so important and why automating that away with what you guys have is important? >> The SRE role is becoming more and more important, just as you said, and the reason is because somebody has to get that application ready for production. The DevOps guys don't do it. That's not their job. Their job is to get the code finished and send it through, and the SREs then have to make sure that that code will work, so they have to find a set of settings that will actually work in production. Once they find that set of settings, the first one they find that works, they'll push it through. It's not optimized at that point in time, because they don't have time to try to find optimal, and if you think about it, the difference between a machine learning backend and an army of SREs that work 24/7 is that we're able to do the work of many, many SREs that never get tired, that never need to go play video games to unstress or whatever. We're working all the time. We're always measuring and adjusting. A lot of the companies we talk to do a once-a-month adjustment on their software. So they put an application out, and then they send in their SREs once a month to try to tune the application, and maybe they're using some of these other tools, or maybe they're just using their smarts, but they'll do that once a month. Well, gosh, they've probably pushed code four times during the month, and they've probably had a bunch of different spikes and drops in traffic and other things that have happened. So we just want to help them spend their time on making sure that the application is ready for production.
We want them to make sure that all the other parts of the application are where they should be, and let us worry about tuning CPU, memory, replicas, job instances, and things like that, so that they can work on making sure that the application gets out and that it can scale, which is really important; for their companies to make money, the apps have to scale. >> Well, that's a great insight, Patrick. You mentioned you have a lot of great customers, and certainly your customer base is early adopters, pioneers, and growing big companies, because they have DevOps. They know the difference between a DevOps engineer and an SRE. Some of the other enterprises that are transforming think the DevOps engineer is the SRE person, 'cause they're having to get transformed. So you guys are at the high end and getting now the new enterprises as they come on board to cloud scale. You have a huge uptake in Kubernetes; you're starting to see the standardization of microservices. People are getting it. So I've got to ask you, can you give us some examples of your customers, how they're organized, some case studies, who uses you guys, and why they love you? >> Sure. Well, let's bring up the next slide. We've got some customer examples here, and your viewers, our viewers, can probably figure out who these guys are. I can't tell them, but if they go on our website, they can sort of put two and two together. The first one there is a major financial application SaaS provider, and in this particular case, they were having problems that they couldn't diagnose within the stack. Ultimately, they had to apply automation to it, and what we were able to do for them was give them a huge jump in reliability, which was actually the biggest problem that they were having. We gave them 5,000 hours back a month on the application. They were having PagerDuty alerts going off all the time. We actually gave them better performance, a 10% performance boost, and we dropped their cloud spend for that application by 72%. So in fact, it was an 80-plus percent price-performance, or cost-performance, improvement that we gave them, and essentially, we helped them tune the entire stack. This was a hybrid environment, so it included VMs as well as more modern architecture. Today, I would say the overwhelming majority of our customers have moved off of VMs and are in a containerized environment, and even more to the point, Kubernetes, which we find a very, very high percentage of our customers have moved to. So most of the work we're doing today with new customers is around that, and if we look at the second and third examples here, those are examples of that. In the second example, that's a company that develops websites. It's one of the big ones out in the marketplace; let's say, if you were starting a new business and you wanted a website, they would develop that website for you. So their internal infrastructure is all brand-new stuff. It's all Kubernetes, and they were actually getting decent performance. We held their performance at their SLO. We achieved a 100% error-free scenario for them at runtime, and we dropped their cost by 80%. So for them, they needed us to hold serve, if you will, on performance and reliability and get their costs under control, because that's a cloud-native company; everything there is cloud cost. The interesting thing is it took us just nine steps, nine of our iterations, to actually get to optimal.
So it was very, very quick, and there was no integration required. In the first case, we actually had to do a custom integration for an underlying platform that was used for CI/CD, but with the- >> John: Because of the hybrid, right? >> Patrick: Sorry? >> John: Because it was hybrid, right? >> Patrick: Yes, because it was hybrid, exactly. But with the second one, we just plugged right in, and we were able to tune the Kubernetes environment just as I showed in that architecture slide. And then the third one is one of the leading application performance monitoring companies on the market. They have a bunch of their own internal applications, and those use a lot of cloud spend. They're actually running Kubernetes on top of VMs, but we don't have to worry about the VM layer. We just worry about the Kubernetes layer for them, and what we did for them was give them a 48% performance improvement in terms of latency and throughput. We dropped their error rates by 90%, which is pretty substantial to say the least, and we gave them a 50% cost delta from where they had been. So this is the perfect example of actually being able to deliver on all three things, which you can't always do. All applications are not created equal, so to speak. This was one where we were able to actually deliver on all three of the key objectives. We were able to set them up in about 25 minutes from the time we got started, with no extra integration, and needless to say, it was a big, happy moment for the developers to be able to go back to their bosses and say, "Hey, we have better performance, better reliability. Oh, by the way, we saved you half." >> So depending on the stack situation, you've got VMs and Kubernetes on the one side; cloud-native, all Kubernetes, that's the dream scenario, obviously. Not many people are like that. All the new stuff's going cloud-native, so that's ideal, and then there are the mixed ones: Kubernetes, but no VMs, right? >> Yeah, exactly. So Kubernetes with no VMs, no problem. Kubernetes on top of VMs, no problem, but we don't manage the VMs. We don't manage the underlay at all, in fact. And the other thing is, we don't have to go back to the slide, but I think everybody will remember the slide that had the architecture, and on one side was our cloud instance. The only data that's going between the application and our cloud instance is the settings, so there's never any data. There's never any customer data: nothing for PCI, nothing for HIPAA, nothing for GDPR or any of those things. So no personal data, no health data. Nothing is passing back and forth. Just the settings of the containers. >> Patrick, while I've got you here, 'cause you're such a great, insightful guest, thank you for coming on and showcasing your company. Kubernetes, real quick. How prevalent is this mainstream trend? Because you're seeing such great examples of performance improvements, SLAs being met, SLOs being met. How real is Kubernetes for the mainstream enterprise as they're starting to use containers to take their legacy estates into the cloud-native and certainly hybrid, and soon-to-be multi-cloud, environment? >> Yeah, I would not say it's dominant yet. Of container environments, I would say it's dominant now, but for all environments, it's not.
I think the larger legacy companies are still going through that digital transformation, and so what we do is catch them at that transformation point, and we can help them develop, because as we remember from the AIOps slide, we can plug in at that test level and help them sort of pre-optimize as they're coming through. So we can actually help them be more efficient as they're transforming. The other side of it is the cloud-native companies. So you've got the legacy companies, brick and mortar, who are desperately trying to move to digitization. Then you've got the ones that are born in the cloud. Most of them aren't on VMs at all. Most of them are on containers right from the get-go, but you do have some in the middle who have started to make a transition, and what they've done is they've taken their native VM environment and put Kubernetes on top of it so that they don't have to scuttle everything underneath it. >> Great. >> So I would say it's mixed at this point. >> Great business model, helping customers today, and being a bridge to the future. Real quick, what licensing models, how to buy, what promotions do you have for Amazon Web Services customers? How do people get involved? How do you guys charge? >> The product is licensed as a service, and the typical service is an annual license. We license it by application, so let's just say you have an application, and it has 10 microservices. That would be a standard application. We'd have an annual cost for optimizing that application over the course of the year. We have a large application pack, if you will, for, let's say, applications of 20 services, something like that, and then we also have a platform, what we call the Opsani platform, and that is for environments where the customer might have hundreds of applications and/or thousands of services. We can plug into their deployment platform, something like Harness or Spinnaker or Jenkins, or we can plug into their cloud Kubernetes orchestrator, and then we can actually discover the apps and optimize them. So we've got environments for both single apps and for many, many apps, all with the same platform. And yes, thanks for reminding me. We do have a promotion for our AWS viewers. If you reference this presentation, and you look at the URL there, which is opsani.com/awsstartupshowcase, can't forget that, you will, number one, get a free trial of our software, and if you optimize one of your own applications, we're going to give you an Oculus headset. And we have one other promotion for your viewers and for our joint customers here, and that is if you buy an annual license, you're going to get actually 15 months. So that's what we're putting on the table. It's actually a pretty good deal. The free trial isn't contingent on anything; the Oculus is contingent on you actually optimizing one of your own services. So it's not a synthetic app. It's got to be one of your own apps, but that's what we've got on the table here, and I think it's a pretty good deal, and I hope you guys take us up on it. >> All right, great. Get an Oculus Rift for optimizing one of your apps, and 15 months for the price of 12. Patrick, thank you for coming on and sharing the future of AIOps with us. Great product, a bridge to the future, solving a lot of problems, a lot of use cases there. Congratulations on your success. Thanks for coming on. >> Thank you so much. This has been excellent, and I really appreciate it. >> Hey, thanks for sharing.
I'm John Furrier, your host with theCUBE. Thanks for watching. (upbeat music)
On Demand: Speed K8s DevOps Secure Supply Chain
>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud-like experience with the flexibility, speed and security of modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre-sales team here at Mirantis. I've spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deployed to a container platform. This is fairly common in organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development or non-production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development. This can either include Docker at the development layer or be a more traditional, throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which lead to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycle of inefficient resource utilization. And a true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets, but that maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this.
Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, no matter if it is full stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer- and operator-friendly. Lens provides immediate feedback for the health of your applications, observability for your clusters, fast context switching between environments, and the ability to choose the best tool for the task at hand, whether it is driven by the graphical user interface or the command line interface. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users and the overall perceived value. Now let's look and see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of the developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, the Docker Trusted Registry for our secure container registry, and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with your teams of developers and operators to design a system that provides a fast, consistent and secure experience for your developers, one that works for any application, brownfield or greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services, calls to GitHub repositories, Jenkins access and jobs, Universal Control Plane and Docker Trusted Registry teams and organizations, Kubernetes namespaces with access control, creating Docker Trusted Registry namespaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll actually be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process using Docker containers at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository; in this case, we'll be using GitHub. Then when Jenkins picks up, it will check out that code from our source code repository, build our Docker containers, test the application in the image that it builds, and then take the image and push it to our Docker Trusted Registry.
From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application in a real environment. Jenkins will then test the deployed application, and if all tests show it as good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. Our marketing team says we need to change "containerized NGINX" to something more Mirantis-branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. So here's our application. We have our code loaded, and we're going to be able to use Docker Desktop in our local environment, with the Docker plugin for Visual Studio Code, to build our application inside of Docker without needing to run any command line-specific tools. Here with our code, we'll be able to interact with Docker, make our changes, see them live, and quickly see if our changes actually made the impact that we're expecting in our application. So let's find the titles in our application and go ahead and change them to our Mirantis-ized NGINX instead of containerized NGINX. We'll change it in the title and on the front page of the application. Now that we've saved that change to our application, we can actually take a look at our code here in VS Code, and as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of automatically building our application. So now we have a Docker image that has everything we need for our application inside of that image. From here, we can actually just right-click on the image tag that we just created and select run. This will interactively run the container for us, and then once our container is running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can verify that our application is working as expected, we can stop our container, and then from here, we can actually make that change live by pushing it to our source code repository. Here, we're going to go ahead and write a commit message to say that we updated to our Mirantis branding. We will commit that change and then push it to our source code repository. Again, in this case, we're using GitHub as our source code repository. So here in VS Code, we'll push that to our source code repository, and then we'll move on to our next environment, which is Jenkins. Jenkins is going to pick up those changes for our application after checking them out from our source code repository. GitHub notifies Jenkins that there's a change. Jenkins checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing the image into our Docker Trusted Registry, scanning it and signing our image in our Docker Trusted Registry, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed.
So here we can see that our title has been updated on our application, and we can verify that it looks good in development. If we jump back here to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. Then we're also going to sign that image, signing off that, yes, it has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now, let's take a look at our Docker Trusted Registry, where we can see our namespace for our application and our simple NGINX repository. From here, we'll be able to see information about our application image that we've pushed into the registry, such as the image signature, when it was pushed and by whom, and we'll also be able to see the scan results for our image. In this case, we can actually see that there are vulnerabilities in our image, so let's take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. These image layers give us details about where the vulnerabilities are located and what those vulnerabilities actually are. So if we click on a vulnerability, we can see specific information about it, giving us details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly to remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that and update it. One of the ways that we can help secure that as part of the supply chain is to take a look at where we get the base layers of our images. Docker Hub really provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are created equal when it comes to security. The official images from Docker Hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than to build your own and also try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Docker Trusted Registry using our mirroring feature. Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements, and then, based off of the scan results, promote each image to a public repository, where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment.
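DTR configures mirroring and promotion policies through its web UI and API rather than through manifests, but rendered as pseudo-configuration, the mirror-then-promote flow just described would look roughly like this. The schema and repository paths below are hypothetical, for illustration only.

```yaml
# Hypothetical rendering of a DTR mirror-then-promote policy; DTR is actually
# configured through its UI and API, so this YAML is illustrative only.
mirror:
  source: docker.io/library/alpine          # pull curated content from Docker Hub
  destination: dtr.example.com/staging/alpine
promotion-policy:
  from: staging/alpine                      # scan images landing in staging
  criteria:
    critical-vulnerabilities: 0             # promote only images with clean scans
    high-vulnerabilities: 0
  to: official/alpine                       # curated repo that internal users consume
```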
So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker Trusted Registry. In this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. Then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry automatically. From here, we can take a look at the promotions to see exactly how we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found or not found inside of the Docker image. The way our actual users would consume this content is by taking a look at the official images that we've made available, which are public to them. Here again, looking at our Alpine image, we can take a look at the tags that exist, and we can see the content that has been made available. So we've pulled in all sorts of content from Docker Hub. In this case, we've even pulled in the multi-architecture images, which we can scan thanks to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens gives developers a quick, opinionated view that focuses on how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application, and when we click on it, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, making it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a Helm chart and is made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart, so we can publish whatever information may be relevant about our application, and through one click, we can install our Helm chart. Here, it will show us the actual details of the Helm chart. So before we install it, we can actually look at those individual components.
So in this case, we can see it created an ingress rule, and this will tell Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to, and in this case, we're actually going to do a quick test, because we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy. Because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, so the Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see the simple NGINX application, and in this case, we'll get details around the actual deployed Helm chart. The nice thing is that Lens provides us this capability with Helm to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components that were created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule we just talked about, and it also shows us the resources, such as the service, the deployment and the ingress, that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations control processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
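As a rough picture of the packaging involved, here is a minimal Helm chart skeleton for an application like the demo's simple NGINX service. The chart name, registry path and values are assumptions rather than the actual demo artifacts; note that the image repository points at the Docker Trusted Registry rather than Docker Hub, which is what lets the deployment pass content trust enforcement.

```yaml
# Chart.yaml -- minimal chart metadata (names and versions are assumptions)
apiVersion: v2
name: simple-nginx
description: Demo web application packaged as a Helm chart
version: 0.1.0
appVersion: "1.0"
---
# values.yaml -- defaults consumed by the chart's templates
image:
  repository: dtr.example.com/official/simple-nginx  # hypothetical DTR path;
  tag: "1.0"                                         # signed images here pass content trust
replicaCount: 2
ingress:
  enabled: true
  host: simple-nginx.example.com
```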
On Demand: API Gateways, Ingress, Service Mesh
>> Thank you, everyone, for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes: three very hot topics that are also frequently confused. So I'm Richard Li, founder and CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes-native API gateway, and most of what I'm going to talk about today is related to our work around Ambassador. So I want to start by talking about application architecture and workflow on Kubernetes and how applications that are being built on Kubernetes really differ from how they used to be built. When you're building applications on Kubernetes, the traditional architecture is the very famous monolith. The monolith is a central piece of software. It's one giant thing that you build, deploy, and run, and the value of a monolith is that it's really simple. And if you think about the monolithic development process, more importantly, that architecture is really reflected in the workflow. So with a monolith, you have a very centralized development process. You tend not to release too frequently, because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular piece of software, and everyone works towards that release train. And you have specialized teams. You have a development team, which has all your developers. You have a QA team, you have a release team, you have an operations team. So that's your typical development organization and workflow with a monolithic application. As organizations shift to microservices, they adopt a very different development paradigm. It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of the application, and those application components are really shipped as independent services. And so you really have a continuous release cycle, because instead of synchronizing all your teams around one particular vehicle, you have so many different release vehicles that each team is able to ship as soon as they're ready. And so we call this full cycle development, because that team is really responsible not just for the coding of that microservice, but also the testing, the release and the operations of that service. So this is a huge change, particularly in workflow, and there are a lot of implications to this. So I have a diagram here that just tries to visualize a little bit more the difference in organization. With the monolith, you have everyone working on this monolith. With microservices, you have the yellow folks working on the yellow microservice and the purple folks working on the purple microservice, and maybe just one person working on the orange microservice, and so forth. So there's a lot more diversity around your teams and your microservices, and it lets you really adjust the granularity of your development to your specific business needs. So how do users actually access your microservices? Well, with a monolith, it's pretty straightforward. You have one big thing, so you just tell the internet, well, I have this one big thing on the internet. Make sure you send all your traffic to the big thing. But when you have a bunch of different microservices, how do users actually access these microservices?
So the solution is an API gateway. The API gateway consolidates all access to your microservices. Requests come from the internet. They go to your API gateway. The API gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate microservice. And because the API gateway is centralizing access to all of the microservices, it also really helps you simplify authentication, observability, routing, all these different cross-cutting concerns, because instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you put all of your authentication in your API gateway. So if you look at this world of microservices, API gateways are a really important, really necessary part of your infrastructure, whereas pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. So that's one of the really big things to recognize: with the microservices architecture, you really need to start thinking much more about an API gateway. The other consideration with an API gateway is around your management workflow, because as I mentioned, each team is actually responsible for their own microservice, which also means each team needs to be able to independently manage the gateway. So Team A working on their microservice needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different for how purple requests get routed to the purple microservice. So that's also a really important consideration as you think about API gateways and how they fit in your architecture, because it's not just about your architecture, it's also about your workflow. So let me talk about API gateways on Kubernetes. I'm going to start by talking about ingress. Ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, actually has a requirement that all the different pods in a Kubernetes cluster need to communicate with each other. And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own IP address. So this makes things very, very simple for inter-pod communication. Kubernetes, on the other hand, does not say very much about how traffic should actually get into the cluster. It's very opinionated about how traffic gets routed around once it's inside the cluster, but for getting traffic into the cluster, there are a lot of different options and multiple strategies. There's Pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these different options, and I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And an external load balancer can be a hardware load balancer. It can be a virtual machine. It can be a cloud load balancer.
But the key requirement for an external load balancer is to be able to attach a stable IP address, so that you can actually map a domain name in DNS to that particular external load balancer, and that external load balancer usually, but not always, will then route traffic and pass that traffic straight through to your ingress controller. And then your ingress controller takes that traffic and routes it internally inside Kubernetes to the various pods that are running your microservices. There are other approaches, but this is the most common approach. And the reason for this is that the alternative approaches really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about an ingress controller. What exactly is an ingress controller? An ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes does not actually ship with a built-in ingress controller. I say strangely because you'd think, well, getting traffic into a cluster is probably a pretty common requirement, and it is. It turns out that this is complex enough that there's no one-size-fits-all ingress controller. And so there is a set of ingress rules that are part of the Kubernetes ingress specification that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route this traffic to these different pods. And so an ingress controller really translates between the Kubernetes configuration and the proxy configuration, and common proxies for ingress controllers include HAProxy, Envoy Proxy, and NGINX. So let me talk a little bit more about these common proxies. There are many other proxies, but I'm just highlighting what I consider to be probably the three most well-established: HAProxy, NGINX, and Envoy Proxy. HAProxy is managed by HAProxy Technologies and started in 2001. The HAProxy organization actually creates an ingress controller, and before they created an ingress controller, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX is managed by NGINX, Inc., subsequently acquired by F5. Also open source, the proxy started a little bit later, in 2004. There's nginx-ingress, which is a community project, and that's the most popular, as well as the NGINX, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion, because sometimes people will think that they're using the NGINX ingress controller, and it's not clear if they're using the commercially supported version or the open source version. Although they have very similar names, they actually have different functionality. Finally, there's Envoy Proxy, the newest entrant to the proxy market, originally developed by engineers at Lyft, the ride-sharing company, who subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy. It's used by Ambassador, the API gateway. It's used in the Istio service mesh. It's used in VMware Contour. It's used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world. So as I mentioned, there are a lot of different options for ingress controllers.
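To make the ingress rules concrete before comparing the options, here is a minimal sketch of the kind of declarative routing resource an ingress controller translates into proxy configuration. It assumes the networking.k8s.io/v1 Ingress schema; the hostname and the "orders" and "catalog" Services are hypothetical, and PyYAML is assumed to be available.

```python
# Sketch: the declarative routing rules an ingress controller (NGINX,
# Ambassador, etc.) reads and translates into live proxy configuration.
import yaml  # PyYAML, assumed installed

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "example-routes"},
    "spec": {
        "rules": [{
            "host": "api.example.com",  # hypothetical hostname
            "http": {"paths": [
                {   # send /orders traffic to the (hypothetical) orders Service
                    "path": "/orders",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": "orders",
                                            "port": {"number": 80}}},
                },
                {   # send /catalog traffic to the catalog Service
                    "path": "/catalog",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": "catalog",
                                            "port": {"number": 80}}},
                },
            ]},
        }],
    },
}
print(yaml.safe_dump(ingress, sort_keys=False))
```

An ingress controller watches resources like this one and rewrites its proxy's configuration so that requests reaching the cluster get routed to the right pods.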
The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise. So with all these choices around ingress controllers, how do you actually decide? Well, the reality is the ingress specification is very limited. And the reason for this is that getting traffic into a cluster has a lot of nuance to it, and it turns out it's very challenging to create a generic one-size-fits-all specification because of the vast diversity of implementations and choices that are available to end users. And so you don't see ingress specifying anything around resilience; if you want to specify a timeout or rate-limiting, it's not possible. Ingress is really limited to support for HTTP, so if you're using gRPC or web sockets, you can't use the ingress specification. Different ways of routing, authentication, the list goes on and on. And so what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways. NGINX ingress actually uses a combination of config maps and the ingress resources, plus custom annotations that extend the ingress to let you configure a lot of the additional functionality that is exposed in the NGINX ingress. With Ambassador, we actually use custom resource definitions, different CRDs that extend Kubernetes itself, to configure Ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes. So when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate and tell you if you're actually applying a valid schema and format for your Ambassador configuration. And as I previously mentioned, Ambassador is built on Envoy Proxy. Istio Gateway also uses CRDs, though they're an extension of the service mesh CRDs as opposed to dedicated gateway CRDs, and again, Istio Gateway is built on Envoy Proxy. So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. So what's the difference between an ingress controller and an API gateway? To recap, an ingress controller processes Kubernetes ingress routing rules. An API gateway is a central point for managing all your traffic to Kubernetes services. It typically has additional functionality such as authentication, observability, a developer portal, and so forth. So what you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality, such as authentication, observability, or a developer portal, that you would typically associate with an API gateway. So generally speaking, API gateways that run on Kubernetes should be considered a superset of an ingress controller. But if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller. So what's the difference between a service mesh and an API gateway? An API gateway is really focused on traffic into and out of a cluster. The colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster, East/West traffic.
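To illustrate the two extension styles just described, here is a rough sketch of the same concerns expressed both ways. The annotation keys follow the community NGINX ingress controller, and the Mapping follows Ambassador's v2 CRD, both from memory, so exact field names may vary by version; the "orders" service is hypothetical.

```python
# Style 1: NGINX ingress extends the limited core spec with free-form
# string annotations on the Ingress resource (opaque to Kubernetes).
nginx_style = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "orders-routes",
        "annotations": {
            # timeouts and rate limits are not in the ingress spec,
            # so they ride along as annotations
            "nginx.ingress.kubernetes.io/proxy-read-timeout": "30",
            "nginx.ingress.kubernetes.io/limit-rps": "10",
        },
    },
    # ...spec.rules as in the earlier example...
}

# Style 2: Ambassador extends Kubernetes itself with a CRD, so the
# schema below can be validated at kubectl apply time.
ambassador_style = {
    "apiVersion": "getambassador.io/v2",
    "kind": "Mapping",
    "metadata": {"name": "orders-mapping"},
    "spec": {
        "prefix": "/orders/",
        "service": "orders:80",
        "timeout_ms": 30000,  # resilience settings are first-class fields
    },
}
```

The practical difference is where validation happens: annotations are opaque strings that only the controller understands, while a CRD gives Kubernetes a schema it can check before the configuration is ever accepted.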
All service meshes need an API gateway. Istio includes a basic ingress or API gateway called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both meshes and gateways. Docker Enterprise provides an Envoy-based solution out of the box, the Istio Gateway. The reason Docker does this is because, as I mentioned, Kubernetes doesn't come packaged with an ingress. It makes sense for Docker Enterprise to provide something that's easy to get going with, no extra steps required: with Docker Enterprise, you can deploy it and get it exposed on the internet without any additional software. Docker Enterprise can also be easily upgraded to Ambassador, because they're both built on Envoy, which ensures consistent routing semantics. And also with Ambassador, you get greater security for single sign-on; there's a lot of security by default that's configured directly into Ambassador, better control over TLS, things like that. And then finally, there's commercial support that's actually available for Ambassador. Istio is an open source project that has a very broad community, but no commercial support options. So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure that you choose something that works well for you. And I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations around how to choose that API gateway include functionality, such as how does it do with traffic management and observability? Does it support the protocols that you need? Also nonfunctional requirements, such as: does it integrate with your workflow? Can you get commercial support for it? An API gateway is focused on North/South traffic, so traffic into and out of your Kubernetes cluster. A service mesh is focused on East/West traffic, so traffic between different services inside the same cluster. Docker Enterprise includes Istio Gateway out of the box. It's easy to use, but can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.
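For reference, the Istio Gateway mentioned above is itself configured through a CRD, roughly along these lines (a sketch assuming the networking.istio.io/v1alpha3 schema; the hostname is hypothetical):

```python
# Sketch of an Istio Gateway resource: the mesh's own front door for
# North/South traffic, configured as a CRD like the examples above.
istio_gateway = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "Gateway",
    "metadata": {"name": "example-gateway"},
    "spec": {
        # bind this config to the mesh's ingress gateway deployment
        "selector": {"istio": "ingressgateway"},
        "servers": [{
            "port": {"number": 80, "name": "http", "protocol": "HTTP"},
            "hosts": ["api.example.com"],  # hypothetical host
        }],
    },
}
```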
Christine Yen, Honeycomb.io | DevNet Create 2018
>> Announcer: Live from the Computer History Museum in Mountain View, California. It's theCUBE, covering DevNet Create 2018. Brought to you by Cisco. >> Hey, welcome back, everyone. This is theCUBE, live here in Mountain View, California, heart of Silicon Valley, for Cisco's DevNet Create. This is their Cloud developer event. It's not the main Cisco DevNet, which is more for the Cisco developer; this is much more Cloud Native DevOps. I'm joined by my cohost, Lauren Cooney, and our next guest is Christine Yen, who is co-founder and Chief Product Officer of Honeycomb.io. Welcome to theCUBE. >> Thank you. >> Great to have an entrepreneur and also Chief Product Officer, because you blend in the entrepreneurial zeal, but also you've got to build the product in the Cloud Native world. You guys have done a few ventures before. First, take a minute and talk about what you guys do, what the company is built on, what's the mission? What's your vision? >> Absolutely. Honeycomb is an observability platform to help people find the unknown unknowns. Our whole thesis is that the world is getting more complicated. We have microservices and containers, and instead of having five application servers that we treated like pets in the past, we now have 500 containers running that are more like cattle, where any one of them might die at any given time. And we need our tools to be able to support us in figuring out, when something happens, what happened and why, and how do we resolve it? We look around at the landscape and we see this dichotomy out there: we have logging tools and we have metrics tools. And those really evolved from the fact that in 1995, we had to choose between grep or counters. And as technology evolved, those evolved to distributed grep or RRDs. And then we have distributed grep with fancy UIs and, well, fancy RRDs with UIs. And Honeycomb, we were started a couple years ago. We really feel like, what if you didn't have to choose? What if technology supported the power of having all the context there the way that you do with logs, while still being able to provide instant analytics the way that you have with metrics? >> So the problem that you're solving is, one, antiquated methodologies from old architectures and stacks, if you will, and helping people save time with the arcane tools. Is that the main premise? >> We want people to be able to debug their production systems. >> All right, so, beyond that now, the developer that you're targeting, can you take us through a day in the life of where you are helping them, vis-a-vis the old way? >> Absolutely, so I'll tell a story of when myself and my co-founder, Charity, were working together at Parse. Parse, for those who aren't familiar, was a backend for mobile apps. You can think of someone who just wants to build an iOS app and doesn't want to deal with data storage, user records, things like that. Parse started in 2011, got bought by Facebook in 2013, and was spun down at the very beginning of 2016. And in 2013, when the acquisition happened, we were supporting somewhere on the order of 60,000 different mobile apps. Each one of them could be a totally different workload, a totally different usage pattern, and any one of them might be experiencing problems. And again, in this old world, this pre-Honeycomb world, we had our top level metrics. We had latency, response, overall throughput, error rates, and we were very proud of them. We were very proud of these big dashboards on the wall that were green.
And they were great, except when you had a customer write in being like, "Hey, Parse is down." And we'd look at our dashboard and be like, "Nope, it's not down. It must be network issues." >> John: That's on your end. >> Yeah, that's on your end. >> John: Not a good answer. >> Not a good answer, and especially not if that customer was Disney, right? When you're dealing with these high level metrics, and you're processing tens or hundreds of thousands of requests per second, when Disney comes in, they've got eight requests a second and they're seeing all of them fail. Even though those eight requests per second are really important, you can't tease them out of your graphs. You can't figure out why they're failing, what's going on, how to fix it. You've got to dispatch an engineer to go add a bunch of "if app ID equals Disney" statements, track it down, figure out what's going on there. And it takes time. And when we got to Facebook, we were exposed to a type of tool that essentially inspired Honeycomb as it is today, that let us capture all this data, capture a bunch of information about everything that was happening, down to these eight requests per second. And when a customer complained, we could immediately isolate: oh, this one app, okay, let's zoom in. For this one customer, this tiny customer, let's look at their throughput, error rates, latency. Oh, okay. Something looks funny there, let's break down by endpoint for this customer. And it's this fast, iterative, highly granular investigation that all of us are moving toward today. With our systems getting more complicated, you need to be able to isolate: okay, I don't care about the 200s, I only care about the 500s, and within the 500s, then what's going on? What's going on with this server, with that set of containers?
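A toy sketch of the investigation pattern described here: keep raw, structured events, filter down to one customer however small their traffic, then break their failures down by another dimension. The field names and values are hypothetical.

```python
# Aggregate dashboards average away a small customer's failures; raw
# structured events let you slice to exactly that traffic.
from collections import Counter

events = [  # hypothetical structured request events
    {"app_id": "disney", "endpoint": "/push",  "status": 500, "ms": 1830},
    {"app_id": "disney", "endpoint": "/push",  "status": 500, "ms": 1904},
    {"app_id": "disney", "endpoint": "/query", "status": 200, "ms": 52},
    {"app_id": "zynga",  "endpoint": "/query", "status": 200, "ms": 41},
    # ...hundreds of thousands more per second...
]

# Step 1: isolate the one complaining customer.
disney = [e for e in events if e["app_id"] == "disney"]

# Step 2: break their failures down by endpoint to localize the problem.
failures = Counter(e["endpoint"] for e in disney if e["status"] >= 500)
print(failures.most_common())  # [('/push', 2)]
```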
So, if most people think of like, "Oh, I'll just ingest with Splunk," but that's a different, is that different? I mean, 'cause people think of Splunk and they think of Redshift and Kinesis on Amazon, they go, "Okay." Is that the solution? Are you guys different? Are you a tool? How do I understand you guys' context to those known solutions? >> First of all, explain the difference between ourselves and the Redshifts and big queries of the world, and then I'll talk about Splunk. We really view those tools as primarily things built for data scientists. They're in the big data realm, but they are very concerned with being 100% correct. They're concerned with fitting into big data tools and they often have an unfortunate delay in getting data in and making it acquirable. Honeycomb is 100% built for engineers. Engineers of people, the folks who are going to be on the hook for, "Hey, there's downtime, what's going on?" And in-- >> So once business benefits, more data warehouse like. >> Yeah. And what that means is that for Honeycomb, everything is real time. It's real time. We believe in recent data. If you're looking to get query data from a year ago we're not really the thing, but instead of waiting 20 minutes for a query over a huge volume of data, you wait 10 seconds, or it's 3:00 AM and you need to figure out what's happening right now, you can go from query to query, to query, to query, as you come up with hypotheses, validate them or invalidate them, and continue on your investigation path. So that's... >> That makes sense. >> Yeah. >> So data wrangling, doing queries, business intelligence, insights as a service, that's all that? >> Yeah. We almost, we played with and tossed the tagline BI for systems because we want that BI mentality of what's going on, let me investigate. But for the folks who need answers now, an approximate answer now is miles better than a perfect one-- >> And you can't keep large customers waiting, right? At the end of the day, you can't keep the large customers waiting. >> Well, it's also so complicated. The edge is very robust and diverse now. I mean, no-js is a lot of IO going on for instance. So let's just take an example. I had developer talking the other day with me about no-js. It's like, oh, someone's complaining but they're using Firefox. It's like, okay, different memory configuration. So the developer had to debug because the complaints were coming in. Everyone else was fine, but the one guy is complaining because he's on Firefox. Well, how many tabs does he have open? What's the memory look like? So like, this a weird thing, I mean, that's just a weird example, but that's just the kinds of diverse things that developers have to get on. And then where do they start? I mean. >> Absolutely. So, there's something we ran into or we saw our developers run into all the time at PaaS, right? These are mobile developers. They have to worry about not only which version of the app it is, they have to worry about which version of the app, using which version of RSDK on which version of the operating system, where any kind of strange combination of these could result in some terrible user experience. And these are things that don't really work well if you're relying on pre-aggregated 10 series system, like the evolution of the RDS, I mentioned. And for folks who are trying to address this, something like Splunk, these logging tools, frankly, a lot of these tools are built on storage engines that are intended for full text search. 
They're unstructured text, you're grepping over them, and then you build indices and structure on top of that. >> There's some lag involved too in that. >> There's so much lag involved. And there's almost this negative feedback loop built in, where if you want to add more data, if on each log line you want to start tracking browser user agent, you're going to incur not only extra storage costs, you're going to incur extra read time costs, because you're reading that much more data, even if you don't even care about it on those queries. And you're probably incurring cost at write time to maintain these indices. Honeycomb, we're a column store through and through. We do not care about your unstructured text logs; we really don't want them. We want you to structure your data-- >> John: Did you guys write your own column store, or is that? >> We did write our own column store, because ultimately there's nothing off the shelf that gave us the speed that we wanted. We wanted to be able to say, hey, send us data blobs with 20, 50, 200 keys. But if you're running analysis and all you care about is a simple filter and a count, you shouldn't have to pull in all this-- >> So it becomes sort of like a Ferrari: if you customize it, it's really purpose-built. Is that what you guys did? >> That is. >> So talk about the dynamic, because now you're dealing with things like, I mean, I had a conversation with someone who's looking at, say, blockchain, where there's some costs involved, obviously, writing to the blockchain. And this is not like a crypto thing, it's more of a supply chain thing. They want visibility into latency and things of that nature. Does this sound like you would fit there as a potential use case? Is that something that you guys thought of at all? >> It could absolutely be. I'm actually not super familiar with blockchain or blockchain based applications, but ultimately Honeycomb is intended for you to be able to answer questions about your system that tend to stymie existing tools. So we see lots of people come to us from strange use cases who just want to be able to instrument: "Hey, I have this custom logic. I want to be able to look at what it's doing." And when a customer complains and my graphs are fine, or when my graphs are complaining, being able to go in and figure out why. >> Take a minute to talk about the company you founded. How many employees, funding, if you can talk about it. And use case customers you have now. And how do you guys engage? The service, is it, do I download code? Is it SaaS? I mean, you got all this great tech. What's the value proposition? >> I think I'll answer this-- >> John: Company first. >> All right. >> John: Status of the company. >> Sure. Honeycomb is about 25 people, 30 people. We raised a series A in January. We are about two and a half years old, and we are very much SaaS of the future. We're very opinionated about a number of things and how we want customers to interact with us. So, we are SaaS only. We do offer a secure proxy option for folks who have PII concerns. We only take structured data. You can use whatever you want to slurp data from your system, but at our API, we want JSON. We do offer a wide variety of integrations, connectors, and SDKs to help you structure that data. But ultimately-- >> Do you provide SDKs to your customers? >> We do, so that if they want to instrument their application, we just have the niceties around batching and doing things asynchronously, so it doesn't block their application.
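A minimal sketch of why the column layout makes "a simple filter and a count" cheap: each field lives in its own array, so a query reads only the columns it touches and never pulls in the rest of a 200-key event. The data is hypothetical.

```python
# Column-store sketch: one array per field instead of one blob per event.
columns = {  # hypothetical columnar storage for five events
    "status":     [200, 500, 200, 503, 200],
    "endpoint":   ["/a", "/b", "/a", "/b", "/c"],
    "user_agent": ["..."] * 5,  # never read by the query below
    # ...potentially dozens or hundreds more columns...
}

# SELECT COUNT(*) WHERE status >= 500 AND endpoint = "/b"
hits = sum(
    1
    for status, endpoint in zip(columns["status"], columns["endpoint"])
    if status >= 500 and endpoint == "/b"
)
print(hits)  # 2
```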
But ultimately, we try to meet folks where they're at, and it's 2018 now-- >> You have a hardened API; the API pretty much defines your service from an inbound standpoint. Prices, cost, how does someone engage with you guys? When does someone know to engage? Where are the smoke signals? When is the house on fire? Is it like people are standing around? What's the problem? When does someone know to call you guys up? >> People know to call us when they're having production problems that they can't solve, when it takes them way too long to go from there's an alert that went off, or a customer complaint, to, "Oh, I found the problem, I can address it." We price based on storage. We are a bunch of engineers; we try to keep the business side as simple as possible, for better or for worse. And so, the more data you send us, the more it'll cost. If you want a lot of data stored for a short period of time, that will cost less than a lot of data stored for a long period of time. One of the things, another one of the approaches that is possibly more common in the big data world and less in the monitoring world, is we talk a lot about sampling. Sampling as a way to control those costs. Say you are Facebook; again, I'll return to that example. Facebook knew that in this world where lots and lots of things can go wrong at any point in time, you need to be able to store the actual context of a given event happening. Some unit of work, you want to keep track of all the pieces of metadata that make that piece of work unique. But at Facebook scale, you can't store every single one of them. So, all right, you start to develop these heuristics. What things are more interesting than others? Errors are probably more interesting than 200 okays. Okay, so we'll keep track of most errors, we'll store 1% of successful requests. Okay, well, within that, what about errors? Well, things that time out are maybe more interesting than things that are permissioning errors. And you start to develop this sampling scheme that essentially maps to the interestingness of the traffic that's flowing through your system. To throw out some numbers, I think-- >> Machine learning is perfect for that too. They can then use the sampling. >> Yeah. There's definitely some learning that can happen to determine what things should be dropped on the ground, what requests are perfectly representative of a large swath of things. Instagram used a tool like this inside Facebook. They stored something like 1/10 of a percent or 1/100 of a percent of their requests, 'cause simply that was enough to give them a sketch of representative traffic, what's going wrong, or what's weird and worth digging into. >> Final question. What are your priorities for the product roadmap? What are you guys focused on now? Got some fresh funding, that's great, so expand the team, hiring probably. Like, product: what's the focus on the product? >> The focus on the product is making this mindset of observability accessible to software engineers. Right, we're entering this world where more and more, it's the software engineers deploying their code, pushing things out in containers. And they're going to need to also develop this sense of, "Okay, well, how do I make sure something's working in production? How do I make sure something keeps working? And how do I think about correctness in this world where it's not just my component, it's my component talking to these other folks' pieces?"
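A sketch of the interestingness-weighted sampling scheme outlined a moment ago. The rates are hypothetical; the important detail is that each kept event records its sample rate, so it can stand in for that many real events when counts are computed later.

```python
# Keep all errors, a sliver of successes, and record the sample rate so
# aggregate counts can be re-inflated honestly at query time.
import random

def sample_rate_for(event):
    if event["status"] >= 500:
        return 1    # keep every server error
    if event["status"] == 408:
        return 10   # timeouts: keep 1 in 10 (hypothetical rate)
    return 100      # plain successes: keep 1 in 100

def maybe_keep(event):
    rate = sample_rate_for(event)
    if random.randrange(rate) == 0:
        event["sample_rate"] = rate  # needed for re-weighting later
        return event
    return None  # dropped on the floor

# At query time, estimate true totals by summing sample rates:
#   estimated_total = sum(e["sample_rate"] for e in kept_events)
```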
We believe really strongly that the era of the single person in a room keeping everything up is outdated. It's teams now, it's on call rotations. It's handing off the baton and sharing knowledge. One of the things that we're really trying to build into the product, and we're hoping that this is the year that we can really deliver on this, is this feeling of: I might not be the best debugger on the team, or I might not be the best constructor of graphs on the team, and John, you might be. But how can a tool help me, as a new person on a team, learn from what you've done? How can a tool help me be like, oh man, last week when John was on call, he ran into something around MySQL also. History doesn't repeat, but it rhymes. So how can I learn from the sequence of those things-- >> John: Sounds like an expert system. >> Yeah. Like, how can we help build experts? How can we raise entire teams to the level of the best debugger? >> And that's the beautiful thing with metadata; metadata is a wonderful thing. 'Cause Jeff Jonas, a CUBE alumni and famous data entrepreneur, said observation space is super critical for understanding how to make AI work. And that's to your point: having observation data is super important. And of course our observation space is all things. Here at DevNet Create, Christine, thanks for coming on theCUBE, spending the time. >> Thank you. >> Fascinating story, great new venture. Congratulations. >> Christine: Thank you. >> And tackling the world of making developers more productive in real time, in production, really making an impact for coders, and sharing and learning. Here on theCUBE, we're doing our share: live coverage here in Mountain View, DevNet Create. We'll be back with more after this short break. (gentle music)
Tom Joyce, Pensa | CUBEConversation, Feb 2018
(techy music playing) >> Hi, I'm Peter Burris, and welcome to another CUBEConversation. I'm here with Tom Joyce, CEO of Pensa, from our beautiful theCUBE Studios in Palo Alto, and we're talking a bit about some of the trends and, most importantly, some of the real business value reasons behind some of the new network virtualization technologies. But before we get there, Tom, tell us a little bit about yourself. How did you get here? >> Okay, thank you, Peter, thanks for having me in today. I am CEO of Pensa. I've been there for about six months; the company's about three years old, so I joined them when a lot of the engineering work had already been done, and I've been around the tech industry, mostly on the enterprise side, for a long time. I worked with Hewlett-Packard in a number of different roles, I worked at Dell, I worked at EMC and a number of startups. So, I've been through, you know, a lot of different transitions in tech, as you have, over the years, and got excited about this because I think we're on the cusp of a number of big transitions, with some of the things that are coming down the road that make a company like Pensa really interesting and give it a lot of potential. So, it's been a tremendous amount of fun working in startup land again. >> So, what does Pensa do? >> So, Pensa is a software company, and again, we're based here in Mountain View. Most of our operations are here. We also have an engineering team over in India, and they're all people that are focused on networking technology. They have a long history there, and what we do is help primarily service providers, you can think of the classic telecommunications industry, but also other modern service providers, build modern networks. We're very focused on network functions virtualization technology, or NFV, which is about building network services that are highly flexible, in software, that you can deploy on industry standard server technologies, you know, kind of cloud native network service development, as opposed to what many folks have done with hardware-based and siloed networking technologies in that industry for a really long time. So, what we help them do is use intelligent automation to make it easy to build those things in incredible combinations, with a lot of complexity, but do it fast, do it correctly every time, and deliver those network services in a way that they can actually transform their businesses and develop new apps a lot faster than they could otherwise. >> So, Tom, I got to tell you, I'm an analyst, I've been around for a long time, and every so often someone comes along and says, "Yeah, the tel-cos are finally going to break out of their malaise and do something different," yet they never quite get there. What is it about this transition that makes it more likely that they succeed at becoming more than just a hauler of data, to an actual digital services provider? >> Yeah, I mean, it's an excellent question. Frankly, you know, it's one that I face all the time. You know, as you traffic around Silicon Valley, people are focused on certain hot topics, and you know, getting folks to understand that we are at a cusp point where this industry's going to fundamentally change, and there's a huge amount of money that's actually being spent and a lot more coming.
You know, a lot of folks who don't spend their time there every day don't necessarily realize what's happening in these communications service providers, which, you know, we used to call tel-cos, because what's happened, and I think, you know, I'm interested in your perspective on this, is that over time you see long periods in that industry where things don't change, and then everything changes at once. >> Yeah. >> We've seen that many, many times, you know, and the disruptions in that industry, which were very public, you know, 15 years ago and then another 10 years before that, those were trigger points when the industry had to change, and we strongly believe that we're at that point right now, where if you look at the rest of, like, enterprise IT, where I've spent most of my career, we've gone through 15 years of going from hardware-based, proprietary, siloed to software-based, industry standard servers, cloud, and cloud native. >> Peter: And service-based. >> And service-based, right, and the business formerly known as tel-co is late to the party, you know. So, it's almost like that industry is the last domino to fall in this transition to new technology, and right now they're under enormous pressure. They have been for a while. I mean, I think if you look at the industry, it's a trillion dollar plus business that touches basically every business and every person in the world, and every business and every person has gone to wireless and data from wire-line and the old way of doing things, and these service providers have pretty much squeezed as much as they can possibly get out of the old technology model, while doing a great job of adapting to wireless and delivering new services. But now there's a whole new wave of growth coming, and there's new technologies coming that the old model won't adapt to, and so frankly, the industry's been trying to figure this out for about five years, through standards and cooperation and investment and open-source stuff, and it's kind of only at the point now where a lot of these technologies work, but our job is to come in and figure out how do we make them, you know, work in production. How do we make it scalable? And so, you know, that's why we're focused there: because there's an enormous amount of money that gets spent here, there are real problems, and it's not crowded with startups, you know. We have kind of a free shot on goal to actually do something big, and that's why I'm excited about being part of this company. >> Well, the network industry has always, unlike the server and storage industries, been a series of step functions, and it's largely because of exactly what you said: the tel-cos, which I'll still call tel-cos, but those network service providers, historically have tied their services and their rates back to capital investments. >> Tom: You're right, yeah. >> And so they'd wait and they'd wait and they'd wait before they pulled the trigger on that capital investment-- >> Tom: Mm-hm. >> Because there was no smooth way of doing it. >> Tom: Right, yeah. >> And so as a consequence you've got these horrible step functions, and customers, enterprises, like a smoother set of transitions, >> Tom: Yeah. and so it's not surprising that more of the money's been going to the server and the storage guys and the traditional networking types of technologies. >> Mm-hm, yeah. >> But this raises an interesting question. Does some of the technology that you're providing make it possible for the tel-co or the network service provider... >> Tom: Yeah, yeah.
>> To say, "You know what, I can use NFV "as a way of smoothing out my investments "and enter into markets faster with a little bit "more agility so that I can make my customers "happy by showing a smoother program forward." You know, make my rates, adjust my rates accordingly, but ultimately be more likely to be successful because I don't have to put two or three or $10 billion behind a new service. I can put just what's needed and use NFV to help me scale that. >> That's exactly right, I mean, we're really bringing software programmability and devops kinds of capabilities to this industry, us and other folks that are involved in this, you know, this transition, which we think is enormous. I mean, it's probably one of the biggest transitions that's left to happen in tech, and the old model of set it and forget it. I put in my hardware based router, my switch, build out my, make a big investment, that step function you talked about, and depreciate it over a long period of time doesn't work it anymore, because during that long period of time new opportunities emerge, and these communication service providers haven't gotten all the growth because other people have jumped into those opportunities, the over-the-top people, the Netflixes, probably increasingly cloud players and saying we're going to take that growth, and so if you're one of these... You know, there's a few hundred large communication service providers throughout the world. This is an existential problem for them. They have to figure out how to adapt, so when the next thing comes along they can reprogram that network. You know, if there's an opportunity to drop a server in a remote branch and offer a whole range of services on it, they want to be able to continually reprogram that, update those, and you know, we've seen the first signs of that, we saw-- >> And let me stop-- >> Right, as an example of that. >> But not just take a hardware approach to adding new services and improving the quality of the experience that the customers have. >> That's exactly right, they want to have software programmability. They want to behave like everybody else in the world now-- >> Right. >> And take advantage, frankly, of a lot of things that have been proven to work in other spheres. >> So, the fundamental value proposition that you guys are providing to them is bring some of these new software disciplines to your traditional way of building out your infrastructure so that you can add new services more smoothly, grow them in a way that's natural and organic, establish rates that don't require a 30-year visibility in what your capital expenses are. >> That's right, I mean, so one of our, you know, our flagship customers is Nokia. Nokia you can think about as kind of a classic network equipment supplier to many of those service providers, but they also provide software based services through things like Nuage that they own and some things they got from Alcatel-Lucent, and they do system integration and they've been kind of on the leading edge in using our technology to help with that of saying, "Look, let's deliver you "industry standard, intel-based servers, "running network functions in software," and what we help them do is actually design, validate, build those capabilities that they ship to their customers, and you know, without something like Pensa... Somebody has to go in and code it up. Somebody has to really understand how to make these different parts work together. I've got a router from one place. 
I've got a virtual network function from someplace else. Interoperability is a challenge. We automate all of that. >> Peter: Right. >> And we're using intelligence to do it, so you can kind of go much faster than you otherwise could. >> Which means that you're bringing value to them and at the same time essentially fitting their operating model of how they operate. >> Exactly, yeah. >> So, you're not forcing dramatic change in how they think about their assets, but there are some real serious changes on the horizon: 5G, net neutrality and what that means, and whether or not these service providers are going to be able to enter into new markets. So it does seem like there's a triple witching hour here, of the need for new capital investment, because those new services are going to be required, and there are new competitors coming after them. We like to think, or we think in many respects, that the companies really in AWS's crosshairs are the tel-cos, and you guys are trying to give them an approach so that they can introduce new agility, or be more agile, introduce some services, and break that bond of rate-based, capital investment-based innovation. >> Yeah, exactly right, and also, frankly, break the bond of having to buy everything from the same tel-co equipment providers they've used for the last 20 years, at extraordinary margins. People want to have flexibility to combine things in different combinations as these changes hit. You know, 5G, you mentioned, is probably the biggest one, you know, and I'd say even a year ago it was clearly on the horizon but way out in the distance, and now almost every day you're seeing production deployments in certain areas, and it is going to fundamentally change how the relationship works between businesses and consumers and the service providers and the cloud people. All of a sudden you have the ability to slice up a network, you have the ability to program it remotely, you have the ability to deliver all kinds of new video-based apps, and there's a whole bunch of stuff we can't even conceive of. The key thing is you need to be able to program it in software and change it when change is required, and they don't have that without technology like this. >> That's right, and 5G provides that density of services that can actually truly be provided in a wireless way. >> Exactly. >> All right, but so this raises an issue. Look, we're talking about big problems here. These are big, big, big problems, and no company, let alone Pensa, has unlimited resources. >> Tom: Hm. >> So, where are you driving your engineers and your team to place their design and engineering bets? >> Yeah, I mean, look, there's clearly a set of problems that need to be solved, and then there are some things that we do particularly well. We have some technology that we think is actually unique in a couple of areas. Probably the heart of it is intelligently validating that the network you designed works. So, let's say you are a person in a service provider, or you're an SI providing a solution to a service provider. You make choices based on the requirements, because you're a network engineer: I'm going to use this router, I'm going to use a Palo Alto Networks firewall, I'm going to use NGINX, I'm going to use Nuage, whatever that combination is, so I've got my network service. Very often they don't have a way to figure out that it's going to work when they deploy it. >> Peter: Hm.
>> And we build, effectively, models for every single element and understand the relationships of how they work together. So, we can, you know, pretty much on-the-fly validate that a new network service is going to work. The next thing we do is go match that to the hardware that's required. I mean, servers, you know, they're not all the same, and configurations matter. I mean, we know that obviously from the enterprise space, and we can make sure that, for what you're actually intending to deploy, you have a server configuration or underlying network infrastructure that can support it. So, our goal is to say, you know, we do everything, frankly, from importing network services, or onboarding them from different vendors, and testing them from an interoperability standpoint, to helping you do the design, but the real heart of what we do is in that validation area. I think the key design choice that we are making, and frankly, have had to make, is to be integratable and interoperable, and what that means is, you know, these service providers are working with multiple different other vendors. They might have two different orchestration software platforms. They might have some old stuff they want to work with. What we're going to do is kind of be integratable with all of the major players out there. We're not going to come in and force, you know, our orchestrator down your throat. We're going to work with all of the major open-source ones that are there and be integratable with them. You know, we believe strongly in kind of an API economy, where we've got to make our APIs available and be integratable, because, as you said, it's a big problem. We're not going to solve it all ourselves. We've got to work with the other choices that one of these customers makes. >> So, we like to say at Wikibon that in many respects the goal of some of these technologies, the NFV software defined networking technologies, needs to be to move away from the device being the primary citizen to truly the API being the primary citizen. >> Mm-hm. >> People talk about the network economy without actually explaining what it means. Well, in many respects what it really means is networks of APIs. >> Tom: Yes. >> Is that kind of the direction that you see your product going, and how are you going to rely on the open-source community, or not, to get there? Because there's a lot of ancillary activity going on in creating new inventive and innovative capabilities. >> Yeah, I think, I mean, that's a really big question, and to kind of tackle the key parts of it in my mind... You know, open-source is extremely valuable, and if you were a communication service provider, you may want to use open-source because it gives you the ability to innovate. You can have your programmers go in and make changes and do something other folks might not do, but the other side of the coin for these service providers is they need it to be bullet-proof. >> Peter: Right. >> They can't have networks that go down, and that's the value of validation and proving that it works, but they also need commercial software companies to be able to work with the major open-source components and bring them together in a way that, when they deploy it, they know it's going to work, and so we've joined the Linux Foundation. We're one of the founding members of LF Networking, which now has OPNFV, and ONAP, and a number of other critical programs, and we're working with them.
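To make the validation idea concrete, here is a deliberately simplified sketch, not Pensa's actual engine: each element in a service chain declares what it consumes, produces, and needs in resources, and the chain is checked link by link and against a target server configuration. All names and numbers are hypothetical.

```python
# Toy validation of an NFV service chain: check interface compatibility
# between adjacent elements and that the chain fits the target server.
from dataclasses import dataclass

@dataclass
class VNF:
    name: str
    consumes: set   # interface types accepted on ingress
    produces: str   # interface type emitted on egress
    vcpus: int

chain = [  # hypothetical chain: router -> firewall -> load balancer
    VNF("router",   consumes={"wan"}, produces="l3",   vcpus=4),
    VNF("firewall", consumes={"l3"},  produces="l3",   vcpus=8),
    VNF("nginx-lb", consumes={"l3"},  produces="http", vcpus=2),
]

def validate(chain, host_vcpus):
    errors = []
    for upstream, downstream in zip(chain, chain[1:]):
        if upstream.produces not in downstream.consumes:
            errors.append(f"{upstream.name} -> {downstream.name}: "
                          "incompatible interfaces")
    if sum(v.vcpus for v in chain) > host_vcpus:
        errors.append("chain does not fit the target server configuration")
    return errors or ["ok"]

print(validate(chain, host_vcpus=16))  # ['ok']
```

A real system would model far more (memory, NICs, replica counts, vendor quirks), but the shape of the check, validating the combination before it ships, is the point.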
We've also joined OSM, which is part of ETSI, the European Telecommunications Standards Institute, which is another big standards organization. I'm not aware of another company in our space, or related to NFV, that's working with both, and so we feel positively about open-source, but we think that there's a role for commercial software companies to help make it bullet-proof for that buyer. If you are a very large service provider, you want somebody that you can work with that will stand behind it and support it, and that's what we intend to do. >> Well, as you said, your fundamental value proposition sounds like, yeah, you're doing network virtualization, you're adding the elements required for interoperability and integration, but also you're adding that layer of operational affinity to how tel-cos, or how service providers, actually work. >> Tom: Mm-hm. >> That is a tough computing model. I don't know that open-source is going to do that. There's always going to be a need to try to ensure that all these technologies can fit into the way a business actually works. >> Tom: Yeah. >> And that's going to be a software, an enterprise software approach, whoever the target customer is. Do you agree? >> Yeah, it takes a great partnership between the open-source community, commercial software companies like us, and the service providers-- >> Peter: Right. >> To build this thing, and we've seen that happen in enterprise. DevOps was that kind of phenomenon. You have winning commercial software providers, you have a lot of open-source, and you have the users themselves, and we think a lot of those concepts are coming into this service provider space, and you know, for us, at the end of the day it's all about having the ability to get people to do their job faster. You know, if things change in the industry, a service provider using Pensa, or an SI using Pensa, can design, validate, build, and run that next thing and blow it out to their network faster than anybody else. >> Peter: Time to value. So, it's time to value. >> Right, time to value. >> And certainty that it'll work. >> And in many respects, at the end of the day we all want to be big digital businesses, but if you don't have a network that supports your digital business, you don't have a digital business. >> That is correct. >> All right, so last question. >> Tom: Yes. >> Pensa two years from now... >> Tom: Hm. >> What does it look like? >> Yeah, I think our goal right now is to line up with some of the leading industry players here, you know, folks that service those large service providers and help them build these solutions and do it faster. I think our goal over the next two years is to become a control point for service providers and, again, folks like SIs that work for them and sometimes help run their networks for them. Give them a control point to adapt to new opportunities and respond to new threats, by being able to rapidly change and modify and roll out new network services for new opportunities. You know, the thing we learned in the whole mobile transition is you really can't conceive of what's next. What's next two years from now in this space, who knows? You know, if your model is buy a bunch of hardware and depreciate it over five years, you won't be able to adapt. We want to be-- >> You do know that. >> We know that, you know, we want to be one of those control points-- >> Peter: Right. >> That helps you do that quickly, without having to go wade into the code. You know, so our goal is to allow...
You know, our whole tagline is "think faster," which means use intelligent technology to drive your business faster, and that's what we intend to be in two years. >> Excellent, Tom Joyce, CEO of Pensa. Thanks very much for being on theCUBE. >> Thank you very much. >> And for all of you, this is Peter Burris. Once again, another great CUBEConversation from our Palo Alto Studios. Look forward to seeing you on another CUBEConversation. (techy music playing)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tom Joyce | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
OSM | ORGANIZATION | 0.99+ |
India | LOCATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Hewlett-Packard | ORGANIZATION | 0.99+ |
30-year | QUANTITY | 0.99+ |
European Telecommunications Standards Institute | ORGANIZATION | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Mountain View | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Pensa | ORGANIZATION | 0.99+ |
Nuage | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Feb 2018 | DATE | 0.99+ |
15 years | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Alcatel-Lucent | ORGANIZATION | 0.98+ |
a year ago | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Wikibon | ORGANIZATION | 0.98+ |
about five years | QUANTITY | 0.98+ |
15 years ago | DATE | 0.97+ |
first signs | QUANTITY | 0.97+ |
about six months | QUANTITY | 0.96+ |
CUBEConversation | EVENT | 0.96+ |
two years | QUANTITY | 0.94+ |
Netflixes | ORGANIZATION | 0.93+ |
Palo Alto | LOCATION | 0.92+ |
Palo Alto Studios | ORGANIZATION | 0.92+ |
ONAP | ORGANIZATION | 0.9+ |
Palo Alto Networks | ORGANIZATION | 0.87+ |
single element | QUANTITY | 0.86+ |
last 20 years | DATE | 0.86+ |
over five years | QUANTITY | 0.85+ |
about three years old | QUANTITY | 0.84+ |
5G | ORGANIZATION | 0.8+ |
trillion dollar | QUANTITY | 0.77+ |
triple witching hour | QUANTITY | 0.73+ |
theCUBE Studios | ORGANIZATION | 0.71+ |
one place | QUANTITY | 0.71+ |
next two years | DATE | 0.71+ |
CEO | PERSON | 0.7+ |
10 years | DATE | 0.69+ |
hundred large communication service | QUANTITY | 0.67+ |
years | DATE | 0.66+ |
Pensa | PERSON | 0.65+ |
person | QUANTITY | 0.64+ |
Nginx | TITLE | 0.63+ |
Matt Klein, Lyft | KubeCon 2017
>> Narrator: Live from Austin, Texas. It's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Welcome back everyone, live here in Austin, Texas, theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Stu Miniman, our analyst. And next is Matt Klein, a software engineer at Lyft, ride-hailing service, car sharing, social network, great company, everyone knows that everyone loves Lyft. Thanks for coming on. >> Thanks very much for having me. >> All right, so you're a customer of all this technology. You guys built, and I think this is like the shiny use case of our generation, entrepreneurs and techies build their own stuff because they can't get product from the general market. You guys had large-scale demand for the service, you had to go out and build your own with open source and all those tools, you had a problem you had to solve, you built it, used some open source and then gave it back to open source and became part of the community, and everybody wins, you donated it back. This is the future, this is what it's going to be like, great community work. What problem were you solving? Obviously Lyft, everyone knows it's hard, they see their car, lot of real time going on, lot of stuff happening >> Matt: Yeah, sure. >> magic's happening behind the scenes, you had to build that. Talk about the problem you solved. >> Well, I think, you know, when people look at Lyft, like you were saying, they look at the app and the car, and I think many people think that it's a relatively simple thing. Like how hard could it be to bring up your app and say, I want a ride, and you know, get that car from here to there, but it turns out that it's really complicated. There's a lot of real-time systems involved in actually finding what are all the cars that are near you, and what's the fastest route, all of that stuff. So, I think what people don't realize is that Lyft is a very large, real-time system that, at current scale, operates at millions of requests per second, and has a lot of different use cases around databases, and caching, you know, all those technologies. So, Lyft was built on open source, as you say, and, you know, Lyft grew from what I think most companies do, which is a very simple, monolithic stack, you know, it starts with a PHP application, we're a big user of MongoDB, and some load balancer, and then, you know-- >> John: That breaks (laughs) >> Well, well, no, but people do that because that's what's very quick to do. And I think what happened, like most companies that become very successful, is Lyft grew a lot, and like the few companies that can become very successful, they start to outgrow some of that basic software, or the basic pieces that they're actually using. So, as Lyft started to grow a lot, things just didn't actually keep working, so then we had to start fixing and building different things. >> Yeah, Matt, scale is one of those things that gets talked about a lot. But, I mean, Lyft, you know, really does operate at a significant scale.
There's scale in terms of things that people talk about, in terms of data throughput or requests per second, or stuff like that. But there's also people scale, right. So, as organizations grow, we go from 10 developers to 50 developers to 100, where Lyft is now many hundreds of developers and we're continuing to grow, and what I think people don't talk about enough is the human scale, so you know, we have a lot of people that are trying to edit code, and at a certain size, with that number of people, you can't all be editing on that same code base. So that's I think the biggest move where people start moving towards this microservice or service-oriented architecture, so you start splitting that apart to get people scale. People scale usually comes with requests-per-second scale and data scale and that kind of stuff. But these problems come hand in hand, where as you grow the number of people, you start going into microservices, and then suddenly you have actual scale problems. The database is not working, or the network is not actually reliable. So from an Envoy perspective, so Envoy is an open source proxy we built at Lyft, it's now part of CNCF, it's having tremendous uptake across the industry, which is fantastic, and the reason that we built Envoy is what we're seeing now in the industry is people are moving towards polyglot architectures, so they're moving towards architectures with many different applications, or many different languages. And it used to be that you could use Java and you could have one particular library that would do all of your networking and service discovery and load balancing, and now you might have six different languages. So how as an organization do you actually deal with that? And what we decided to do was build an out-of-process proxy, which allows people to build a lot of functionality into one place, around load balancing, and service discovery, and rate limiting, and buffering, and all those kinds of things, and also most importantly, observability. So things like tracing and stats and logging. And that allowed us to actually understand what was going on in the network, so that when problems were happening, we could actually debug what was going on. And what we saw at Lyft, about three years ago, is we had started our microservices journey, but it was actually almost stopped, because what people found is they had started to build services because supposedly it was faster than the monolith, but then we would start having problems with tail latency and other things, and they didn't know how to debug it. So they didn't trust those services, and then at that point they say, not surprisingly, we're just going to go back and we're going to build it back into the monolith. So, we're almost in that situation where things are kind of in that split. >> So Matt, I have to think that's the natural path that led you to service mesh, and Istio specifically, with Lyft, Google, IBM all working on that. Talk a little bit more about Istio; it was really the buzz coming in, with service mesh, and there are also some competing offerings out there, Conduit, a new one announced this week. Maybe give us the landscape, kind of where we are, and what you're seeing. >> So I think service mesh is, it's incredible to look around this conference, I think there's 15 or more talks on service mesh between all of the Buoyant talks on Linkerd and Conduit and Istio and Envoy, it's super fantastic.
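To make the out-of-process proxy idea concrete, here is a minimal sketch, in Python, of what a service call can look like once the networking heavy lifting lives in a local sidecar: the application talks to a proxy on localhost and its only remaining duty is to propagate trace context. The port, logical service name, endpoint path, and header names are illustrative assumptions, not Lyft's actual setup.

```python
# A minimal sketch of calling another service through a local sidecar
# proxy. The proxy owns retries, timeouts, load balancing, service
# discovery, stats, and logging; the app code stays language-agnostic.
# Port, service name, path, and headers are hypothetical.
import requests

SIDECAR = "http://127.0.0.1:9001"  # every service instance runs one locally

def get_nearby_drivers(region_id: str, trace_headers: dict) -> dict:
    # Address the *logical* service; the sidecar resolves real endpoints.
    # Forwarding trace headers lets the mesh stitch individual hops
    # into end-to-end traces.
    resp = requests.get(
        f"{SIDECAR}/v1/drivers/nearby",
        params={"region": region_id},
        headers={"Host": "driver-location", **trace_headers},
        timeout=1.0,  # last-resort guard; fine-grained policy lives in the proxy
    )
    resp.raise_for_status()
    return resp.json()
```

Because this logic lives out of process, the same proxy serves every one of the six languages Klein mentions, which is the whole point of the pattern.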
I think the reason that service mesh is so compelling to people is that we have these problems where people want to build in five or six languages, they have some common problems around load balancing and other types of things, and this is a great solution for offloading some of those problems into a common place. So, the confusion that I see right now around the industry is that service mesh is really split into two pieces. It's split into the data plane, so the proxy, and the control plane. So the proxy's the thing that actually moves the bytes, moves the requests, and the control plane is the thing that actually tells all the proxies what to do, tells them the topology, tells them all the configurations, all the settings. So the landscape right now is essentially that Envoy is a proxy, it's a data plane. Envoy has been built into a bunch of control planes, so Istio is a control plane, its reference proxy is Envoy, though other companies have shown that they can integrate with Istio. Linkerd has shown that, NGINX has shown that. Buoyant just came out with a new combined control-plane-and-data-plane service mesh called Conduit, that was brand new a couple days ago, and I think we're going to see other companies get in there, because this is a very popular paradigm, so having the competition is good. I think it's going to push everyone to be better. >> How do companies make sense of this? I mean, if I'm just a boring enterprise with complexity, legacy, you know, I have a lot of stuff, maybe not the kind of scale in terms of transactions per second, because they're not Lyft, but they still have a lot of stuff. They got servers, they got data centers, they got stuff in the cloud, they're trying to put this cloud-native package in because the developer movement is clearly pushing the legacy guy, old guard, into cloud. So how does your stuff translate into the mainstream, how would you categorize it? >> Well, what I counsel people is, and I think that's actually a problem that we have within the industry, is that I think sometimes we push people towards complexity that they don't necessarily need yet. And I'm not saying that all of these cloud-native technologies aren't great, right, I mean people here are doing fantastic things. >> You know how to drive a car, so to speak; you don't know how to use the tech. >> Right, and I advise companies and organizations to use the technology and the complexity that they need. So I think that service mesh and microservices and tracing and a lot of the stuff that's being talked about at this conference are very important if you have the scale to have a service-oriented microservice architecture. And, you know, some enterprises are segmented enough where they may not actually need a full microservice real-time architecture. So I think that the thing to actually decide is, number one, do you need a microservice architecture, and it's okay if you don't, that's just fine, take the complexity that you need. If you do need a microservice architecture, then I think you're going to have a set of common problems around things like networking, and databases, and those types of things, and then yes, you are probably going to need to build in more complicated technologies to actually deal with that. But the key takeaway is that as you bring on more complexity, the complexity is a snowballing effect. More complexity yields more complexity.
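The data-plane/control-plane split described here can be illustrated with a toy sketch: the control plane owns topology and settings and never touches a request, while each proxy caches the last configuration it was given and moves the bytes. The class names and endpoints below are hypothetical, and a real mesh streams updates (for example over Envoy's xDS APIs) rather than polling a version counter.

```python
# Toy illustration of the service mesh split: ControlPlane knows the
# topology; DataPlaneProxy applies whatever it was last told.

class ControlPlane:
    """Owns topology and settings; pushes config, never moves bytes."""
    def __init__(self):
        self.version = 0
        self.routes = {}  # logical service -> list of backend endpoints

    def set_backends(self, service, endpoints):
        self.routes[service] = endpoints
        self.version += 1  # lets proxies detect stale config cheaply

class DataPlaneProxy:
    """Moves the request bytes; holds only a cached copy of the config."""
    def __init__(self, control_plane):
        self.cp = control_plane
        self.version = -1
        self.routes = {}

    def sync(self):
        # Naive poll; real control planes stream incremental updates.
        if self.version != self.cp.version:
            self.routes = dict(self.cp.routes)
            self.version = self.cp.version

    def pick_backend(self, service, request_id):
        self.sync()
        endpoints = self.routes[service]
        return endpoints[hash(request_id) % len(endpoints)]  # naive load balancing

cp = ControlPlane()
cp.set_backends("driver-location", ["10.0.0.5:80", "10.0.0.6:80"])
proxy = DataPlaneProxy(cp)
print(proxy.pick_backend("driver-location", "req-123"))
```

This split is also why the products can mix and match: any proxy that can consume a control plane's configuration API can serve as its data plane.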
>> So Matt, this might be a little bit out of bounds for what we're talking about, but when I think about autonomous vehicles, that's just going to put even more strain on the kind of distributed nature of these systems, you know, things that have to run at the edge, you know. Are we laying the groundwork at a conference like this? How's Lyft looking at this? >> For sure, and I mean, we're obviously starting to look into autonomous a lot, obviously Uber's doing that a fair amount, and if you actually start looking at the sheer amount of data that is generated by these cars when they're actually moving around, it's terabytes and terabytes of data, you start thinking through the complexity of ingesting that data from the cars into a cloud and actually analyzing it and doing things with it either offline or in real-time, it's pretty incredible. So, yes, I think that these are just more massive-scale real-time systems that require more data, more hard drives, more networks, and as you manage more things with more people, it becomes more complicated for sure. >> What are you doing inside Lyft, your job? I mean obviously, you're involved in open source. Like, what are you coding specifically these days, what's the current assignment? >> Yeah, so I'm a software engineer at Lyft, I lead our networking team. Our networking team owns obviously all the stuff that we do with Envoy, we own our edge system, so basically how internet traffic comes into Lyft, all of our service discovery systems, rate limiting, auth between services. We're increasingly owning our gRPC communications, so how people define their APIs, moving from a more polling-based API to a more push-based API. So our team essentially owns the end-to-end pipe from all of our back-end services to the client, so that's APIs, analytics, stats, logging, >> So to the app >> Yeah, right, right, to the app, so, on the phone. So that's my job. I also help a lot with general kind of infrastructure architecture, so we're increasingly moving towards Kubernetes, so that's a big thing that we're doing at Lyft. Like many companies of Lyft's kind of age range, we started on VMs in AWS and we used SaltStack and you know, it's the standard story from companies that are probably six or eight years old. >> Classic DevOps. >> Right, and >> Gen One DevOps. >> And now we're trying to move into the, as you say, Gen Two world, which is pretty fantastic. So this is becoming, probably, the most applicable conference for us, because we're obviously doing a lot with service mesh, and we're leading the way with Envoy. But as we integrate with technologies like Istio and increasingly use Kubernetes, and all of the different related technologies, we are trying to kind of get rid of all of our bespoke stuff that many companies like Lyft had, and we're trying to get on that general train. >> I mean you guys, this is going to be written in the history books; you look at this time in a generation, I mean this is going to define open source for a long, long time, because, I say Gen One kind of sounds pejorative but it's not. It's really, you need to build your own, you couldn't just buy an Oracle database, because, you probably have some Oracle in there, but like, you build your own. Facebook did it, you guys are doing it. Why, because you're badass, you had to. Otherwise you don't build customers. >> Right, and I absolutely agree about that.
I think we are in a very unique time right now, and I actually think that if you look out 10 years, and you look at some of the services that are coming online, like Amazon just did Fargate, that whole container scheduling system, and Azure has one, and I think Google has one, but the idea there is that in 10 years' time, people are really going to be writing business logic, they're going to insert that business logic >> They may just do PowerPoint slides. >> That would be nice. >> I mean it's easy to me, like PowerPoint, it's so easy, that's, I'm not going to say that's coding, but that's the way it should be. >> I absolutely agree, and we'll keep moving towards that, but the way that's going to happen is, more and more plumbing, if you will, will get built into these clouds, so that people don't have to worry about all this stuff. But we're in this intermediate time, where people are building these massive-scale systems, and the pieces that they need are not necessarily there. >> I've been saying on theCUBE now for multiple events, all through this last year, and it kind of crystallized when we were talking with Kelsey Hightower about this yesterday: craft is coming back to programming. So you've got software engineering, and you've got craftsmanship. And so, there's real software engineering being done, it's engineering. Application development is going to go back to the old school of real craft. I mean, Agile, all it did was create a treadmill of de-risking rapid build-and-scale by listening to data and constantly iterating, but it kind of took the craft out of it. >> I agree. >> But that turned into engineering. Now you have developers working on, say, business logic, or just solving, building a healthcare app. That's just awesome software. Do you agree with this craft? >> I absolutely agree, and actually what we say about Envoy, so kind of the catchword buzz phrase of Envoy is to make the network transparent to applications. And I think most of what's happening in infrastructure right now is to get back to a time where application developers can focus on business logic, and not have to worry about how some of this plumbing actually works. And what you see around the industry right now is it is just too painful for people to operate some of these large systems. And I think we're heading in the right direction, all of the trends are there, but it's going to take a lot more time to actually make that happen. >> I remember when I was graduating college in the 80s, sounds old but, not to date myself, but the jobs were for software engineering. I mean that is what they called it, and now we're back to this: DevOps brought it, cloud, the systems kind of engineering, really at a large scale, because you got to think about these things. >> Yeah, and I think what's also kind of interesting is that companies have moved toward this DevOps culture of expecting developers to operate their systems, to be on call for them, and I think that's fantastic, but what we're not doing as an industry is we're not actually teaching and helping people how to do this. So like we have this expectation that people know how to be on-call and know how to make dashboards, and know how to do all this work, but they don't learn it in school, and actually we come into organizations where we may not help them learn these skills. >> Every company has different cultures, that complicates things.
So I think we're also, as an industry, figuring out how to train people and how to help them actually do this in a way that makes sense. >> Well, fascinating conversation, Matt. Congratulations on all your success. Obviously a big fan of Lyft, one of the board members gave a keynote, she's from Palo Alto, from Floodgate. Great investors, great fans of the company. Congratulations, great success story, and again open source, this is the new playbook: community-scale contribution, innovation. TheCUBE's doing its share here live in Austin, Texas, for KubeCon, the Kubernetes conference, and CloudNativeCon. I'm John Furrier, for Stu Miniman, we'll be back with more after this short break. (futuristic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Klein | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
John Furrrier | PERSON | 0.99+ |
six | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Red Hat | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Lyft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
10 developers | QUANTITY | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
two pieces | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
six languages | QUANTITY | 0.99+ |
50 developers | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
theCUBE | ORGANIZATION | 0.99+ |
Austin Texas | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
eight years | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
10 years' | QUANTITY | 0.99+ |
Conduit | ORGANIZATION | 0.99+ |
100 | QUANTITY | 0.99+ |
CloudNativeConference | EVENT | 0.99+ |
hundreds | QUANTITY | 0.99+ |
SiliconANGLE | ORGANIZATION | 0.99+ |
last year | DATE | 0.98+ |
Austin, Texas | LOCATION | 0.98+ |
Envoy | ORGANIZATION | 0.98+ |
this week | DATE | 0.98+ |
KubeCon | EVENT | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
Linker D | ORGANIZATION | 0.98+ |
yesterday | DATE | 0.98+ |
Kelsey | PERSON | 0.98+ |
KubeKon | EVENT | 0.98+ |
Istio | ORGANIZATION | 0.97+ |
six different languages | QUANTITY | 0.97+ |
PHP | TITLE | 0.97+ |
MongoDB | TITLE | 0.97+ |
80s | DATE | 0.97+ |
Envoy | TITLE | 0.96+ |
two different types | QUANTITY | 0.96+ |
one place | QUANTITY | 0.94+ |
NGINX | TITLE | 0.94+ |
TheCUBE | ORGANIZATION | 0.93+ |
second scale | QUANTITY | 0.92+ |
CloudNativeCon 2017 | EVENT | 0.92+ |
Floodgate | ORGANIZATION | 0.92+ |
about three years ago | DATE | 0.92+ |
Kalyan Ramanathan, Sumo Logic - AWS Summit SF 2017 - #AWSSummit - #theCUBE
>> Announcer: Live, from San Francisco, it's theCUBE, covering AWS Summit 2017, brought to you by Amazon Web Services. (bouncy techno music) >> Hi, welcome back to theCUBE, live in San Francisco at the AWS Summit here. I'm Lisa Martin, joined by my co-host Jeff Frick. Our next guest is from Sumo Logic. We have the VP of Product Marketing, Kalyan Ramanathan. Welcome to theCUBE! >> Thank you very much. Very excited to be here. >> Very excited to have you here. So, tell us a little bit about what Sumo Logic is doing with AWS and machine data. What services are you delivering, who's your target audience, all that good stuff. >> Yeah, absolutely. We are a cloud-native, i.e., SaaS-based, machine data analytics platform, and what we do is to help our customers manage the operations and security of their mission-critical applications. Right, so we are an entirely AWS-based company, we've been using AWS since our inception. What we do is to provide machine data and machine learning so that our customers can manage the performance of their applications, right. So, what is machine data, you might ask. So machine data typically includes logs, metrics, events, anything that your application is generating when it is running, when it is serving the enterprise's customers. And what Sumo Logic excels at is to ingest this data. We collect and ingest this data, and then we apply a lot of analytics on that data. We have some patented machine learning technologies that help us correlate this data, get insights from this data, and then using this data, our customers manage the applications that they are providing to their end customers. >> And it's not just their applications that are co-located at AWS with your application, it's beyond that, I assume. >> Absolutely, I mean, we have customers from, you know, very different walks of life, we have customers who are on-prem, customers who are down the hybrid path and moving to AWS, and customers who are all in on AWS. You know, I can rattle off a few great names, Pinterest, Twitter, Airbnb, are examples of customers who are born in the cloud. They run on AWS from the very get-go. And they use us today to manage the security and performance of their applications. We have other customers who have migrated to AWS; Scripps Network, the guys behind HGTV, is a great example of a customer who was running applications in their on-prem data center, and then one day decided that they are a content company, and they don't want to be running their own data center. >> Right. >> And so they wanted to move their applications to the cloud, and they used Sumo Logic to help migrate their applications to AWS. >> What are some of the barriers that you help customers overcome when it comes to maybe that daunting task of migrating services? >> Yeah, that's a great question. You know, the first thing that someone has to do before they start to migrate their applications to the cloud is to understand what is it that they have within their data centers, right. If I don't know what I have, how do I even migrate that to the cloud? The first task is obviously to provide visibility into what is within their data center. And that's where Sumo Logic comes in, right. If you deploy Sumo Logic, and if you implement Sumo Logic as a SaaS service, the first thing that we do is to provide you complete visibility into your applications. All the application components, the infrastructure that the application is deployed on, the services that the application may be using.
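As a rough illustration of the ingestion side of this, the sketch below shows an application emitting structured machine data to a hosted HTTP collector. The collector URL is a placeholder for whatever ingest endpoint an account provides, and the field names are illustrative rather than a prescribed schema.

```python
# A minimal sketch of shipping "machine data" -- structured log events --
# to a hosted collector over HTTP. URL and field names are placeholders.
import json
import time
import requests

COLLECTOR_URL = "https://collectors.example.com/receiver/v1/http/XXXX"

def emit(event: str, **fields):
    record = {"ts": time.time(), "event": event, **fields}
    # One JSON object per event keeps downstream parsing, correlation,
    # and analytics straightforward.
    requests.post(
        COLLECTOR_URL,
        data=json.dumps(record),
        headers={"Content-Type": "application/json"},
        timeout=2,
    )

emit("checkout.latency", service="payments", ms=184, status=200)
```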
The next thing that you want to do is start to migrate your workload to the cloud. But you want to do this in a very thoughtful way, and what that means is that you start to move your applications and your infrastructure to AWS, but then you do this cutover to AWS only when you are convinced about the performance as well as the security of that application in this new environment. So the ability to baseline what you have in your current environment, and then compare it to what it might look like in this new environment within AWS, is extremely critical, and what Sumo Logic helped Scripps Network do is to essentially compare and contrast how they are performing in this new environment. And when they were extremely comfortable that their security and their performance was no less in this new environment compared to what they were doing in the data center, they were able to flip that switch and complete the move over to AWS. >> You guys are in an interesting position, because you were born in AWS, essentially, cloud-native, and you have a lot of customers that are running in AWS. And so you guys did a survey, a report, really kind of taking a look at what's actually happening with cloud-native companies running their apps in AWS. I wonder if you can kind of give-- What did you guys find in this thing? >> Yeah, absolutely, Jeff. And this is, the report that we put out towards the end of last year, I think, is one of the first thought leadership reports that gives, you know, people in AWS a birds-eye view into how their peers are, you know, deploying, architecting, and managing their applications within the AWS environment. So, how did we put this report together? Sumo Logic has over 1200 customers under management today and more than 80% of our customers are, you know, using AWS today. They are implementing their applications within AWS. So what we did was to anonymously mine data from our customers, and publish a report that provides the set of best practices, and the commonly-used techniques and architectures that, you know, the leaders are using and implementing today as they move to AWS. Now there were some great learnings that we found as we put this report together, alright. First and foremost, we discovered that the stack that a customer typically deploys in AWS is very unlike the stack that they deploy within their on-premise data center. So, how does that work out? I mean, so, many of the AWS customers that we mined here happen to use Docker extensively within their AWS environment. In fact, 18% of our customers, this was last year, already are using Docker, you know, for their production applications. Which is pretty amazing, given that Docker is just, you know, two or three years-- >> Well hopefully Solomon and Ben are watching, we actually have another crew with Docker-- >> Absolutely. >> Right now. >> We'll have to report that back. >> You know, Docker is all the rage, no doubt about it, and we are seeing, you know, increased adoption of Docker across the board among AWS customers. The other thing that we found very interesting was that the applications that you may typically expect to succeed in your data center are not quite doing that well in the AWS world. I'll give you a good example: in the database world, you would expect to see Oracle and SQL Server, you know, ruling the roost within a typical data center today. You go on AWS, that's not the case at all. The NoSQL databases, right, are the leading vendors of databases within the AWS world.
MongoDB, Redis, you know, are well ahead of Oracle and SQL Server when it comes to AWS. When it comes to web server technologies, you know, Nginx and Apache, you know, are well ahead of IIS, which happens to be the web server of choice within the data center world. Now we've also seen, you know, pretty amazing adoption of Lambda technologies within AWS. I mean, that's to be expected, to a certain extent, because I know AWS is definitely pushing it, but again, 12% use it within a production environment. You know, one year into Lambda GA, in some sense, that is a pretty astonishing number, so-- >> What was your takeaway? Was it because of the applications that are deployed, is it because, kind of, historical legacy of what Amazon offered, kind of for on-prem versus cloud, you know, those early business decisions, not so much today, but, you know, years ago, when there was the security and public cloud, you know, it was a very different conversation three years ago. What were some of your takeaways as to the why? >> The takeaways, I think, there's a meta takeaway here, and let me start with that. The meta takeaway is that as people are building applications in AWS, native AWS applications, or as they are migrating their applications from an on-prem data center to, let's say, AWS, this is giving IT architects the opportunity to rethink how their applications are constructed. You know, they are no longer bound by the old shackles of, if I have to use a database, it's Oracle or SQL Server. If I have to use a web server, it's IIS or some other option. >> Right. >> So, once you are unchained from these shackles, you have the ability now to rethink and re-architect your application from scratch to target and to focus on this amazing new world that the cloud, you know, offers. So that's a big meta takeaway for us, and what we have learned is that once you are unbound, you can come up with new technologies and new ways of doing things that are adopted and better suited for this new space. That's one. The second thing that we do see, obviously, is that the vendors of yesterday are not yet focused on the cloud technologies. It may be heresy to say this, but, you know, Oracle has not found cloud religion until very recently. And that's why you see Oracle not doing a lot, or not making a dent, you know, in cloud places or in cloud technologies like AWS. >> Right, right, it's just interesting, that procurement angle, because, as anyone who's ever been at a relatively small company, trying to sell into a big company, one of the biggest hurdles is actually just getting on the procurement list, becoming an approved vendor. So, it's interesting to think about that from the other side as a consumer. That if now you are unshackled from the approved vendor list, because now the only approved vendor is Amazon, and you have this whole breadth of things to choose from within that ecosystem, how that could really impact your behavior and what you actually buy, build, and deliver. >> Yeah, I mean, I think that's a great point too. I mean, there are economics involved here, there is the friction of adopting certain technologies on AWS, which also makes it a little harder to adopt some of the more traditional software applications in the AWS world. Now that's why AWS obviously has come up with the notion of a marketplace, and Sumo Logic, you know, we face the same challenges when we are signing up customers, right.
We have some big-name customers who, you know, if we have to sell into those customers, you know, we have to get onto their procurement list, we have to, you know, go through a few rigamaroles-- >> Jeff: Right, right. >> To even get onto that list. That's where, you know, getting into the AWS Marketplace has really helped us a lot. Now you have one vendor, you have one relationship, you have one set of payment terms, and that vendor is already on your approved list. And so, hey, Sumo Logic comes along for the ride.
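Circling back to the baselining step Kalyan described for migrations like Scripps': the mechanical core of "flip the switch only when the new environment is no worse" can be sketched in a few lines. The metric, sample values, and 10% tolerance are illustrative assumptions, not Sumo Logic's actual methodology.

```python
# A rough sketch of baseline-and-compare before a migration cutover:
# capture a latency baseline in the current environment, then check the
# new environment against it. Numbers and threshold are illustrative.

def p95(samples):
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def safe_to_cut_over(baseline_ms, candidate_ms, tolerance=1.10):
    """True if the new environment's p95 latency is within 10% of baseline."""
    return p95(candidate_ms) <= p95(baseline_ms) * tolerance

on_prem = [120, 135, 128, 141, 119, 150, 133]  # ms, from the data center
aws = [118, 130, 125, 138, 122, 147, 129]      # ms, from the AWS deployment
print(safe_to_cut_over(on_prem, aws))  # cut over only when this holds steadily
```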
Economics is obviously one, I mean, there is a big advantage of going from Capex to Opex, it obviously makes a lot of sense to do that. The second thing is that what we see is that it's not just about moving the application to the cloud, it's also having the right tooling around the application that can now allow you to manage that application, manage the performance of that application, the security of that application, the deployment of that application in the public cloud environment. And that has taken a while to mature, and I think we are already there, I mean, in an event like this, you can see so many companies come up with new, innovative ways of managing applications within the public cloud environment. And I think we are there now, I mean, the pendulum has swung, and we have enough technologies now to do this with a very high level of confidence. The third thing I would say, and you know, we keep hearing this from our customers again and again, and you know, I brought up Scripps as a great example, you know, we just did a public webinar with a company called Hootsuite, and, you know, they are a social media management platform company, and one of the comments from the Hootsuite VP of Operations was very telling, he said, "Look, I can do this, I can run my own stuff, but do I really want to do it, right? I am a social media company, I want to provide the best application to my customers. I'm not in the business of running a management technology, you know, on-prem or even, for that matter, you know, within the four walls of the company itself. What I want to do is focus on where I can deliver the best value to my customer, and that is by delivering a great social media application." >> Lisa: Exactly. >> "And I want to let the infrastructure game, the management game to the experts," right. >> Focusing on their core competencies to really drive more business. >> I mean I think we are definitely starting to see that, there are certain verticals that have adopted this, you know, wholeheartedly, retail is a good one, media is a good one, there are also cost pressures in those verticals that are forcing them to adopt this at a much faster pace. Financial is kicking and screaming, but they are also getting on board. >> But definitely from a thematic perspective, you talk about maturation, maturation of the services, maturation of the technologies, and maturation of the user. So we want to thank you so much for stopping by theCUBE, great to have you here. >> Thank you very much, I mean, it's been a great conversation with you guys, and it's a great event. >> Excellent, well for my co-host Jeff Frick, I am Lisa Martin, you're watching this on theCUBE live in San Francisco as the AWS Summit. Stick around, we'll be right back. (bouncy techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jeff | PERSON | 0.99+ |
Solomon | PERSON | 0.99+ |
Kalyan Ramanathan | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
12% | QUANTITY | 0.99+ |
18% | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
Opex | ORGANIZATION | 0.99+ |
Hootsuite | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Ben | PERSON | 0.99+ |
last year | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Capex | ORGANIZATION | 0.99+ |
three years | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Airbnb | ORGANIZATION | 0.99+ |
more than 80% | QUANTITY | 0.99+ |
Scripps Network | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
first task | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
one year | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
SQL | TITLE | 0.98+ |
Docker | TITLE | 0.98+ |
GA | LOCATION | 0.98+ |
Apache | ORGANIZATION | 0.97+ |
AWS Summit | EVENT | 0.97+ |
third thing | QUANTITY | 0.97+ |
Werner | PERSON | 0.97+ |
over 1200 customers | QUANTITY | 0.97+ |
today | DATE | 0.97+ |