Austin Parker, Lightstep | AWS re:Invent 2022
(lively music) >> Good afternoon, cloud community, and welcome back to beautiful Las Vegas, Nevada. We are here at AWS re:Invent, day four of our wall-to-wall coverage. It is day four in the afternoon and we are holding strong. I'm Savannah Peterson, joined by my fabulous co-host Paul Gillin. Paul, how you doing? >> I'm doing well, fine, Savannah. You? >> You look great. >> We're in the home stretch here. >> Yeah, (laughs) we are. >> You still look fresh as a daisy. I don't know how you do it. >> (laughs) You're too kind. You're too kind, but I'm vain enough to take that compliment. I'm very excited about the conversation that we're going to have up next. We get to get a little DevRel, and we've got a little swagger on the stage. Welcome, Austin. How you doing? >> Hey, great to be here. Thanks for having me. >> Savannah: Yeah, it's our pleasure. How's the show been for you so far? >> Busy, exciting. Feels a lot like, you know, it used to be, right? >> Yeah, I know. A little reminiscent of the before times. >> Well, before times. >> Before we dig into the technical stuff, you're the most intriguingly dressed person we've had on the show this week. >> Austin: I feel extremely underdressed. >> Well, and we were talking about developer fancy. Talk to me a little bit about your approach to fashion. Wasn't expecting to lead with this, but I like this, actually. >> No, it's actually good with my PR. You're going to love it. My approach, here's the thing, I give free advice all the time about developer relations, about things that work, have worked, and don't work in community and all that stuff. I love talking about that. Someone came up to me and said, "Where do you get your fashion tips from? What's the secret Discord server that I need to go on?" I'm like, "I will never tell." >> Oh, okay. >> This is an actual trade secret. >> Top secret. Wow! Talk about it. >> If someone else starts wearing the hat, then everyone's going to be like, "There's so many white guys." 
Look, I'm a white guy with a beard that works in technology. >> Savannah: I've never met one of those. >> Exactly, there's none of them at all. So, you have to do something to kind of stand out from the crowd a little bit. >> I love it, and it's a talk trigger. We're talking about it now. Production team loved it. It's fantastic. >> It's great. >> So you're DevRel for Lightstep. In case the audience isn't familiar, tell us about Lightstep. >> So Lightstep is a cloud native observability platform built at planet scale, and it powers observability at some places you've heard of, like Spotify, GitHub, right? We're designed to really help developers that are working in the cloud with Kubernetes, with these huge distributed systems, understand application performance and be able to find problems and fix problems. We're also part of the ServiceNow family, and as we all know, ServiceNow is on a mission to help the world of work work better by powering digital transformation around IT and customer experiences for their many, many, many Global 2000 customers. We love them very much. >> You know, it's a big love fest here. A lot of people have talked about the collaboration, so many companies working together. You mentioned unified observability. What is unified observability? >> So if you've heard about this traditional idea of observability where you have three pillars, right? You have metrics, and you have logs, and you have traces. All those three things are different data sources. They're picked up by different tools. They're analyzed by different people for different purposes. What we believe, and what we're working to accomplish right now, is to take all that and, if you think about those pillars, flip 'em on their side and think of them as streams of data. 
If we can take those streams and integrate them together, and let you treat traces and metrics and logs not as these kind of isolated experiences where you're paging between things and going between tab A to tab B to tab C, and give you a standard way to query this, a standard way to display this, and let you find the most relevant data, then it really unlocks a lot of power for developers and SREs to spend less time managing tools. You know, figuring out where to build their query or what dashboard to check, and more time just being able to ask a question, get an answer. When you have an incident or an outage, that's the most important thing, right? How quickly can you get those answers that you need so that you can restore system health? >> You don't want to be looking in multiple spots to figure out what's going on. >> Absolutely. I mean, some people hear unified observability and they go to tool consolidation, right? That's something I hear from a lot of our users and a lot of people at re:Invent. I'll talk to SREs and they're like, "Yeah, we've got like six or seven different metrics products alone, just on the services that they cover." It is important to consolidate that, but we're really taking it a step lower. We're looking at the data layer and trying to say, "Okay, if the data is all consistent and vendor neutral, then that gives you flexibility not only from a tool consolidation perspective, but also, you know, consistency and reliability." You could have a single way to deploy your observability regardless of what cloud you're on, regardless of whether you're using Kubernetes or Fargate or whatever else, or even just bare metal or EC2 bare metal, right? Historically in this space there have been a lot of silos, and we think that unified observability means that we kind of break down those silos, right? 
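Parker's idea of flipping the pillars into streams can be sketched as a toy model in Python: one shared record shape, and one query that runs across traces, metrics, and logs alike. This is purely illustrative and not Lightstep's implementation; all service names and fields are made up:

```python
from dataclasses import dataclass, field

# Toy model: every telemetry record, whatever its type, shares a
# common envelope, so a single query can span traces, metrics, and logs.
@dataclass
class Record:
    kind: str                      # "trace" | "metric" | "log"
    service: str
    timestamp: float
    attributes: dict = field(default_factory=dict)

telemetry = [
    Record("trace", "checkout", 100.0, {"duration_ms": 950, "status": "error"}),
    Record("metric", "checkout", 100.1, {"name": "latency_p99", "value": 950}),
    Record("log", "checkout", 100.2, {"message": "payment timeout"}),
    Record("trace", "search", 101.0, {"duration_ms": 40, "status": "ok"}),
]

def query(records, **attrs):
    """One query interface over all streams: match on the shared fields."""
    return [r for r in records
            if all(getattr(r, k, None) == v for k, v in attrs.items())]

# Everything the "checkout" service emitted, regardless of pillar:
checkout = query(telemetry, service="checkout")
print([r.kind for r in checkout])  # ['trace', 'metric', 'log']
```

The point of the sketch is the single `query` entry point: instead of paging between a tracing tab, a metrics tab, and a logging tab, one question spans all three streams.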
The way that we're doing it primarily is through a project called OpenTelemetry, which you might have heard of. You want to talk about that in a minute? >> Savannah: Yeah, let's talk about it right now. Why don't you tell us about it? Keep going, you're great. You're on a roll. >> I am. >> Savannah: We'll just hang out over here. >> It's day four. I'm going to ask the questions and answer the questions. (Savannah laughs) >> Yes, you're right. >> I do, yeah. >> Open Tele- >> OpenTelemetry. >> Explain what OpenTelemetry is first. >> OpenTelemetry is a CNCF project, Cloud Native Computing Foundation. The goal is to make telemetry data, high quality telemetry data, a built-in feature of cloud native software, right? So right now, if you wanted to get logging data out, depending on your application stack, depending on your application runtime, depending on your language, depending on your deployment environment, you have to make a lot of choices, right? About like, what am I going to use? >> Savannah: So many different choices, and the players are changing all the time. >> Exactly, and a lot of times what people will do is they'll go and they'll say, "We have to use this commercial solution because they have a proprietary agent that can do a lot of this for us." You know? And if you look at all those proprietary agents, what you find very quickly is it's very commodified, right? There's no real difference in what they're doing at a code level, and what's stopped the industry from really adopting a standard way to create these logs and metrics and traces is simply the fact that there was no standard. And so, OpenTelemetry is that standard, right? We've got dozens of companies, many of them here, right? Competitors all the same, working together to build this open standard and implementation of telemetry data for cloud native software, and really any software, right? Like, we support over 12 languages. 
We support Kubernetes. AWS is a huge contributor, actually, and we're doing some really exciting stuff with them on their Amazon distribution of OpenTelemetry. So it's been extremely interesting to see it over the past couple of years go from, "Hey, here's this new thing that we're doing over here," to really a generalized acceptance that this is the way of the future. This is what we should have been doing all along. >> Yeah. >> My opinion is there is a perception out there that observability is kind of a commodity now, that all the players have the same set of tools, same set of 15 or 17 or whatever tools, and that there's very little distinction in functionality. Would you agree with that? >> I don't know if I would characterize it that way entirely. I do think that there's a lot of duplicated effort that happens, and part of the reason is because of this telemetry data problem, right? You know, there's this idea of table stakes monitoring that we talk about, right? Table stakes monitoring is the stuff that you're having to do every single day to make sure your system is healthy, to be able to, when an alert gets triggered, see why it got triggered and go fix it, right? Because everyone has to kind of work on that table stakes stuff and then build all these integrations, there's very little time for innovation on top of that, right? Because you're spending all your time just keeping up with technology. >> Savannah: Doing the boring stuff to make sure the wheels don't fall off, basically. >> Austin: Right? What I think the real advantage of OpenTelemetry is, from a vendor perspective, is that it unblocks us from having to do all this repetitive, commodified work. It lets us move that out to the community level so that... 
Instead of having to build your Kubernetes integration, for example, you can just have, "Hey, OpenTelemetry is integrated into Kubernetes and you just have this data now." If you are a commercial product, or if you're even someone that's interested in scratching a particular itch about observability, it's like, "I have this specific way that I'm doing Kubernetes and I need something to help me really analyze that data. Well, I've got the data now. I can just go create a project. I can create an analysis tool." I think that's what you'll see over time as OpenTelemetry promulgates out into the ecosystem: more people building interesting analysis features, people using things like machine learning to analyze this large and consistent amount of OpenTelemetry data. It's going to be a big shakeup, I think, but it has the potential to really unlock a lot of value for our customers. >> Well, so you're a developer relations guy. What are developers asking for right now out of their observability platforms? >> Austin: That's a great question. I think there's two things. The first is that they want it to just work. That's actually the biggest thing, right? This goes back to the tool proliferation, right? People have too much data in too many different places, and getting that data out can still be really challenging. And so, the biggest thing they want is just, "I want a lot of these questions I have to ask answered already," and OpenTelemetry is going towards it. Keep in mind the project's only three years old, so we obviously have room to grow, but there are people running it in production and it works really well for them, and there's more that we can do. The second thing, and this is what's really interesting to me, is it's less what they're asking for and more what they're not asking for. 
Because for a lot of the stuff that you see people saying, like, "Oh, we need this very specific sort of lower level telemetry data," or "We need this kind of universal thing," people really just want to be able to get questions answered, right? They want tools that have these workflows where you don't have to be an expert, because a lot of times this tooling gets locked behind, sort of gatekept almost, in an organization where there are teams that are like, "We're responsible for this and we're going to set it up and manage it for you, and we won't let you do things outside of it because that would mess up- >> Savannah: Here's your sandbox and- >> Right, this is your sandbox you can play in. And a lot of times that's really useful and very tuned for the problems that you saw yesterday, but people are looking at, what are the problems I'm going to get tomorrow? We're deploying more rapidly. We have more and more intentional change happening in the system. It's not enough to have this reactive sort of approach where our SRE teams or this observability team is building a platform for us. Developers want to be able to get in and have these kind of guided workflows that say, "Hey, here's where you're starting at. Let's get you to an answer. Let's help you find the needle in the haystack, as it were, without you having to become a master of six or seven different tools." >> Savannah: Right, and it shouldn't be that complicated. >> It shouldn't be. I mean, we've certainly... We've been working on this problem for many years now, starting with a lot of our team that started at Google and helped build Google's planet scale monitoring systems. So we have a lot of experience in the field. There's actually an interesting story that our founder, now general manager, Ben Sigelman, tells. He told me this story once and it's like... 
He had built this really cool thing called Dapper that was a tracing system at Google, and people weren't using it. Because they were like, "This is really cool, but it's not relevant to me." And he's like, the one thing that we did to increase usage 20 times over was we just put in a link. We went to the place that people were already looking for that data and we added a link that says, "Hey, go over here and look at this." It's those simple connections, being able to draw people from point A to point B, take them from familiar workflows into unfamiliar ones. You know, that's how we think about these problems, right? How is this becoming a daily part of someone's usage? How is this helping them solve problems faster and really improve their life? >> Savannah: Yeah, exactly. It comes down to quality of life. >> Werner made the case this morning that computer architecture should be inherently event-driven, and that we are moving toward a world where the person matters less than what the software does, right? The software is triggering events. Does this complicate observability or simplify it? >> Austin: I think that at the end of the day, observability to me in a lot of ways is about modeling your system, right? It's about you as a developer being able to say, this is what I expect the system to do, and I don't think the actual application architecture really matters that much, right? Because it's about you. You are building a system, right? It can be event-driven, it can support request-response, it can be whatever it is. You have to be able to say, "For these given inputs, this is the expected output." Now maybe there's a lot of stuff that happens in the middle that you don't really care about. And then, I talk to people here and everyone's talking about serverless, right? Everyone... 
You can see there's obviously some amazing statistics about how many people are using Lambda, and it's very exciting. There's a lot of stuff that you shouldn't have to care about as a developer, but you should care about those inputs and outputs. You will need to have that kind of intermediate information and understand, what was the exact path that I took through this event-driven system? What were the actual resources that were being used? Because even if you trust that all this magic behind the scenes is just going to work forever, sometimes it's still really useful to have that sort of lower level abstraction, to say, "Well, this is what actually happened, so that I can figure out, when I deployed a new change, did I make performance better or worse?" Or being able to segregate your data out and say... Doing A/B testing, right? Doing canary releases, doing all of these things that you hear about as best practices for well architected applications. Observability is at the core of all that. You need observability to ask any of those higher level interesting questions. >> Savannah: We are here at re:Invent. Tell us a little bit more about the partnership with AWS. >> So I would probably have to refer you to someone at ServiceNow on that. I know that we are a partner. We collaborate with them on various things. But really at Lightstep, we're very focused on the open source part of this. So we work with AWS through the OpenTelemetry project, on things like the AWS distribution for OpenTelemetry. OpenTelemetry, again, is really designed to be a neutral standard, but we know that there are going to be integrators and implementers that need to package it up and bundle it in a certain way to make it easy for their end users to consume it. So that's what Amazon has done with ADOT, which is the shorthand for it. So it's available in several different ways. 
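One of those ways is the Lambda layer Parker goes on to describe. A sketch of what that looks like from the developer's side, under the assumption that the ADOT layer is attached to the function and enabled through configuration (the `AWS_LAMBDA_EXEC_WRAPPER` environment variable): the handler itself stays plain Python. The function body and payload fields here are hypothetical:

```python
import json

# With the ADOT Lambda layer attached, this ordinary handler is
# auto-instrumented: the layer wraps the runtime and emits
# OpenTelemetry traces for each invocation, with no tracing code here.
def handler(event, context):
    order_id = event.get("orderId", "unknown")  # hypothetical payload field
    # ... business logic would go here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order_id, "status": "processed"}),
    }

# Local smoke test with a fake event and no context object:
print(handler({"orderId": "abc-123"}, None)["statusCode"])  # 200
```

The design point is that the telemetry plumbing lives in the layer and its configuration, not in application code, which is what lets "you just add this extension in" work without touching the handler.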
You can use it as an SDK and drop it into your application. There's Lambda layers. If you want to get Lambda observability, you just add this extension in and then suddenly you're getting OpenTelemetry data on the other side. So it's really cool. It's been really exciting to work with people on the AWS side over the past several years. >> Savannah: It's exciting. >> I've personally seen just a lot of change. I was talking to a PM earlier this week. It's like, "Hey, two years ago I came and talked to you about OpenTelemetry and here we are today. You're still talking about OpenTelemetry." And they're like, "What's changed? Our customers have started coming to us asking for OpenTelemetry." And we see the same thing now. >> Savannah: Timing is right. >> Timing is right, but we see the same thing even talking to ServiceNow customers, these very big enterprises, banks, finance, healthcare, telcos, whatever. It used to be you'd have to go to them and say, "Let me tell you about distributed tracing. Let me tell you about OpenTelemetry. Let me tell you about observability." Now they're coming in and saying, "Yeah, so we're standardizing." If you think about Kubernetes, a lot of enterprises have spent the past five or six years standardizing on Kubernetes as a way to deploy and manage containerized applications. They're doing the same journey now with OpenTelemetry, where they're saying, "This is what we're betting on, and we want partners, we want people to help us go along that way." >> I love it, and it works hand in hand with all the CNCF projects you're talking about as well. >> Austin: Right, so we're integrated into Kubernetes. You can find OpenTelemetry in things like Keptn, which is about application lifecycle standards. And over time, it'll just promulgate out from there. So it's really exciting times. >> A bunch of CNCF projects in this area, right? Prometheus. >> Prometheus, yeah. 
Yeah, so we interoperate with Prometheus as well. So if you have Prometheus metrics, then OpenTelemetry can read those. OpenTelemetry metrics are like a superset of Prometheus. We've been working with the Prometheus community for quite a while to make sure that there's really good compatibility, because so many people use Prometheus, you know? >> Yeah. All right, so last question. New tradition for us here on theCUBE. We're looking for your 30-second hot take, Instagram reel, biggest theme, biggest buzz for those not here on the show floor. >> Oh gosh. >> Savannah: It could be for you too. It could be whatever for... >> I think the two things that are really striking to me: one is serverless. I thought people were talking about serverless a lot, and they were talking about it more than ever. Two, I really think it is observability, right? Like, we've gone from observability being kind of a niche. >> Savannah: Not that you're biased. >> Huh? >> Savannah: Not that you're biased. >> Not that I'm biased. It used to be a niche thing where I would go and explain what this is to people, and now people are coming up. It's like, "Yeah, yeah, we're using OpenTelemetry." It's very cool. I've been involved with OpenTelemetry since the jump, since it was started really. It's been very exciting and gratifying to see how much adoption we've gotten even in a short amount of time. >> Yeah, absolutely. It's a pretty... Yeah, it's been a lot. That was great. Perfect soundbite for us. >> Austin: Thanks, I love soundbites. >> Savannah: Yeah. Awesome. We love your hat and your soundbites equally. Thank you so much for being on the show with us today. >> Thank you for having me. >> Savannah: Hey, anytime, anytime. Will we see you in Amsterdam, speaking of KubeCon? Awesome, we'll be there. >> There's some real exciting OpenTelemetry stuff coming up for KubeCon. >> Well, we'll have to get you back on theCUBE. 
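The Prometheus interoperability described here is typically wired up in the OpenTelemetry Collector. A sketch of a Collector configuration that scrapes Prometheus metrics and forwards them over OTLP; the job name, target port, and export endpoint are all hypothetical:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "example-app"          # hypothetical scrape job
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:9464"]  # hypothetical metrics endpoint

exporters:
  otlp:
    endpoint: "ingest.example.com:4317"    # hypothetical backend endpoint

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```

The Prometheus receiver embeds standard Prometheus scrape configuration, which is what lets existing Prometheus-instrumented services feed an OpenTelemetry pipeline unchanged.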
(talking simultaneously) Love that for us. Thank you all for tuning in to our wall-to-wall coverage here, day four at AWS re:Invent in fabulous Las Vegas, Nevada, with Paul Gillin. I'm Savannah Peterson and you're watching theCUBE, the leader in high tech coverage. (lively music)
Dave Linthicum, Deloitte | VMware Explore 2022
>> Welcome back everyone to theCUBE's coverage here live in San Francisco for VMware Explore, formerly VMworld. We've been to every VMworld since 2010. Now it's VMware Explore. I'm John Furrier, host, with Dave Vellante. Dave Linthicum is here. He's the chief cloud strategy officer at Deloitte. Welcome to theCUBE. Thanks for coming on. Appreciate your time. >> Thanks for having me. >> Epic keynote today on stage, all seven minutes of your great seven-minute performance discussion. >> Yes. Very, very, very, very quick and to the point. I brought everybody up to speed and left. >> Well, Dave, it's great to have you on theCUBE. We follow your work. We've been following it for a long time. A lot of web services, a lot of SOA kind of in your background, kind of the old web services, you know, SAML, RSS, web services, all that good stuff. Now we're in kind of web services on steroids. Cloud came, it's here. We're NextGen. You wrote a great story on Metacloud. You've been following the Supercloud with Dave. Does VMware have it right? >> Yeah, they do. Because I'll tell you what the market is turning toward: anything that sits above and between the clouds. So things that don't exist in the hyperscalers, things that provide common services above the cloud providers, are where the growth's gonna happen. We haven't really solved that problem yet. And so there's lots of operational aspects, security aspects, and the ability to have some sort of a brokering service that'll scale. So multi-cloud, which is their strategy here, is not about cloud; it's about things that exist in between clouds and making those things work. So it's getting to another layer of abstraction and automation to finally allow us to make use out of all these hyperscaler services that we're signing on to today. >> Dave, remember the old days back in the eighties, when we were young bucks coming into the business? The interoperability wave was coming. Remember that? >> Oh yeah, I got a DEC minicomputer. 
I got an IBM. UNIX was gonna solve that. And then, you know, this other thing over here, and LANs and all, and everything started getting into this whole, okay, networking wasn't just coax. You started to see network segments. Interoperability was a huge, what, 10-year run. It feels like that's kind of the vibe going on here. >> Yeah. We're not focused on having these things interoperate among themselves. So what we're doing is putting in a layer of things which allows them to interoperate. That's a different problem to solve. And it's also solvable. We were talking about getting all these very distinct proprietary systems to communicate one to another and integrate one to another. And that never really happened, right? 'Cause you gotta get them to agree on interfaces and protocols. But if you put a layer above it, they can talk down to whatever native interfaces are there and deal with the differences between the heterogeneity, and abstract yourself away from the complexity. And that's kind of the difference that works. The ability to get everybody to, you know, clunk their heads together and make them work together, that doesn't seem to scale. >> And people gotta be motivated for that, and many might not be. >> It has to be money. In other words, there has to be a business for them in doing so. >> A couple things I wanna follow up on from, you know, this morning: they used the term cloud chaos. When you talk to customers, when they have multiple clouds, are they saying to you, "Hey, we have cloud chaos"? Or do they have cloud chaos and they don't know it? Or do they not have cloud chaos? What's the mix? >> Yeah, I don't think the word chaos is used that much, but they do tell me they're hitting a complexity wall, which you do hear out there as a term. 
So in other words, they're getting to a point where they can't scale operations to deal with the complexity and heterogeneity that they're bringing into the organization because they're using multiple clouds. So that is chaotic. So I guess that, you know, is another way to name complexity. There are so many services; they're moving from a thousand cloud services under management to 3,000 cloud services under management. They don't have the operational team, the skill levels, to do it. They don't have the tooling to do it. That's a wall. And you have to be able to figure out how to get beyond that wall to make those things work. >> When we had our conversation about Metacloud and Supercloud, we were, I think, very much aligned in our thinking. And so now you've got this situation where you've got these abstraction layers, but my question is, are we gonna have multiple abstraction layers? And will they talk to each other, or are standards emerging? Will they be able to? >> No, we can't have multiple abstraction layers, else we just don't solve the problem. We go from the complexity that exists at the native cloud level to the complexity that exists in this thing we built to deal with complexity. So if we do that, we're screwing up. We have to go back and fix it. So ultimately this is about having common services, common security layers, common operational layers, and things like that that really reduce redundancy within the system. So instead of having, you know, five different security layers across five different cloud providers, we're layering on one and providing management and orchestration capabilities to make that happen. If we don't do that, we're not succeeding. >> What do you think about the marketplace? I know there's a lot of things going on that are happening around this. Wanna get your thoughts on, obviously, the industry dynamics, vendors preserving their future. 
And then you've got customers who have been leveraging the CapEx goodness of, say, Amazon, and then have to solve their whole distributed environment problem. So when you look at this, is the first order of operations a common abstraction layer? Because, you know, it seems like the vendor, I won't say desperation move, but their first move is, "We're gonna be the control plane," or, you know, I think Cisco has a vision in their mind that, no, no, we're gonna have that management plane. I've heard a lot of people talking about, "We're gonna be the management interface into something." How do you see that playing out? Because the order of operations to do the abstraction is to get consensus first, right? Not competition, right? So how do you see that? What's your reaction to that? And what's your observation? >> I think it's gonna be tough for the people who are supplying the underlying services to also be the orchestration and abstraction layers, because they're kind of conflicted in making that happen. In other words, it's not in their best interest to make all these things work and interoperate one to another, but it is in their best interest to provide a service that everybody's going to leverage. So I see the layers here. Certainly the hyperscalers are gonna play in those layers, and they're welcome to play in those layers. They may come up with a solution that everybody picks, but ultimately it's about independence and your ability to have an objective way of allowing all these things to communicate together, and driving this stuff together to reduce the complexity again. >> So a network box, for instance, may have hooks into it, but not try to dominate it. >> That's right. Yeah, that's right. I think if you're trying to own everything... And I get that a lot when I write about Supercloud and Metacloud. They go, "Well, we're the Metacloud, we're the Supercloud," and you can't be; there can't be other ones. 
That's a huge problem to solve, and I know you don't have a solution for it. Okay, it's gonna take many different products to make that happen. And the reality is, the people who actually make that work are gonna have to be independent of the various underlying services. They can support them, but they really can't be them. They have to interoperate with those services. >> Do you see something like a W3C model, the World Wide Web Consortium? Remember, that came out around '96, came to us out of MIT, and helped form some of those early standards on the internet. Not DNS, DNS and the internet were already there, but the web standards, HTML. It wasn't really hardcore, get-you-in-a-headlock standardization, but at least it was some sort of group that said, hey, be intellectually honest. Do you see that happening in this area? >> I hope not. And here's why. >> Why not? >> Yeah. >> Here's why: the reality is that when these consortiums come into play, it freezes the market. Everybody waits for the consortium to come up with some sort of a solution that's gonna save the world, and that solution never comes, because you can't get these organizations, through committee, to figure out some sort of a technology stack that's gonna work. So I'd rather see the market figure that out, not a consortium. >> You mean the ecosystem, not some burning bush. >> Yeah, not some burning bush. And it just hasn't worked. I mean, if it worked, it'd be great. >> We had an event on August 9th, it was Supercloud 22, and we had a "Securing the Supercloud" panel. It was a great conversation, as you remember, John, but it was kind of depressing, in that it felt like we're never gonna solve this problem. So what are you seeing on the security front? It seems like that's a main blocker to the Metacloud, the Supercloud. >> Yeah.
The reality is you can't build all the security services into the Metacloud. You have to leverage the security services on the native clouds as they exist. This idea that we're gonna replace all of these security services with one layer of abstraction that provides the services, so you don't need the underlying security systems, that won't work. You have to leverage the native security systems, native governance, native operating interfaces, native APIs of all the various native clouds, using the terms that they're looking to leverage. And that's the mistake I think people are going to make: you don't need to replace something that's working, you just may need to make it easier to use. >> Let's ask Dave about the discussion that was on Twitter this morning. So when VMware announced their cross-cloud services, the whole new Tanzu 1.3 and Aria, there was a little chatter on Twitter basically saying, yeah, but VMware, they'll never win the developers. And John came in and said, well, hang on, if you've got open tools and you're embracing those, it's really about the ops and having standards on the ops side. And so my question to you is, does VMware... >> That's not exactly what I said, but close enough. >> Sorry, I'm paraphrasing. You can fine-tune it. But does VMware have to win the developers, or are they focused on the right areas, that whole ops side of DevOps? >> Focus on the ops side, 'cause that's the harder problem to solve. Developers are gonna use whatever tools they need to build these applications and roll them out, and they're gonna change all the time. In other words, they're gonna change the tools and technologies they use in the supply chain.
The ops problem is the harder problem to solve: the ability to get these things working together and running at a level of reliability where the failures aren't gonna be there. And I think that's gonna be the harder issue, doing that without complexity. >> Yeah, that's the multi-cloud challenge right there. I agree. The question I want to pivot to is this: as we look at some of the reporting we've done, interviews and data, security and data really are the hard areas. People are tuning up DevOps, the developer side is booming, everyone's going fast and loose, shifting left, all that stuff's happening, open source is booming, it's a toga party. Ops is struggling to level up. So I guess the question is, what's the order of operations from a customer's standpoint? A lot of customers have lifted and shifted. Some are going all in on, say, AWS: yeah, I've got a little hedge with Azure, but I'm not gonna do a full development team there. As you talk to customers, 'cause they're the ones deploying the clouds, that want to get there, what's the order of operations to do it properly, in your mind? And what's your advice on a strategy to do it right? I mean, is there a playbook, or some sort of situational sequence? >> Yes, one that works consistently: number one, you think about operations up front, and if you can't solve operations, you have no business rolling out other applications and other databases that, quite frankly, can't be operated. That's how people are getting into trouble. In other words, if you get into these very complex architectures, which is what a multicloud is, a complex distributed system, and you don't have an understanding of how you're gonna operationalize that system at scale, then you have no business building the system.
You have no business going into a multicloud, because you are going to run into that wall, and it's gonna lead to an outage, it's gonna lead to a breach, or something that's gonna be company-killing. >> So a lot of that's cultural, right? Having the cultural fortitude to say, we're gonna start there, we're gonna enforce these standards. >> That's what John said. Yeah, that's John's famous line. >> Yeah, you're right. So what happens, as a consultant, you probably have to insist on that first, right? Or, I mean, I don't know, you probably still do the engagement, but you're gonna be careful about promising an outcome, aren't you? >> You're gonna have to insist that they do some advanced planning and come up with a very rigorous way in which they're gonna roll it out. And the reality is, if they're not doing that, then the advice would be: you're gonna fail. It's not a matter of if, it's when it's gonna happen. At some point you're gonna fail: either you're gonna actually fail in some sort of big disastrous event, or, more likely than not, you're gonna end up building something that's gonna cost you $10 million more a month to run and be under-optimized. >> And is that effective when you say that to a client? Or do they say, okay, you're right? >> I view my role as something like a doctor or a lawyer. You may not want to hear what I'm telling you, but if I don't tell you the truth, I'm not doing my job as a trusted advisor. And so they'll never get anything but that from us as a firm. The reality is, they can make their own decisions, and we'll have to help them down whatever path they want to go, but we put the warnings in place. >> And also, situationally, it's IQ-driven. Are they ready? What's their makeup? Do they have the kind of talent to execute?
And believe me, I totally agree with you on the ops side; I think that's right on the money. The question I want to ask you is this: okay, assume that someone has the right makeup of team. They've got some badass people in there coding away, DevOps, SREs, you name it, everyone lined up, platform teams, as they said today on stage, all that stuff. What's the CXO conversation in the boardroom around business strategy? 'Cause if you assume that cloud is here, and you do things right, and you get the right advisors in, the next step is: what does it transform my business into? Because you're talking about a fully digitized business that converges. It's not just IT helping run a back-office app with some terminals; it's full-blown business, edge apps, business model innovation. Is it that the company becomes a cloud on their own, and they have scale, and they're the supercloud of their category, servicing a power law of second-place, third-place, SMB market? So, I mean, Goldman Sachs could be the service-provider cloud for financial services, maybe. Or is that the dream? What's the dream for the CXO staff? Take us through it. >> What they're trying to do is get a level of automation, being able to leverage best-of-breed technology to be as innovative as they possibly can, using an architecture that's near a hundred percent optimized. It'll never be a hundred percent optimized, but it's able to bring the best value to the business for the least amount of money. That's the big thing. If they want to become a cloud, that's not necessarily a good idea. If you're a finance company, be a finance company; just build these innovations around how to make a finance company innovative and differentiated, so they can be a disruptor without being disrupted.
I see a lot of companies right now that are gonna be exposed in the next 10 years, because a lot of these smaller companies are able to weaponize technology to bring themselves to the next level, digital transformations, whatever, to create business value that's gonna be more compelling than the existing players'. >> Because they're on the CapEx back of Amazon, or some technical innovation? For the smaller guys, what's the lever that beats the incumbent? >> It's the ability to use whatever technology you need to solve your issues. So in other words, I can use anything that exists on the cloud, because it's part of the multi-cloud. Am I able to find the services that I need, the best AI system, the best database systems, the fastest transaction-processing system, and assemble these things together to solve more innovative problems than my competitor? If I'm able to do that, I'm gonna win the game. >> It's a buffet of technology. Pick your meal, come on. >> Let me ask you something on this operations-first thing in my head. Remember Alan Nance, when he came on theCUBE and said, listen, if you're gonna do cloud, you'd better change the operating model, or you'll only drop, you know, millions to the bottom line, not billions. He was CIO of Philips at the time. And it's all about the zeros, right? So do you find yourself, in a lot of cases, helping people re-architect their operating model as a function of what cloud can enable? >> Yeah. Every engagement that we go into has operating-model change, and typically it's gonna be major surgery. And so it's re-evaluating the skill sets, re-evaluating the operating model, re-evaluating the culture. In fact, we have a team of people who come in, and that's all they focus on. And so it used to be just kind of an afterthought: we'd put this together and, oh, by the way, I think you need to do this and this and this.
And here's what we recommend you do. But people who can go in and get the cultural changes going, get the operating-model changes going, and get to the folks who are gonna be successful with it... the reality is, if you don't do that, you're gonna fail, because you're not gonna have the ability to adapt to a cloud-based infrastructure and leverage the scale. >> David's giving like a masterclass here on theCUBE at VMware Explore. Thanks for coming on, thanks for spending the valuable time. What's going on in your world right now? Take a quick minute to plug what you're working on. What are you excited about? What's happening? >> Loving life. I'm just running around doing things like this, doing a lot of speaking. I still have the blog on InfoWorld, have had that for the last 12 years, and I'm just loving the fact that we're innovating and changing the world. And I'm trying to help as many people as I can, as quickly as I can. >> What's the coolest thing you've seen this year in terms of cloud, either weirdness or coolness, or something that made you fall out of your chair: wow, that was cool? >> I think the AI capabilities and the application of AI. I'm just seeing use cases there that we never would've thought about, the ability to identify patterns that we couldn't identify in the past, and do so for good. I was an AI analyst, it was my first job out of college, and I'm 60 years old. So it's matured enough that it actually impresses me, and we're seeing applications right now. >> That's not Lisp anymore, is it? >> No, no, not Lisp. That's what I was doing, but we're able to take this technology to the next level and do a lot of good with it. And I think that's what just kind of blows me away. >> Ah, I wish we had 20 more minutes. >> You know, one more masterclass sound bite. So we all kind of have kids in college, David and I both have young ones in college.
If you're coming out of college with a CS degree, or any kind of smart degree, and you have the plethora of tools now coming, and unlimited ways to start something on a clean canvas, what would you do if you were, like, 22 right now? >> I would focus on being a multi-cloud architect. And I would learn a little about everything, a little about each of the various cloud providers. And I would focus on building complex distributed systems and architecting those systems. I would learn how all these things run together. Don't learn a particular technology, because that technology will ultimately go away; it'll be displaced by something else. Learn holistically what the technologies are able to do, and become the orchestrator of that technology. It's a harder problem to solve, but you'll get paid more for it, and it'll be a more fun job. >> Just thinking big picture. >> Big picture, how everything comes together. True architecture problems. >> All right, Dave with a masterclass here on theCUBE. For Dave Vellante, VMware Explore 2022. We're back with our next segment after this short break.
Ed Walsh, ChaosSearch | AWS re:Inforce 2022
(upbeat music) >> Welcome back to Boston, everybody. This is the birthplace of theCUBE. In May of 2010, at EMC World, right in this very venue, John Furrier called it the chowder and lobster post. I'm Dave Vellante. We're here at RE:INFORCE 2022 with Ed Walsh, CEO of ChaosSearch, doing a drive-by. Ed, thanks so much for stopping in. You're going to help me wrap up in our final editorial segment. >> Looking forward to it. >> I really appreciate it. >> Thank you for including me. >> How about that? 2010. >> That's amazing. It was really in this-- >> Really in this building. Yeah, we had to sort of bury our way in, tunnel our way into the Blogger Lounge. We did four days. >> Weekends, yeah. >> It was epic. It was really epic. But I'm glad they're back in Boston. AWS was going to do June in Houston. >> Okay. >> Which would've been awful. >> Yeah, yeah. No, this is perfect. Thank God they came back. You know Boston in summer is great. I know it's been hot, and of course you and I are from this area. >> Yeah. >> So how have you been? What's going on? I mean, it's a little crazy out there. The stock market's going crazy. >> Sure. >> We're having the tech lash. What are you seeing? >> So it's an interesting time. I ran a company in 2008, so we've been through this before. By the way, the world's not ending; we'll get through this. But it is an interesting conversation as an investor, and also with the customers. There's some hesitation, but you have to basically have the right value prop, otherwise things are going to get stalled. So we are seeing longer sales cycles, but it's nothing that you can't overcome. It can't be a nice-to-have; it has to be a need-to-have. But I think we all get through it. And then on the VC side, it's now buckle down, let's figure out what to do, which is always a challenge for startups. >> Pre-2000, maybe you weren't a CEO, but you were definitely an executive.
And so now it's different, and a lot of younger people haven't seen this. You've got interest rates now rising. Okay, we've seen that before, but it looks like you've got inflation, you've got interest rates rising. >> Yep. >> The consumer spending patterns are changing. You had $6, $7 gas at one point. So you have these weird crosscurrents. >> Yup. >> And people are thinking, okay, post-September, maybe because of the recession the Fed won't have to keep raising interest rates and tightening. But I don't know what to root for. It's like half full, half empty. (Ed laughing) >> But we haven't been in an environment with high inflation, at least not in my career. >> Right, right. >> I mean, I got in in '92, so that was long gone, right? >> Yeah. >> So it is an interesting regime change that we're going to have to deal with, but there are a lot of analogies between 2008 and now that you still have to work through too, right? So, anyway, I don't think the world's ending. I do think you have to run a tight shop. I think grow-at-all-costs is gone; I do think discipline's back in, which, for most of us, discipline never left, right? So, to me, that's the name of the game. >> What do you tell people generally? I mean, you've been the CEO of a lot of private companies, and of course one of the things that you do to retain and attract people is you give 'em stock, and it's great and everybody's excited. >> Yeah. >> I'm sure they're excited, 'cause you guys are a rocket ship. But what's the message now that the market's down, valuations are down, and the trees don't grow to the moon? We all know that. But what are you telling your people? What's their reaction? How do you keep 'em motivated? >> So, like anything, you want to over-communicate during these times. So I actually over-communicate. You get all of these, you know, the Sequoia decks, 2008 and the recent one... >> (chuckles) "R.I.P. Good Times," that one, right? >> I literally share it. Why?
It's like, hey, this is what's going on in the real world. It's going to affect us. It has almost nothing to do with us specifically, but it will affect us, and we can't not pay attention to it. It does change how you're going to raise money, so you've got to make sure you have the right runway to be there. So it does change what you do, but I think you over-communicate. So that's what I've been doing, and I think of it as being a student of the game, so I try to share it. Some appreciate it, others don't, but I'm just saying, this is normal, we'll get through this, and this is what happened in 2008. And trust me, once the market hits bottom, give it another month afterwards, then everyone says, oh, the bottom's in, and we're back to business. Valuations don't go immediately back up, but right now no one knows where the bottom is, and that's when you get the world's-ending kind of talk. >> Well, it's interesting, because you talked about, I said, "R.I.P. Good Times"... >> Yeah. >> That was the Sequoia deck, and the message was tighten up. Okay, and I'm not saying you shouldn't tighten up now, but the difference is, there was this period of two years of easy money, and even before that it was pretty easy money. >> Yeah. >> And so companies are well capitalized, they have runway, so it's like, okay. I was talking to Frank Slootman about this, and of course there are public companies, like, "We're not taking the foot off the gas. We're inherently profitable, >> Yeah. >> we're growing like crazy, we're going for it," you know? So that's a little bit of a different dynamic. There's a lot of good runway out there, isn't there? >> But also, look at the different companies: the ones that were either born in or were able to power through those environments are actually better off. You come out stronger, in a more dominant position. So Frank, listen, if you've seen what Frank's done, it's been unbelievable to watch his career, right?
In fact, he was at Data Domain, I was at Avamar, so... but look at what he's done since; he's crushed it, right? >> Yeah. >> So for him to say, hey, I'm going to literally hit the gas and keep going, I think that's the right thing for Snowflake and the right thing for a lot of people. But for people in different roles, I literally say that you have to take it seriously. What you can't be is, "Well, Frank's in a different situation." What is it, how many billion does he have in the bank? So... >> He's over a billion, you know, over a billion. Well, you're on your way, Ed. >> No, no, no, it's good. (Dave chuckles) >> Okay, I want to ask you about this concept, this term we coined called Supercloud. >> Sure. >> You could think of it as the next generation of multi-cloud. The basic premise is that multi-cloud was largely a symptom of multi-vendor: okay, I've done some M&A, I've got some shadow IT spinning up, you know, shadow clouds, projects. But it really wasn't a strategy to have a continuum across clouds. And now we're starting to see ecosystems really build. You know, you've used the term before, standing on the shoulders of giants; you've used that a lot. >> Yep. >> And so we're seeing that. Jerry Chen wrote a seminal piece on Castles in the Cloud, so we coined this term Supercloud to connote this abstraction layer that hides the underlying complexities and primitives of the individual clouds, then adds value on top, and can adjudicate and manage, irrespective of physical location: Supercloud. >> Yeah. >> Okay. What do you think about that concept? How does it maybe relate to some of the things that you're seeing in the industry? >> So, standing on shoulders of giants, right? I always like to do hard tech, whether at big companies or small companies. So we're probably your definition of a Supercloud. We had a big vision: how to literally solve the core challenge of analytics at scale. How are you going to do that?
You're not going to build it on your own. So literally we're leveraging the primitives, everything you can get out of the Amazon cloud, everything you can get out of the Google cloud. In fact, we're even looking at what we can get out of the Snowflake cloud. And how do we abstract that out and add value to it? That's where all our patents are. But it becomes a simplified approach. The customers don't care... well, they care where their data is, but they don't care how you got there; they just want to know the end result. So you simplify, but you gain the advantages. One thing that's interesting is, at this particular company, ChaosSearch, people always at some point in the sales cycle say, no way, hold on, no way that can be that fast, or whatever the issue is. And initially we used to try to explain our technology, and I would say 60% of it was explaining the public cloud capabilities, and then how we harvest those, I guess, make them better and add value on top, and how what you're able to get is something you couldn't get from the public clouds themselves, and then how we did that across public clouds and abstracted it out. So if you think about that, it's the shoulders of giants. But what we now do, literally to avoid that lengthy conversation, is ask: how do you have a platform for analytics that you can't possibly overwhelm on ingest? All your messy data, no pipelines. Well, you leverage things like S3 and EC2, and you do the different security things. You can go to environments and say, "You can't possibly overrun me." I could not say that if I didn't literally build on the shoulders of the giants, all these public clouds. But there's the value. So if you're going to do hard tech as a startup, you're going to build on, you're going to be, the principles of Supercloud.
Maybe you're not the same size of Supercloud as, say, Snowflake, but basically you're going to leverage all of that, you abstract it out, and that's where you're able to add a lot of value on top. >> So let me ask you, I don't know if there's a strict definition of Supercloud; we sort of put it out to the community and said, help us define it. So you've got to span multiple clouds; it's not just running in each cloud. There's a metadata layer that kind of understands where you're pulling data from. Like you said, you can pull data from Snowflake. It sounds like you're not running on Snowflake, correct? >> No, complementary to them, for their different customers. >> Yeah, okay. >> They want to build data apps on top of a data platform. >> Right. And of course they're going cross-cloud. >> Right. >> Is there a PaaS layer in there? We've said there's probably a Super PaaS layer. You're probably not doing that, but you're allowing people to bring their own PaaS, sort of, maybe. >> So we're a little bit different, but basically we publish open APIs. We don't have a user interface; we say, keep the user interface. Again, we're solving the challenge of analytics at scale. We're not trying to retrain your analysts, or your DevOps, or your SRE, or your SecOps team. They use the tools they already use: Elasticsearch APIs, SQL APIs. So really, they program, they build applications, on top of us. Equifax is a good example; there's a case study coming out later this week, after 18 months in production. Basically they're building on us, and we provide the abstraction layer. I'm going to mangle the quote, but Jeff Tincher, who owns all of SRE worldwide there, said, to the effect of, "Hey, I'm able to rethink what I do for my data pipelines." But then he also talked about how he really doesn't have to worry about the data he puts into it. We deal with that. And he just has to query on the other side. That simplicity.
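The "keep the tools they already use" idea above is that an abstraction layer publishes familiar open APIs, such as an Elasticsearch-style query DSL, and translates them onto cheap storage underneath. Here is a minimal, self-contained sketch of that translation step; the field names and the tiny subset of the DSL supported are illustrative assumptions, not any vendor's actual API.

```python
# Toy sketch of the "open API" pattern: callers keep writing
# Elasticsearch-style queries, and a thin layer evaluates them
# against raw records that could live in object storage.
# All names here are hypothetical illustration.

def match(record: dict, clause: dict) -> bool:
    """Evaluate one Elasticsearch-style leaf clause against a record."""
    if "term" in clause:                       # exact-match clause
        ((field, value),) = clause["term"].items()
        return record.get(field) == value
    if "range" in clause:                      # numeric range clause
        ((field, bounds),) = clause["range"].items()
        v = record.get(field)
        return v is not None and bounds.get("gte", v) <= v <= bounds.get("lte", v)
    raise ValueError(f"unsupported clause: {clause}")

def search(records, query: dict):
    """Apply a minimal 'bool.must' query to an iterable of records."""
    must = query.get("bool", {}).get("must", [])
    return [r for r in records if all(match(r, c) for c in must)]

# Messy, pipeline-free log records as they might land in storage.
logs = [
    {"status": 200, "service": "checkout", "latency_ms": 35},
    {"status": 500, "service": "checkout", "latency_ms": 980},
    {"status": 500, "service": "search"},
]

slow_errors = search(logs, {
    "bool": {"must": [
        {"term": {"status": 500}},
        {"range": {"latency_ms": {"gte": 500}}},
    ]}
})
print(slow_errors)  # only the slow checkout error matches
```

The point of the sketch is the interface, not the scan: existing Elasticsearch-speaking tooling never needs to know what executes the query underneath.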
We couldn't have done that without all of that underneath. So anyway, what I like about the definition is this: if you're going to do something hard in the world, why would you try to rebuild what Amazon, Google, and Azure, or Snowflake, did? You're going to add things on top. We can still do intellectual property; we're still doing patents, five granted patents, all on this. But literally, the abstraction layer is the simplification. The end users do not want to know that complexity, even though they ask the questions. >> And I think, too, the other attribute is ecosystem enablement. Whereas, in general, in the Multicloud 1.0 era, the ecosystem wasn't thinking about, okay, how do I build on top and abstract that? So maybe it is Multicloud 2.0; we chose to use Supercloud. So I'm wondering, we're at the security conference, RE:INFORCE: is there a security Supercloud? Maybe Snyk has the developer Supercloud, or maybe Okta has the identity Supercloud. I think CrowdStrike maybe not, 'cause CrowdStrike competes with Microsoft. And what's interesting, Merritt Baer was just saying, look, we don't show up in the spending data for security, because we're not charging for most of our security; we're not trying to make a big business of it. So that's kind of interesting, but is there potential for the security Supercloud? >> So, I think so. But also, I'll give you a data point: just today, in at least three different conversations, everyone wants to log data. It's a little bit specific to us, but basically they want to do the security data lake. The idea, and Snowflake talks about this too, is putting all the data in one repository, and then how do you abstract it out and get value from it? Maybe not perfect, but it becomes simple to do, yet hard to get value out of. So the different players are going to do that. That's what we do.
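The security data lake pattern just described boils down to: every source ships raw events into one cheap repository, and analysis is a stateless scan at query time, with no ingest pipeline to maintain or overrun. The sketch below simulates the object store with a plain dict so it is runnable; the key layout and field names are invented for illustration, not a real schema.

```python
import json

# Rough sketch of the "security data lake" pattern: land raw events
# under source-specific key prefixes (think S3), then filter them
# with stateless compute at query time. The bucket is simulated
# with a dict; all keys and fields are hypothetical.

bucket = {  # "object key" -> newline-delimited JSON, as tools tend to ship it
    "logs/cloudtrail/2022-07-26.jsonl": '{"user": "alice", "action": "DeleteBucket"}\n'
                                        '{"user": "bob", "action": "GetObject"}',
    "logs/vpcflow/2022-07-26.jsonl":    '{"src": "10.0.0.9", "action": "REJECT"}',
}

def scan(bucket, prefix=""):
    """Stateless reader: stream every event under a key prefix."""
    for key, body in bucket.items():
        if key.startswith(prefix):
            for line in body.splitlines():
                yield json.loads(line)

def query(bucket, predicate, prefix=""):
    """Query-time filtering: schema is applied on read, not on ingest."""
    return [e for e in scan(bucket, prefix) if predicate(e)]

# One query crosses both sources; nothing was normalized up front.
suspicious = query(bucket, lambda e: e.get("action") in {"DeleteBucket", "REJECT"})
print(suspicious)
```

Because nothing is transformed on the way in, adding a new log source is just a new key prefix; the cost of structure is paid only by the queries that need it.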
We're able to, once you land it in your S3, or it doesn't matter, cloud of choice, simple storage, we allow you to get after that data, but we take the primitives and hide them from you. And all you do is query the data, and we're spinning up stateless compute to go after it. So then if I look around the floor, there's going to be a bunch of these players. I don't think, why would someone on this floor try to recreate what Amazon or Google or Azure had? They're going to build on top of it. And now the key thing is, do you leave it in standard? And now we're open APIs. People are building on top of my open APIs, or do you try to put 'em in a walled garden? And they're in, now, your Supercloud. Our belief is, part of it is, it needs to be open access and let you go after it. >> Well. And build your applications on top of it openly. >> That comes back to Snowflake. That's what Snowflake's doing. And they're basically saying, hey, come into our proprietary environment. And the benefit is, and I think both can win. There's a big market. >> I agree. But I think the benefit of Snowflake's is, okay, we're going to have federated governance, we're going to have data sharing, you're going to have access to all the ecosystem players. >> Yep. >> And everything's going to be controlled, and you know what you're getting. The flip side of that is, Databricks is the other end >> Yeah. >> of that spectrum, which is no, no, you've got to be open. >> Yeah. >> So what's going to happen, well what's happening clearly, is Snowflake's saying, okay, we've got Snowpark. We're going to allow Python, we're going to have Apache Iceberg. We're going to have open source tooling that you can access. By the way, it's not going to be as good as our walled garden. Where the flip side of that is, you get Databricks coming at it from a data science and data engineering perspective. And there's a lot of gaps in between, aren't there? >> And I think they both win.
Like for instance, so we didn't do Snowpark integration. But we work with people building data apps on top of Snowflake or Databricks. And what we do is, we can add value to that, or what we've done, again, using all the Supercloud stuff we've done. But we deal with the unstructured data, the four V's coming at you. You can't pipeline that to save your life. So we actually could be additive. As they're trying to do like a security data cloud inside of Snowflake, or do the same thing in Databricks, that's where we can play. Now, we play with them at the application level, so they get some data from them and some data from us. But I believe there's a partnership there that will do it inside their environment. To us, they're just another hyperscaler environment that my customers want to get after data in. And they want me to abstract it out and give value. >> So it's another repository to you. >> Yeah. >> Okay. So I think Snowflake recently added support for unstructured data. You chose not to do Snowpark because why? >> Well, so the way they're doing the unstructured data is not bad. It's JSON data. Basically, this is the dilemma. Everyone wants their application developers to be flexible, move fast, securely, but it's just productivity. So you give 'em flexibility. The problem with that is, the analytics on the other end want to be structured to be performant. And this is where Snowflake, they have to somehow get that raw data, and it's changing every day because you just let the developers do what they want now, in some structured base, but do what you need to do your business fast and securely. So it completely destroys that. So they have large customers trying to do big integrations for this messy data. And it doesn't quite work, 'cause you literally just can't make the pipelines work. So that's where we're complementary to it. So now, the particular integration wasn't, we need a little bit deeper integration to do that. So we're integrating, actually, at the data app layer.
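The dilemma Ed describes, flexible JSON from developers versus the structured rows analytics needs, can be sketched in a few lines of Python. This is a hypothetical illustration, not ChaosSearch's or Snowflake's actual pipeline; the event shapes and the `to_row` coercion rules are invented for the example.

```python
import json

# Developers emit whatever fields they like; the shape drifts over time.
events = [
    '{"user": "a1", "action": "login", "latency_ms": 12}',
    '{"user": "b2", "action": "search", "latency_ms": "34"}',   # type drift
    '{"user": "c3", "action": "login"}',                        # missing field
]

def to_row(raw):
    """Coerce one free-form event into the fixed schema analytics expects."""
    doc = json.loads(raw)
    return {
        "user": str(doc.get("user", "")),
        "action": str(doc.get("action", "unknown")),
        # Messy data: latency may be an int, a string, or absent entirely.
        "latency_ms": int(doc.get("latency_ms", 0)),
    }

rows = [to_row(e) for e in events]
```

The moment a developer renames or re-types a field, every downstream coercion like this has to change, which is the pipeline breakage being described.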
But we could. And listen, I think Snowflake's a good actor. They're trying to figure out what's best for the customers, and I think we just participate in that. >> Yeah. And I think they're trying to figure out >> Yeah. >> how to grow their ecosystem. Because they know they can't do it all, in fact, >> And we solve the key thing, they just can't do certain things. And we do that well. Yeah, I have SQL, but that's where it ends. >> Yeah. >> I do the messy data and how to play with them. >> And when you talk to one of their founders, anyway, Benoit, he comes on theCUBE and he's like, we start with simple. >> Yeah. >> It reminds me of the guy from Pure Storage, that guy Coz. He's always like, no, if it starts to get too complicated. So that's why they said, all right, we're not going to start out trying to figure out how to do complex joins and workload management. And they turned that into a feature. So like you say, I think both can win. It's a big market. >> I think it's a good model. And I'd love to see Frank, you know, move. >> Yeah. I forgot. So you, Avamar... >> In the day. >> You guys used to hate each other, right? >> No, no, no >> No. I mean, it's all good. >> But the thing is, look what he's done. Like I wouldn't bet against Frank. I think it's a good message. You can see clients trying to do it. Same thing with Databricks, same thing with BigQuery. We get a lot of the same dynamic in BigQuery. It's good for a lot of things, but it's not everything you need to do. And there's ways for the ecosystem to play together. >> Well, what's interesting about BigQuery is, it is truly cloud native, as is Snowflake. You know, whereas Amazon Redshift was sort of ParAccel, it's cobbled together now. It's great engineering, but BigQuery gets a lot of high marks. But again, there's limitations to everything. That's why companies like yours can exist. >> And that's why... so back to the Supercloud.
It allows me as a company to participate in that, because I'm leveraging all the underlying pieces, which, we couldn't be doing what we're doing now without leveraging the Supercloud concepts, right, so... >> Ed, I really appreciate you coming by, helping me wrap up today at RE:INFORCE. Always a pleasure seeing you, my friend. >> Thank you. >> All right. Okay, this is a wrap on day one. We'll be back tomorrow. I'll be solo. John Furrier had to fly out, but we'll be following what he's doing. This is RE:INFORCE 2022. You're watching theCUBE. I'll see you tomorrow.
Sean Scott, PagerDuty | PagerDuty Summit 2022
>> Welcome back to theCube's coverage of PagerDuty Summit 22. Lisa Martin with you here on the ground. I've got one of our alumni back with me. Sean Scott joins me, the Chief Product Officer at PagerDuty. It's great to have you here in person. >> Super great to be here in person. >> Isn't it nice? >> Quite a change, quite a change. >> It is a change. We were talking before we went live about it. That's that readjustment to actually being with another human, but it's a good readjustment to have. >> Awesome readjustment. I've been traveling more and more in the past few weeks, and just visiting the offices, seeing the people, the energy we get, the smiles, it's amazing. So it's so much better than just sitting at your home. >> Oh, I couldn't agree more. For me it's the energy, and the CEO of DocuSign talked about that with Jennifer during her fireside chat this morning, but yes, finally, someone like me who doesn't like working from home. But one of the things that you talked about in your keynote this morning was that the ways we've traditionally been working are no longer working. Talk to me about the future of work. What does it look like from PagerDuty's lens? >> Sure. So there's a few things. If we just take a step back and think about what your day looks like, from all the different Slacks, chats, emails, you have your dashboards, you have more Slacks coming in, you have more emails coming in, more chat, and so just when you start the day off, you think you know what you're doing, and then it kind of blows up out of the gate. And so what we're all about is really trying to revolutionize operations. So how do you help make sense of all the chaos that's happening, and how do you make it simpler so you can get back to doing the more meaningful work and leave the tedium to the machines, and just automate? >> That would be critical.
One of the things, it's been such an interesting dynamic two years that we've had. Obviously here we are in San Francisco, with a virtual event this year, but there's so many problems out there that the customer landscape's dealing with: the great resignation, the data deluge, there's just data coming in everywhere, and we have this expectation, when we're on the consumer side, that a business will know us and have enough context to make us the next best offer that actually makes sense. But now what we're seeing is, the great resignation and the data overload are really creating, for many organizations, this operational complexity that's now a problem really amorphously across the organization. It's no longer something that the back office has to deal with, or just the front office; it's really across. >> Yeah, that's right. So you think about just the customer's experience, their expectations are higher than ever. I think there's been a lot of great consumer products that have taught the world what good looks like, and I came from a consumer background, and we measured the customer experience in milliseconds. And so if companies are talking about minutes or hours of outages, their customers are thinking in milliseconds, so that's the disconnect. And so, you have to be focused at that level and have everybody in your organization focused, thinking about milliseconds of customer experience, not seconds, minutes, hours. If that's where you're at, then you're losing customers. And then you think about, you mentioned the great resignation. Well, what does that mean for a given team or organization? That means lost institutional knowledge. So if you have the experts and they leave, now who's the experts? And do you have the processes and the tools and the runbooks to make sure that nothing falls on the ground? Probably not.
Most of the people that we talk to, they're trying to figure it out as they go, and they're getting better, but there's a lot of institutional knowledge that goes out the door when people leave. And so part of our solution is also around our runbook automation and our process automation, and some of our announcements today really help address that problem: to keep the business running, keep the operations running, keep everything kind of moving and the customers happy ultimately, and keep your business going where it needs to go. >> That customer experience is critical for organizations in every industry these days, because we don't, to your point. We'll tolerate milliseconds, but that's about it. Talk to me about, you did this great keynote this morning that I had a chance to watch, and you talked about how PagerDuty is revolutionizing operations, and I thought, I want you to be able to break that down for this audience who may not have heard that. What are those four tenets of revolutionizing operations that PagerDuty is delivering to orgs? >> Sure, so it starts with the data. So you mentioned the data deluge that's happening to everybody, right? And so we actually do, we integrate with over 650 systems to bring all that data in. So if you have an API or webhook, you can actually integrate with PagerDuty and push this data into PagerDuty, and so that's where it starts, all these integrations, and it's everything from a developer perspective, your CI/CD pipelines, your code repositories; from IT, we have those systems instrumented as well; even marketing, more tech stacks we can actually instrument and pull data in. The next step is, now we have all this data, how do we make sense of it? So, we think we have machine learning algorithms that really help you focus your attention and kind of point you to the really relevant work. Part of that is also noise suppression.
So, our algorithms can suppress noise; about 98% of the noise can just be eliminated, and that helps you really focus where you need to spend your time. 'Cause if you think about human time and attention, it's pretty expensive, and it's probably one of your company's most precious resources, that human time, and so you want the humans doing the really meaningful work. The next step is automation, which is, okay, we want the humans doing the special work, so what's the tedium? What's the toil that we can get rid of and push to the machines? 'Cause machines are really good at doing very easy, repetitive tasks, and there's a lot of them that we do day in, day out. The next step is just orchestrating the work and getting everybody in the organization on the same page, and that's where, this morning, I talked about our customer service operations product. Customer service is on the front lines, and they're often getting signals from actual customers that nobody else in the organization may even be aware of yet. So, I was running a system before, and all our metrics are good, and you get a customer feedback saying, "This isn't working for me," and you go look at the metrics and your dashboards, and all looks good, and then you go back and talk to the customer some more, and they're like, "No, it's still not working," and you go back to your data, back to your dashboards, back to your metrics, and sure enough, we had an instrumentation issue, but the customer was giving us that feedback. And so customer service is really on the front lines, and they're often kind of the unsung heroes for your customers, but they're actually really helping to make sure that the right signals are coming to the dev team, to the owners that own it, and even in the case when you think you have everything instrumented, you may be missing something, and that's where they can really help. But our customer service operations product really helps bring everybody on the same
page, and then as the development teams and the IT teams and the SREs push information back to customer service, then they're equipped, empowered to go tell the customer, "Okay, we know about the issue. Thank you." We should have it up in the next 30 minutes, or whatever it is, five minutes, hopefully it's faster than longer, but they can inform the customer to help that customer experience, as opposed to the customer saying, "Oh, I'm just going to go shop somewhere else," or "I'm going to go buy somewhere else or do something else." And the last part is really around, how do we really enable our customers with the best practices? So those million users, the 21,000 companies and organizations we're working with, we've learned a lot around what good looks like. And so we've really embedded that back into our product in terms of our service standards, which really helps SREs and developers set quality standards for how services should be implemented at their company, and then they can actually monitor and track, across all their teams, what's the quality of the services, this team against different teams in their organization, and really raise the quality of the overall system. >> So for businesses, and like I mentioned, DocuSign was on this morning, I know some great brand customers that you guys have. I've seen on the website, Peloton, Slack, a couple that popped out to me. When you're able to work with a customer to help them revolutionize operations, what are some of the business impacts? 'Cause some of the things that jump out to me would be like reduction in churn, retention rate, or some of those things that are really overall impactful to the revenue of a business. >> Absolutely. And so there's a couple different parts of it.
One is all the work PagerDuty is known for: orchestrating the work for a service outage or a website outage, and so that's actually easy to measure, 'cause you can measure your revenue that's coming in, or missed revenue, and how much we've shortened that. So that's, I guess, our kind of history and our legacy, but now we've moved into a lot of the cost side as well. So, helping customers really understand, from an outage perspective, where to focus their time, as opposed to just orchestrating the work. Well now, we can say, we have a new feature we launched last year called Probable Origin. So when you have an outage, we can actually narrow in on where we think the outage is and just give you a few clues: this looks anomalous, for example. So let's start here. So that's still focused on the top line, and then from an automation perspective, there's lots and lots of just toil and noise that people are dealing with on a day in, day out basis, and some of it's easy work, some of it's harder work. One of the ones I really like is our automated diagnostics. So, if you have an incident, one of the first things you have to do is go gather telemetry of what's actually happening on the servers, to say, does the CPU look good? Does the memory look good? Does the disk look good? Does the network look good? And that's all perfect work for automation. And so we can run our automated diagnostics and have all that data pumped directly into the incident, so when the responder engages, it's all right there waiting for them, and they don't have to do all that basic task of getting data, cutting and pasting into the incident, or if you're using one of those old ticketing systems, cutting and pasting into a ticketing system. It's all right there waiting for you. And that's, on average, 15 minutes of time saved during an outage.
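The automated-diagnostics idea, run the telemetry probes at time zero and attach the results to the incident before a responder ever looks at it, can be sketched like this. This is not PagerDuty's actual API; the probe functions and the incident structure are invented stand-ins, with hard-coded values where real probes would sample the host.

```python
# Hypothetical collectors standing in for real CPU/memory/disk probes.
def collect_cpu():    return {"cpu_pct": 91}
def collect_memory(): return {"mem_pct": 47}
def collect_disk():   return {"disk_pct": 63}

DIAGNOSTICS = [collect_cpu, collect_memory, collect_disk]

def open_incident(event):
    """At 'time zero', run every diagnostic and attach the results,
    so the responder finds the telemetry already waiting."""
    incident = {"summary": event["summary"], "diagnostics": {}}
    for probe in DIAGNOSTICS:
        incident["diagnostics"].update(probe())
    return incident

incident = open_incident({"summary": "CPU warning on web-1"})
```

The point is the sequencing: the collection happens when the event arrives, not when a human picks up the page, which is where the quoted 15 minutes comes from.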
And the nice thing about that is, that can all be kicked off at time zero, so you can actually call, from our event orchestration product, directly into automation actions right there when that event first comes in. So you think about, there's a warning for a CPU, and instantly it kicks off these diagnostics, and then within seconds or even minutes, it's in the incident waiting for you to take action. >> One of the things that you also shared this morning that I loved was one of the stats around customer SailPoint, that they had 60 different alerts coming in, and PagerDuty was able to reduce that to one alert. So, a 60x reduction in alerts, getting rid of a lot of noise, allowing them to focus on really those key high escalations that are going to make the biggest impact to their customers and to their business. >> That's right. You think about, you have a high severity incident, like they actually had a database failure, and so, when you're in the heat of the moment and you start getting these alerts, you're trying to figure out, is that one incident? Is it 10 incidents? Is it a hundred incidents that I'm having to deal with? And you probably have a good feeling, like, I know it's probably this thing, but you're not quite sure. And so, with our machine learning, we're able to eliminate a lot of the noise, and in this case it was going from 60 alerts down to one, just to let you know, this is the actual incident, but then also to focus your attention on where we think the cause may be. And you think about all the different teams that historically have had to be pulled in for a large scale incident. We can quickly narrow in on the root cause and just get the right people involved, so we don't have these conference bridges of a hundred people on, which you hear about.
When these large outages happen, everyone's on a call across the entire company, and it's not just the dev teams and IT teams; you have PR, you have legal, you have everybody involved in these. And so the more that we can orchestrate that work and get smarter about using machine learning and some of these other technologies, the more efficient it is for our customers, and ultimately the better it is for their customers. >> Right, and hopefully PR, HR, legal don't have to be some of those incident response leaders that right now we're seeing across the organization. >> Exactly. Exactly. >> So when you're talking with customers, and some of the things that you announced, you mentioned automated actions, incident workflows. What are you hearing from the voice of the customer as the Chief Product Officer, and what influence did that have in terms of this year's vision for the PagerDuty Summit? >> Sure. We listen to our customers all the time. It's one of our leadership principles, and really trying to hear their feedback, and it was interesting, I got sent some of the chat threads during the keynote afterwards, and there's a lot of excitement about the products we announced. So the first one is incident workflows, and this is really a no code workflow based on our recent acquisition of a company called Catalytic, and what it does is, you can think of it as kind of our next generation of response plays, so you can actually go in and build a workflow using no code tooling to say, when this incident happens, or this type of incident happens, here's what that process looks like. And so back to your original comment around the great resignation, that lost institutional knowledge, well now, you're building all this into your processes through your incident response.
And so, I think the incident workflows, if you want to create an incident-specific Slack channel or an incident-specific Zoom bridge, or even just status updates, all that is right there for you, and you can use our out of the box orchestrations or you can define your own, 'cause, back to our customer list, we have some of the biggest companies in the world as customers, and we have a very opinionated product, and so if you're new to the whole DevOps and full service ownership, we help you through that. But then, a lot of our companies are evolving along that continuum, the operational maturity model continuum. And at the other end, we have customers that say, "This is great, but we want to extend it. We want to call this person or send this or update this system here." And so that's where the incident workflows are really powerful, and it lets our customers just tailor it to their processes and really extend it for them. >> And that's GA later this year? >> Later this year, yes. We'll start rolling it out probably in the next few months and then GA later this year. >> Got it. Last question, as we're almost out of time here: what are some of the things, as you talk to customers day in and day out, as you saw the chats from this morning's live keynote, the excitement, the trust that PagerDuty is building with its customers, its partners, et cetera, what excites you about the future? >> So it's really why I came to PagerDuty. I've been here about a year and a half now, but revolutionizing operations, that's a big statement, and I think we need it. I think Jennifer said in her keynote today, work is broken, and I think our data shows it. We surveyed our customers earlier this year, and 42% of the respondents were working more hours in 2021 compared to 2020.
And I don't think anyone goes home and says, if I could only work more hours, or if I could only do more of this tedium, more toil, if I could only do more of that, I think life would be so good. We don't hear that. We don't hear that a lot. We hear that there's a lot of noise. We have massive attrition, like every company does. That's the type of feedback that we get, and so that's what gets me excited about the tools that we're building, and especially, seeing the chat even this morning about some of the announcements, it shows we've been listening, and it shows the excitement in our customers when they're saying, lots of, I'm going to use this tool, that tool; I can just use PagerDuty, which is awesome. >> The momentum is clear and it's palpable, and I love being a part of that. Thank you so much, Sean, for joining me on theCube this afternoon, talking about what's new, what's exciting, and how you guys are fixing work that's broken. That validated my thinking that work was broken, so thank you. >> Happy to be here, and thanks for having me. >> My pleasure. For Sean Scott, I'm Lisa Martin. You're watching theCube's coverage of PagerDuty Summit 22 on the ground from San Francisco. (soft music)
Pat Conte, Opsani | AWS Startup Showcase
(upbeat music) >> Hello and welcome to this CUBE conversation, here presenting the "AWS Startup Showcase: New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools," featuring Opsani for the cloud management and migration track here today. I'm your host John Furrier. Today, we're joined by Patrick Conte, Chief Commercial Officer, Opsani. Thanks for coming on. Appreciate you coming on. Future of AI operations. >> Thanks, John. Great to be here. Appreciate being with you. >> So congratulations on all your success, being showcased here as part of the Startup Showcase, future of AI operations. You've got the cloud scale happening, a lot of new transitions in this, quote, digital transformation, as cloud scale goes next generation. DevOps revolution, as Emily Freeman pointed out in her keynote. What's the problem statement that you guys are focused on? Obviously, AI involves a lot of automation. I can imagine there's a data problem in there somewhere. What's the core problem that you guys are focused on? >> Yeah, it's interesting, because there are a lot of companies that focus on trying to help other companies optimize what they're doing in the cloud, whether it's cost or whether it's performance or something else. We felt very strongly that AI was the way to do that. I've got a slide prepared, and maybe we can take a quick look at that, and that'll talk about the three elements or dimensions of the problem. So we think about cloud services and the challenge of delivering cloud services. You've really got three things that customers are trying to solve for: they're trying to solve for performance, the best performance, and, ultimately, scalability. I mean, applications are growing really quickly, especially in this current timeframe with cloud services and whatnot.
They're trying to keep costs under control because certainly, it can get way out of control in the cloud since you don't own the infrastructure, and more importantly than anything else, which is why it's at the bottom, sort of at the foundation of all this, they want their applications to be a really good experience for their customers. So our customer's customer is actually who we're trying to solve this problem for. So what we've done is we've built a platform that uses AI and machine learning to optimize, meaning tune, all of the key parameters of a cloud application. So those are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things. It seems like it would be simple just to grab some values and plug 'em in, but it's not. It's actually that the combination of them has to be right. Otherwise, you get delays or faults or other problems with the application. >> Andrew, if you can bring that slide back up for a second. I want to just ask one quick question on the problem statement. You got expenditures, performance, customer experience kind of on the sides there. Do you see this tip a certain way depending upon use cases? I mean, is there one thing that jumps out at you, Patrick, from your customer's customer's standpoint? Obviously, customer experience is the outcome. That's the app, whatever. That's whatever we got going on there. >> Sure. >> But are there patterns? 'Cause you can have good performance, but then budget overruns. Or all of them could be failing. Talk about this dynamic with this triangle. >> Well, without AI, without machine learning, you can solve for one of these, only one, right? So if you want to solve for performance like you said, your costs may overrun, and you're probably not going to have control of the customer experience. If you want to solve for one of the others, you're going to have to sacrifice the other two.
With machine learning though, we can actually balance that, and it isn't a perfect balance, and the question you asked is really a great one. Sometimes, you want to over-correct on something. Sometimes, scalability is more important than cost, but what we're going to do, because of our machine learning capability, is always make sure that you're never spending more than you should spend, so we're always going to make sure that you have the best cost for whatever the performance and reliability factors that you want to have are. >> Yeah, I can imagine. Some people leave services on. Happened to us one time. An intern left one of the services on, and like where did that bill come from? So kind of looked back, we had to kind of fix that. There's a ton of action, but I got to ask you, what are customers looking for with you guys? I mean, as they look at Opsani, what you guys are offering, what's different than what other people might be proposing with optimization solutions? >> Sure. Well, why don't we bring up the second slide, and this'll illustrate some of the differences, and we can talk through some of this stuff as well. So really, the area that we play in is called AIOps, and that's sort of a new area, if you will, over the last few years, and really what it means is applying intelligence to your cloud operations, and those cloud operations could be development operations, or they could be production operations. And what this slide is really representing is in the upper slide, that's sort of the way customers experience their DevOps model today. Somebody says we need an application or we need a feature, the developers pull down something from Git. They hack an early version of it. They run through some tests. They size it whatever way they know that it won't fail, and then they throw it over to the SREs to try to tune it before they shove it out into production, but nobody really sizes it properly. It's not optimized, and so it's not tuned either.
When it goes into production, it's just the first combination of settings that works. So what happens is undoubtedly, there's some type of a problem, a fault or a delay, or you push new code, or there's a change in traffic. Something happens, and then, you've got to figure out what the heck. So what happens then is you use your tools. First thing you do is you over-provision everything. That's what everybody does, they over-provision and try to soak up the problem. But that doesn't solve it because now, your costs are going crazy. You've got to go back and find out and try as best you can to get root cause. You go back to the tests, and you're trying to find something in the test phase that might be an indicator. Eventually your developers have to hack a hot fix, and the conveyor belt sort of keeps on going. We've tested this model on every single customer that we've spoken to, and they've all said this is what they experience on a day-to-day basis. Now, if we can go back to the slide, let's talk about the second part, which is what we do and what makes us different. So on the bottom of this slide, you'll see it's really a shift-left model. What we do is we plug in in the production phase, and as I mentioned earlier, what we're doing is we're tuning all those cloud parameters. We're tuning the CPU, the memory, the replicas, all those kinds of things. We're tuning them all in concert, and we're doing it at machine speed, so that's how the customer gets the best performance, the best reliability at the best cost. The way we're able to achieve that is because we're iterating this thing at machine speed, but there's one other place where we plug in and we help the whole concept of AIOps and DevOps, and that is we can plug in in the test phase as well. And so if you think about it, the DevOps guy can actually not have to over-provision before he throws it over to the SREs.
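To make "tuning them all in concert" concrete, here is a minimal, hypothetical sketch in Python: the tunable settings form one vector, and a candidate is judged on cost, latency, and reliability together rather than one knob at a time. The field names, prices, and weights are invented for illustration; they are not Opsani's actual API or pricing.

```python
def score(settings, metrics):
    """Lower is better: combine cost, latency, and reliability into one objective."""
    # Illustrative per-unit prices; real cloud pricing differs.
    cost = settings["cpu"] * 0.04 + settings["memory_gb"] * 0.01
    cost *= settings["replicas"]
    # A fast, cheap configuration is worthless if error rates climb.
    if metrics["error_rate"] > 0.0:
        return float("inf")
    return cost + metrics["p99_latency_ms"] * 0.1

candidate = {"cpu": 2.0, "memory_gb": 4.0, "replicas": 3}
observed = {"p99_latency_ms": 120.0, "error_rate": 0.0}
print(score(candidate, observed))  # combined objective for this combination
```

The point of a combined score is that a cheap, fast configuration still loses if reliability slips, which is exactly the "combination has to be right" observation above.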
He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapse the timeframe, because it means the SREs don't have to hunt for a working set of parameters. They get one from the DevOps guys when they send it over, and this is how the future of AIOps is being really affected by optimization and what we call autonomous optimization, which means that it's happening without humans having to press a button on it. >> John: Andrew, bring that slide back up. I want to just ask another question. The tuning in concert thing is very interesting to me. So how does that work? Are you telegraphing information to the developer from the autonomous workload tuning engine piece? I mean, how does the developer know the right knobs, or where does it get that provisioning information? I see the performance lag. I see where you're solving that problem. >> Sure. >> How does that work? >> Yeah, so actually, if we go to the next slide, I'll show you exactly how it works. Okay, so this slide represents the architecture of a typical application environment that we would find ourselves in, and inside the dotted line is the customer's application namespace. That's where the app is. And so, it's got a bunch of pods, and it's got something for replication, probably an HPA, a horizontal pod autoscaler. And so, what we do is we install inside that namespace two small instances. One is a tuning pod, which some people call a canary, and that tuning pod joins the rest of the pods, but it's not part of the application. It's actually separate, but it gets the same traffic. We also install something we call Servo, which is basically an action engine. What Servo does is take the metrics from whatever metric system is collecting all those different settings and whatnot from the working application. It could be something like Prometheus.
It could be an Envoy sidecar, or more likely, it's something like AppDynamics, or we can even collect metrics off of Nginx, which is at the front of the service. We can plug in anywhere where those metrics are. We can pull the metrics forward. Once we see the metrics, we send them to our backend. The Opsani SaaS service is our machine learning backend. That's where all the magic happens, and what happens then is that service sees the settings, sends a recommendation to Servo, Servo sends it to the tuning pod, and we tune until we find optimal. And so, that iteration typically takes about 20 steps. It depends on how big the application is and whatnot, how fast those steps take. It could be anywhere from seconds to minutes to 10 to 20 minutes per step, but typically within about 20 steps, we can find optimal, and then we'll come back and we'll say, "Here's optimal. Do you want to promote this to production?" and the customer says, "Yes, I want to promote it to production because I'm saving a lot of money or because I've gotten better performance or better reliability." Then, all he has to do is press a button, and all that stuff gets sent right to the production pods, and all of those settings get put into production, and now he's actually saving the money. So that's basically how it works. >> It's kind of like when I want to go to the beach, I look at weather.com, I check the forecast, and I decide whether I want to go or not. You're getting the data, so you're getting a good look at the information, and then putting that into a policy standpoint. I get that, makes total sense. Can I ask you, if you don't mind, expanding on the performance and reliability and the cost advantage? You mentioned cost. How is that impacting? Give us an example of some performance impact, reliability, and cost impacts. >> Well, let's talk about what those things mean, because a lot of people might have different ideas about what they think those mean.
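The measure-recommend-apply loop just described can be sketched in a few lines of Python. Everything here is a stand-in: the metric collection, the recommendation step (a trivial random search rather than real machine learning), and the function names are all invented for illustration, not the actual Servo or Opsani interfaces.

```python
import random

def collect_metrics(settings):
    # Stand-in for pulling metrics from Prometheus, AppDynamics, Nginx, etc.
    latency = 200.0 / settings["cpu"] + 10.0 * settings["replicas"]
    return {"p99_latency_ms": latency}

def recommend(history):
    # Stand-in for the ML backend proposing the next candidate;
    # here it is a naive random search over the CPU setting.
    return {"cpu": random.uniform(0.5, 4.0), "replicas": 2}

random.seed(0)  # deterministic for the example
history = []
best = None
for step in range(20):  # "typically within about 20 steps"
    settings = recommend(history)        # backend -> Servo
    metrics = collect_metrics(settings)  # tuning pod under real traffic
    history.append((settings, metrics))
    if best is None or metrics["p99_latency_ms"] < best[1]["p99_latency_ms"]:
        best = (settings, metrics)
# `best` is the candidate that would be offered for promotion to production.
```

The structural point the sketch preserves is that only settings and metrics cross the loop boundary, with a human approval step before anything is promoted.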
So from a cost standpoint, we're talking about cloud spend ultimately, but it's represented by the settings themselves, so I'm not talking about what deal you cut with AWS or Azure or Google. I'm talking about whatever deal you cut, we're going to save you 30, 50, 70% off of that. So it doesn't really matter what cost you negotiated. What we're talking about is right-sizing the settings for CPU and memory, replicas. It could be Java settings, garbage collection time ratios, or heap sizes, things like that. Those are all the kinds of things that we can tune. The thing is most of those settings have an unlimited number of values, and this is why machine learning is important, because, if you think about it, even with just a handful of settings and a handful of values per setting, you're quickly talking about an enormous number of combinations. So to find optimal, you've got to have machine speed to be able to do it, and you have to iterate very, very quickly to make it happen. So that's basically the thing, and that's really one of the things that makes us different from anybody else, and if you put that last slide back up, the architecture slide, for just a second, there's a couple of key words at the bottom of it that I want to focus on. The first is continuous. So continuous really means that we're on all the time. It's not plug us in one time, make a change, and then walk away. We're actually always measuring and adjusting, and the reason why this is important is in the modern DevOps world, your traffic level is going to change. You're going to push new code. Things are going to happen that are going to change the basic nature of the software, and you have to be able to tune for those changes. So continuous is very important. Second thing is autonomous. This is designed to take pressure off of the SREs.
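The back-of-the-envelope arithmetic behind that combinatorics point: even a small, discretized search space explodes quickly, since the count grows as values to the power of settings. Eight settings with eight values each already gives about 16.8 million combinations, and just two more settings pushes the count past a billion.

```python
# Combinations grow as values ** settings.
settings, values = 8, 8
print(values ** settings)  # 16777216 (~16.8 million combinations)
print(values ** 10)        # 1073741824 (over a billion with two more settings)
```

And that is with only eight discrete values per setting; with effectively continuous values, no human team can search the space by hand, which is the case for machine speed.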
It's not designed to replace them, but to take the pressure off of them having to check pager all the time and run in and make adjustments, or try to divine or find an adjustment that might be very, very difficult for them to do so. So we're doing it for them, and that scale means that we can solve this for, let's say, one big monolithic application, or we can solve it for literally hundreds of applications and thousands of microservices that make up those applications and tune them all at the same time. So the same platform can be used for all of those. You originally asked about the parameters and the settings. Did I answer the question there? >> You totally did. I mean, the tuning in concert. You mentioned early as a key point. I mean, you're basically tuning the engine. It's not so much negotiating a purchase SaaS discount. It's essentially cost overruns by the engine, either over burning or heating or whatever you want to call it. I mean, basically inefficiency. You're tuning the core engine. >> Exactly so. So the cost thing is I mentioned is due to right-sizing the settings and the number of Replicas. The performance is typically measured via latency, and the reliability is typically measured via error rates. And there's some other measures as well. We have a whole list of them that are in the application itself, but those are the kinds of things that we look for as results. When we do our tuning, we look for reducing error rates, or we look for holding error rates at zero, for example, even if we improve the performance or we improve the cost. So we're looking for the best result, the best combination result, and then a customer can decide if they want to do so to actually over-correct on something. 
We have the whole concept of guardrails, so if performance is the most important thing, or maybe for some customers, cost is the most important thing, they can actually say, "Well, give us the best cost," or "Give us the best performance and the best reliability, but at this cost," and we can then use that as a service-level objective and tune around it. >> Yeah, it reminds me back in the old days when you had filtering whitelists or blacklists of addresses that can go through, say, a firewall or a device. You have billions of combinations now with machine learning. It's essentially scaling the same concept to unbelievable levels. These guardrails are now in place, and that's super cool and I think a really relevant call-out point, Patrick, to kind of highlight that. At this kind of scale, you need machine learning, you need the AI to essentially identify quickly the patterns or combinations that are actually happening so a human doesn't have to waste their time on what can be filled by basically a bot at that point. >> So John, there's just one other thing I want to mention around this, and that is one of the things that makes us different from other companies that do optimization. Basically, every other company in the optimization space creates a static recommendation, basically their recommendation engines, and what you get out of that is, let's say, a manifest of changes, and you hand that to the SREs, and they put it into effect. Well, the fact of the matter is that the traffic could have changed by then. It could have spiked up, or it could have dropped below normal. You could have introduced a new feature or some other code change, and at that point in time, you've already instituted these changes. They may be completely out of date. That's why the continuous nature of what we do is important and different. >> It's funny, even the language that we're using here: network, garbage collection. I mean, you're talking about tuning an engine, an operating system.
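A hypothetical sketch of the guardrail idea described above: the optimizer searches freely, but any candidate that violates the customer's stated service-level objectives is rejected outright. The field names and thresholds are invented for illustration, not Opsani's actual configuration format.

```python
# Illustrative guardrails expressed as service-level objectives.
GUARDRAILS = {
    "max_monthly_cost": 5000.0,   # "give us the best performance, but at this cost"
    "max_p99_latency_ms": 250.0,
    "max_error_rate": 0.0,        # hold error rates at zero
}

def within_guardrails(result):
    """Reject any candidate configuration that violates an SLO."""
    return (result["monthly_cost"] <= GUARDRAILS["max_monthly_cost"]
            and result["p99_latency_ms"] <= GUARDRAILS["max_p99_latency_ms"]
            and result["error_rate"] <= GUARDRAILS["max_error_rate"])

ok = within_guardrails({"monthly_cost": 4200.0,
                        "p99_latency_ms": 180.0,
                        "error_rate": 0.0})
print(ok)  # True: this candidate respects all three objectives
```

Treating the guardrail as a hard constraint, and optimizing only inside it, is what lets a customer over-correct on the dimension they care about most without blowing the others up.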
You're talking about stuff that's moving up the stack to the application layer, hence this new kind of elimination of these siloed waterfalls, as you pointed out in your second slide, into kind of one integrated operating environment. So when you have that, you think about the data coming in, and you have to think about the automation, just like self-correcting, error-correcting, tuning, garbage collection. These are words that we've been kicking around, but at the end of the day, it's an operating system. >> Well, in the old days of automobiles, which I remember 'cause I'm an old guy, if you wanted to tune your engine, you would probably rebuild your carburetor and turn some dials to get the air-oxygen-gas mix right. You'd re-gap your spark plugs. You'd probably make sure your points were right. There'd be four or five key things that you would do. You couldn't do them at the same time unless you had a magic wand. So we're the magic wand that, in the modern world, is sort of that thing you plug in that tunes everything at once within that engine, which is all now electronically controlled. So those are the big differences as you think about what we used to do manually and what now can be done with automation. It can be done much, much faster without humans having to get their fingernails greasy, let's say.
>> The SRE role is becoming more and more important, just as you said, and the reason is because somebody has to get that application ready for production. The DevOps guys don't do it. That's not their job. Their job is to get the code finished and send it through, and the SREs then have to make sure that that code will work, so they have to find a set of settings that will actually work in production. Once they find that set of settings, the first one they find that works, they'll push it through. It's not optimized at that point in time because they don't have time to try to find optimal, and if you think about it, the difference between a machine learning backend and an army of SREs that work 24-by-seven, we're talking about being able to do the work of many, many SREs that never get tired, that never need to go play video games, to unstress or whatever. We're working all the time. We're always measuring, adjusting. A lot of the companies we talked to do a once-a-month adjustment on their software. So they put an application out, and then they send in their SREs once a month to try to tune the application, and maybe they're using some of these other tools, or maybe they're using just their smarts, but they'll do that once a month. Well, gosh, they've pushed code probably four times during the month, and they probably had a bunch of different spikes and drops in traffic and other things that have happened. So we just want to help them spend their time on making sure that the application is ready for production. Want to make sure that all the other parts of the application are where they should be, and let us worry about tuning CPU, memory, Replica, job instances, and things like that so that they can work on making sure that application gets out and that it can scale, which is really important for them, for their companies to make money is for the apps to scale. >> Well, that's a great insight, Patrick. 
You mentioned you have a lot of great customers, and certainly your customer base are early adopters, pioneers, and growing big companies, because they have DevOps. They know that they're seeing a DevOps engineer and an SRE. Some of the other enterprises that are transforming think the DevOps engineer is the SRE person 'cause they're having to get transformed. So you guys are at the high end and getting now the new enterprises as they come on board to cloud scale. You have a huge uptake in Kubernetes, starting to see the standardization of microservices. People are getting it, so I got to ask you, can you give us some examples of your customers, how they're organized, some case studies, who uses you guys, and why they love you? >> Sure. Well, let's bring up the next slide. We've got some customer examples here, and your viewers, our viewers, can probably figure out who these guys are. I can't tell them, but if they go on our website, they can sort of put two and two together, but the first one there is a major financial application SaaS provider, and in this particular case, they were having problems that they couldn't diagnose within the stack. Ultimately, they had to apply automation to it, and what we were able to do for them was give them a huge jump in reliability, which was actually the biggest problem that they were having. We gave them 5,000 hours back a month in terms of the application. They were having PagerDuty alerts going off all the time. We actually gave them better performance. We gave them a 10% performance boost, and we dropped their cloud spend for that application by 72%. So in fact, it was an 80-plus % price performance or cost performance improvement that we gave them, and essentially, we helped them tune the entire stack. This was a hybrid environment, so this included VMs as well as more modern architecture.
Today, I would say the overwhelming majority of our customers have moved off of VMs and are in a containerized environment, and even more to the point, Kubernetes, which we find a very, very high percentage of our customers have moved to. So most of the work we're doing today with new customers is around that, and if we look at the second and third examples here, those are examples of that. In the second example, that's a company that develops websites. It's one of the big ones out in the marketplace that, let's say, if you were starting a new business and you wanted a website, they would develop that website for you. So their internal infrastructure is all brand new stuff. It's all Kubernetes, and they were actually getting decent performance. We held their performance at their SLO. We achieved a 100% error-free scenario for them at runtime, and we dropped their cost by 80%. So for them, they needed us to hold serve, if you will, on performance and reliability and get their costs under control, because that's a cloud-native company. Everything there is cloud cost. So the interesting thing is it took us just nine steps, nine of our iterations, to actually get to optimal. So it was very, very quick, and there was no integration required. In the first case, we actually had to do a custom integration for an underlying platform that was used for CI/CD, but with the- >> John: Because of the hybrid, right? >> Patrick: Sorry? >> John: Because it was hybrid, right? >> Patrick: Yes, because it was hybrid, exactly. But with the second one, we just plugged right in, and we were able to tune the Kubernetes environment just as I showed in that architecture slide, and then the third one is one of the leading application performance monitoring companies on the market. They have a bunch of their own internal applications, and those use a lot of cloud spend.
They're actually running Kubernetes on top of VMs, but we don't have to worry about the VM layer. We just worry about the Kubernetes layer for them, and what we did for them was we gave them a 48% performance improvement in terms of latency and throughput. We dropped their error rates by 90% which is pretty substantial to say the least, and we gave them a 50% cost delta from where they had been. So this is the perfect example of actually being able to deliver on all three things which you can't always do. It has to be, sort of all applications are not created equal. This was one where we were able to actually deliver on all three of the key objectives. We were able to set them up in about 25 minutes from the time we got started, no extra integration, and needless to say, it was a big, happy moment for the developers to be able to go back to their bosses and say, "Hey, we have better performance, "better reliability. "Oh, by the way, we saved you half." >> So depending on the stack situation, you got VMs and Kubernetes on the one side, cloud-native, all Kubernetes, that's dream scenario obviously. Not many people like that. All the new stuff's going cloud-native, so that's ideal, and then the mixed ones, Kubernetes, but no VMs, right? >> Yeah, exactly. So Kubernetes with no VMs, no problem. Kubernetes on top of VMs, no problem, but we don't manage the VMs. We don't manage the underlay at all, in fact. And the other thing is we don't have to go back to the slide, but I think everybody will remember the slide that had the architecture, and on one side was our cloud instance. The only data that's going between the application and our cloud instance are the settings, so there's never any data. There's never any customer data, nothing for PCI, nothing for HIPPA, nothing for GDPR or any of those things. So no personal data, no health data. Nothing is passing back and forth. Just the settings of the containers. 
>> Patrick, while I got you here, 'cause you're such a great, insightful guest, thank you for coming on and showcasing your company. Kubernetes, real quick. How prevalent is this mainstream trend? Because you're seeing such great examples of performance improvements, SLAs being met, SLOs being met. How real is Kubernetes for the mainstream enterprise as they're starting to use containers to tip their legacy and get into the cloud-native and certainly hybrid and soon-to-be multi-cloud environment? >> Yeah, I would not say it's dominant yet. Of container environments, I would say it's dominant now, but for all environments, it's not. I think the larger legacy companies are still going through that digital transformation, and so what we do is we catch them at that transformation point, and we can help them develop, because as we remember from the AIOps slide, we can plug in at that test level and help them sort of pre-optimize as they're coming through. So we can actually help them be more efficient as they're transforming. The other side of it is the cloud-native companies. So you've got the legacy companies, brick and mortar, who are desperately trying to move to digitization. Then, you've got the ones that are born in the cloud. Most of them aren't on VMs at all. Most of them are on containers right from the get-go, but you do have some in the middle who have started to make a transition, and what they've done is they've taken their native VM environment and they've put Kubernetes on top of it so that way, they don't have to scuttle everything underneath it. >> Great. >> So I would say it's mixed at this point. >> Great business model, helping customers today, and being a bridge to the future. Real quick, what licensing models, how to buy, what promotions do you have for Amazon Web Services customers? How do people get involved? How do you guys charge? >> The product is licensed as a service, and the typical term is annual.
We license it by application, so let's just say you have an application, and it has 10 microservices. That would be a standard application. We'd have an annual cost for optimizing that application over the course of the year. We have a large application pack, if you will, for, let's say, applications of 20 services, something like that, and then we also have a platform, what we call the Opsani platform, and that is for environments where the customer might have hundreds of applications and/or thousands of services, and we can plug into their deployment platform, something like Harness or Spinnaker or Jenkins or something like that, or we can plug into their cloud Kubernetes orchestrator, and then we can actually discover the apps and optimize them. So we've got environments for both single apps and for many, many apps, and with the same platform. And yes, thanks for reminding me. We do have a promotion for our AWS viewers. If you reference this presentation, and you look at the URL there, which is opsani.com/awsstartupshowcase, can't forget that, you will, number one, get a free trial of our software. If you optimize one of your own applications, we're going to give you a set of Oculus goggles, the virtual reality goggles. And we have one other promotion for your viewers and for our joint customers here, and that is if you buy an annual license, you're going to get actually 15 months. So that's what we're putting on the table. It's actually a pretty good deal. The Oculus is contingent, though. That promotion is contingent on you actually optimizing one of your own services. So it's not a synthetic app. It's got to be one of your own apps, but that's what we've got on the table here, and I think it's a pretty good deal, and I hope you guys take us up on it.
Great product, bridge to the future, solving a lot of problems. A lot of use cases there. Congratulations on your success. Thanks for coming on. >> Thank you so much. This has been excellent, and I really appreciate it. >> Hey, thanks for sharing. I'm John Furrier, your host with theCUBE. Thanks for watching. (upbeat music)
Michael Kearns, Virtasant | Cloud City Live 2021
(upbeat music) >> Okay, we're back here at theCUBE on this floor in CLOUD CITY, the center of all the action at Mobile World Congress. I'm John Brown, your host. Michael Kearns, CTO of Virtasant, is here with me remotely, because this is a virtual event as well as a hybrid event. The first industry hybrid event, we'd be back in real life on the floor, Michael, you're coming in remotely. Thanks for joining us here in theCUBE in CLOUD CITY. >> Thanks for having me. >> We were just talking on camera about Michigan and football, all that good stuff, while we were waiting for Adam, but let's get into what you guys are doing. You've got some great cloud news we're going to get to, but take a minute to explain what you guys do first.
And we focused first on AWS, as it's the biggest cloud provider, but starting this week, we want to announce we're actually going live with our GCP product, which means people who are on the GCP cloud platform can now leverage our platform to constantly understand usage patterns and spend, and automatically take action to reduce spend. So we typically see customers save over 50% when they use our platform. So now GCP customers can take advantage of the same capabilities that our AWS customers take advantage of every day. >> Talk about the relationships as you get deeper, and this seems to be the pattern, I want to just unpack it a little bit if you don't mind: the relationship with Google and this announcement, and Amazon, you're tightly coupled with them, is it more integration? Talk about what makes these deals different and special for your customers. What's special about them? What's the big deal?
So we want to make sure that whether, you know, an organization goes across all cloud providers or they choose one, we can support them no matter what the workloads look like. And so for us, you know, developing deep relationships with each of the public cloud providers, but also, you know, expanding our full set of capabilities to support all of them, is critically important, because we do think that there's going to be, you know, a handful of large public cloud providers, and obviously AWS and GCP are among them. >> Yeah, I mean, I talk to people all the time, and even, you know, we're an Amazon customer with a pretty robust cloud, and the bill's out of control: what's this charge for? There's more services to tap into, you know, it's like the first one's on me, you know? And then next thing you know, you're consuming a hell of a lot of new services, but there's value there and there's breadth to the cloud, we all love that. But just as a random aside here, I want to get your thoughts real quick, if you don't mind: this idea of a cloud economist has become part of a new role in an organization, certainly SREs and DevOps. Then you're starting to get into people who can actually squint through the data, understand the consumption, and be more on the economics side, because people are changing how they report their earnings. They're changing how they report their KPIs based upon the usage and costs. What, is this real? What are your thoughts on that? I know that's a little random, but I want to get your thoughts on that. >> Well, yeah, it's interesting that that's been a development. What I will say is, you know, the economics of cloud are complicated and they're still changing and still emerging, so I think that's probably more of a reaction to how dynamic the environment is than kind of a long-term trend.
I mean, admittedly, for us, we hope that, you know, a lot of that analysis and the data that's required will be provided by our platform. So you can think about it as, you know, a digital or AI-powered cloud economist. So I don't know, hopefully our customers can use the platform and get everything they need, and they won't need to go out and hire a cloud economist. That sounds expensive. >> Well, I think that sounds like a great opportunity to make that go away, where you don't have to waste a resource to go through the cost side. I want to get your thoughts on this. This comes up all the time, certainly on Twitter, I'm always riffing on it. It comes up in a lot of my interviews and private chats with people about their cloud architecture: spend can get out of control pretty quickly. And data is a big part of it. Moving data is always going to be... Especially Amazon and Google, moving data in and out of the cloud is great. Now with the edge, I just talked to Bill Vass at Amazon Web Services. He's the VP of engineering. You can literally bring the cloud to the edge, and all the clouds are going to be doing this, these edge hubs. So that's going to process data at the edge, but it's also going to open up more services, right? So, you know, it's complicated enough as it is, spend is getting out of control, and it only seems to be getting more out of control. How do you talk to customers? They want to not be afraid, they want to jump in, but they also want to have a hedge. Yeah, what's your take on this story? >> I think there's a lot of debate right now as to whether or not, you know, moving to the cloud from a cost perspective is cost-effective or more costly. And there's a pretty healthy debate going on at the moment.
I think the reality is, you know, yes, the cloud makes it easier for you to take on new services and bring on new things, and that of course drives spend, but it also unlocks incredible possibilities. What we try to do is help organizations take advantage of those possibilities and the capabilities of the cloud while managing spend. It's a complex problem, but it's a solvable problem. So for us, we think that, you know, the job of the cloud providers is to continue to innovate and continue to bring more and more capability to bear so that organizations can transform through technology, and the job of the teams using that technology is to really leverage those capabilities, to build and to innovate and to serve their customers. And what we want to do is enable them to do that in a cost-effective manner, and we believe, and we have data to prove, that if you do public cloud right, it's cheaper. Much like at the turn of the industrial revolution, factories used to have their own power plants, because you couldn't effectively, reliably, and cost-effectively generate power at scale. Obviously no one does that now. And I think with the cloud providers, that's the same thing. I mean, they're investing in proprietary hardware, tons of software, tons of automation. They're highly secure. You know, at the end of the day, they're always going to be able to provide a given capability at a lower cost point. Of course they need to make a profit, so there's a bit of margin in there, but, you know, at the end of the day, we think that both the flexibility and capability of the cloud, combined with their ability to operate at scale, give you a better value proposition, especially if you do it right. And that's what we want to focus on: the answer is there. You just need the right data and the right intelligence to find it. >> Totally, I totally agree with you.
In fact, I had a big debate with Martin Casado at Andreessen Horowitz about cloud repatriation, and he was calling it his paradigm: do you focus on the cost or the revenue? And obviously they have Dropbox, which is a big example of that, and I even interviewed the Zynga guys, and they actually went back to Amazon, although they didn't report that. But I'm a big believer that if you can't get the new revenue, then you're in cost mode, and there are the issues, but again, I don't want to go there right now, I'll talk about that another time. I want to get the playbook, so first of all, I love what you do, I think it's an opportunity to take that heavy lifting away from customers around understanding cost optimization. A lot of people don't know how to do it. So take us through a playbook. What are some best practices that you guys have seen to help people figure this out? What do you say to somebody: help me, Michael, I'm in a world of hurt, what do I do? What's the playbook? Can you give some examples of a day in the life? >> Sure, so I think the first thing is know what you're spending money on, which sounds obvious, but you know, cloud environments are complicated, especially at scale. There's hundreds of thousands of SKUs and lots of different usage patterns. And I think the first thing is understand what you're spending money on. Number two is understand what you're getting for that spend. So, you know, what value are you driving with that spend? And then number three is put the information in the hands of the people who can do something about it. And I think that is one of the things that we really focus on: you know, we built our product from an engineering focus first. It was engineers solving the problem of understanding how to keep cloud costs under control. And so our whole principle is give the people working with the technology the data to make good decisions, and give them the power to act on it.
And so, you know, a lot of companies say, "Oh, we're spending more over here. Maybe we should look at that." But what we believe is actually be specific: where are you spending money? Where exactly are you spending too much? And what should you do about that? And give that information to the people who can take action, which are the engineers. And then lastly, make it important in the organization, because there's a ton of competing priorities. And what we've found is that, you know, where there's leadership support, there's results. And so I think if you do those four things, you know, results will follow. Now, obviously, you know, you need to understand specific utilization patterns and know what to do with different kinds of resources, and all of that stuff is complicated, but there are certainly solutions out there, ours included, who can help you with that. So if you get the other four things right, plus you have some help, you can keep it under control, and actually not just keep it under control, but operate in an environment that's much cheaper than hosting all this technology yourself, and much more flexible. >> That's a great point. I mean, the fact that you mentioned the engineering piece earlier, that is so true. People I've talked to, in our experience, it's pretty common: the DevOps team tends to get involved in things like making sure you're buying reserved instances or all kinds of ways to optimize patterns, and that's also an issue, right? I mean, first of all, it makes sense that they're doing it, but also engineering time is being spent on essentially accounting at that point. It demonstrates the shift, I'm not saying it's good or bad. I'm just saying we've got to be realistic.
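Michael's playbook (know what you're spending, know what you're getting for it, and put specific findings in front of the engineers who own the resources) can be sketched as a toy script. Every field name, team name, cost, and threshold below is invented for illustration; this is not Virtasant's product or any real billing API.

```python
from collections import defaultdict

# Hypothetical usage records; the fields, names, and numbers are invented
# for illustration and do not come from any real billing export.
records = [
    {"service": "api-prod",    "owner": "team-api",  "cost": 4200.0, "utilization": 0.71},
    {"service": "batch-etl",   "owner": "team-data", "cost": 3100.0, "utilization": 0.18},
    {"service": "dev-sandbox", "owner": "team-api",  "cost": 900.0,  "utilization": 0.05},
]

def spend_by_owner(records):
    """Steps 1 and 2: know what you are spending and who owns that spend."""
    totals = defaultdict(float)
    for r in records:
        totals[r["owner"]] += r["cost"]
    return dict(totals)

def recommendations(records, util_threshold=0.25):
    """Step 3: hand the owning engineers something specific and actionable,
    not a vague 'spend is up over here' report."""
    return [
        (r["owner"], r["service"], f"utilization {r['utilization']:.0%}, consider rightsizing")
        for r in records
        if r["utilization"] < util_threshold
    ]

print(spend_by_owner(records))
for owner, service, note in recommendations(records):
    print(f"{owner}: {service}: {note}")
```

The point of the sketch is the shape of the output: a total per owner (who is accountable) and a per-resource action (what exactly to change), which matches the "be specific, and give it to the engineers" steps of the playbook.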
It's a time sink for the engineers when they're doing accounting, not engineering. Or should they? This is a legit question, it's not so much they should or shouldn't. I mean, if you say to someone, "Hey, you're paid to build and write software and you're spending your time solving accounting problems," that's obviously a mismatch. But when you talk about SREs and DevOps, Michael, it's kind of, it might not be a bad thing, right? I mean, so how do people react to that? Are they kind of scratching their heads in the same way? Or are you guys the solution to that? >> Well, I think that at first they are, but for us, at least, you know, we don't want them trying to understand the intricacies of a savings plan or understanding kind of the different options for compute instances. What we want them to do is make the call, and we give them all the information. So our approach is: give them all the information they need to quickly make a decision, let them make a decision, like push a button, and then let the change happen automatically. So if you think about it, you know, the amount of time they spend is a minute. That's the goal, because then we can use their expertise. So it's not a finance person or an accountant doing research and making decisions that may or may not make technical sense, and then looping in a bunch of people and they all talk, and all of that kind of whole process. It's now: here is a data-driven observation and recommendation. You have context to say yes or no. If you push the button and say yes, then, you know, the change happens. If you say no, the system learns. >> It's building right into the pipeline, and they're shifting left with security, it's the same concept. It's really a great thing. I really think you're onto something big. I love this story. It's kind of one of those things where the reality's there. Michael, we've got 30 seconds left.
I want to get your thoughts and have you put a plug in for the company: what you guys are doing, who are you looking to hire? You got a 30 second plug, go plug the company, what do you got? >> Well, you know, we think that for any organization, big or small, trying to make the most of the public cloud and be cloud first, we bring a unique set of expertise, automation, and technology capabilities to bear, to help them thrive in the cloud and make the most of it. So, you know, obviously we would love to work with any company that wants to be cloud first and fully embrace the public cloud. I think we've got all the tools to help them thrive. >> Yeah, and I think the confluence of business logic, technology, and engineering working together is a home run. It's only going to get stronger, so congratulations. Thanks for coming on theCUBE. >> Thank you. >> Adam, back to you in the studio for more action, theCUBE is out, we'll see you later.
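The push-button loop Michael describes, a data-driven observation and recommendation that an engineer approves or declines in about a minute, with the system learning from a "no", might look roughly like this toy sketch. The class name, the single utilization threshold, and the suppression rule are all invented here; a real optimizer would weigh far more signals than one number.

```python
class OptimizationLoop:
    """Toy sketch of the observe/recommend/approve cycle: surface a specific
    action, let the engineer say yes or no, and learn from the no."""

    def __init__(self, util_threshold=0.3):
        self.util_threshold = util_threshold
        self.declined = set()  # services an engineer has already said "no" to

    def recommend(self, service, utilization):
        # Data-driven observation: surface a concrete action, but only for
        # services the owning engineer has not previously declined to change.
        if service in self.declined or utilization >= self.util_threshold:
            return None
        return f"downsize {service}"

    def respond(self, service, accepted):
        # "If you say no, the system learns": stop re-surfacing that finding.
        if not accepted:
            self.declined.add(service)

loop = OptimizationLoop()
print(loop.recommend("batch-etl", 0.12))   # a concrete, one-click suggestion
loop.respond("batch-etl", accepted=False)  # the engineer says no
print(loop.recommend("batch-etl", 0.12))   # None: suppressed from now on
```

The design choice worth noticing is that the "accounting" work stays in the system, and only the yes/no judgment, the part that needs engineering context, reaches the engineer.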
SUMMARY :
center of all the action into what you guys are doing. the cloud from getting to the you guys have some news here take advantage of the same And this seems to be the pattern going to be, you know, to tap into, you know, we hope that, you know, the cloud to the edge as to whether or not, you know, I love what you do, I And what we've found is that, you know, the fact that you mentioned earlier, at least it's, you know, the company, what you guys are doing, think that, you know, It's only going to get more Adam, back to you in
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Michael | PERSON | 0.99+ |
Zynga | ORGANIZATION | 0.99+ |
Bill Vass | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Michael Kearns | PERSON | 0.99+ |
John Brown | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Adam | PERSON | 0.99+ |
30 second | QUANTITY | 0.99+ |
Greg | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
Martine Casada | PERSON | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Michigan | LOCATION | 0.99+ |
Dropbox | ORGANIZATION | 0.99+ |
Virtasant | PERSON | 0.99+ |
this year | DATE | 0.99+ |
over 50% | QUANTITY | 0.99+ |
Virta San | ORGANIZATION | 0.98+ |
each | QUANTITY | 0.98+ |
SRS | ORGANIZATION | 0.98+ |
Virtasant | ORGANIZATION | 0.97+ |
GCP | ORGANIZATION | 0.97+ |
this week | DATE | 0.96+ |
SREs | ORGANIZATION | 0.95+ |
four things | QUANTITY | 0.95+ |
ORGANIZATION | 0.94+ | |
2 | QUANTITY | 0.94+ |
3 | QUANTITY | 0.94+ |
first | QUANTITY | 0.93+ |
first one | QUANTITY | 0.93+ |
2021 | DATE | 0.92+ |
each cloud provider | QUANTITY | 0.91+ |
hundreds of thousands of skews | QUANTITY | 0.9+ |
Cloud City Live | TITLE | 0.88+ |
Mobile World Congress | EVENT | 0.87+ |
DevOps | ORGANIZATION | 0.87+ |
Number two | QUANTITY | 0.86+ |
first thing | QUANTITY | 0.86+ |
one cloud provider | QUANTITY | 0.85+ |
Andreessen | ORGANIZATION | 0.85+ |
Horowitz | PERSON | 0.85+ |
a minute | QUANTITY | 0.84+ |
CLOUD CITY | LOCATION | 0.83+ |
both | QUANTITY | 0.82+ |
CTO | PERSON | 0.82+ |
first industry | QUANTITY | 0.7+ |
number three | QUANTITY | 0.69+ |
1 | QUANTITY | 0.68+ |
mobile world congress | ORGANIZATION | 0.65+ |
theCUBE | ORGANIZATION | 0.56+ |
eople | ORGANIZATION | 0.54+ |
software | QUANTITY | 0.51+ |
minute | QUANTITY | 0.49+ |
2021 027 Jim Walker
(bright upbeat music) >> Hello, and welcome back to the DockerCon 2021 virtual coverage. I'm John Furrier, host of theCUBE, here in Palo Alto with a remote interview with a great guest, CUBE alumni Jim Walker, VP of Product Marketing at Cockroach Labs. Jim, great to see you remotely coming into theCUBE. Normally we're in person, soon we'll be back in real life. Great to see you. >> Great to see you as well, John, I miss you. I miss seeing you live and in person. So this will have to do, I guess, right? >> We had the first multi-cloud event in New York City. You guys had, I think, one of the last events that was going on towards the end of the year before the pandemic hit. So a lot's happened with Cockroach Labs over the past few years: accelerated growth, funding, amazing stuff. Here at DockerCon, containerization of the world, containers everywhere and in all places, hybrid, pure cloud, edge, everywhere. Give us the update on what's going on with Cockroach Labs, and then we'll get into what's going on at DockerCon. >> Yeah, Cockroach Labs, this has been a pretty fun ride. I mean, I think about two and a half years now, and John, it's been phenomenal as the world kind of wakes up to distributed systems and the containerization of everything. I'm happy we're at DockerCon talking about containerization, 'cause I think it has radically changed the way we think about software, but more importantly, it's starting to take hold. I think a lot of people would say, oh, it's already taken hold, but if you start to think about, like, just these kind of modern applications that are depending on data, what does containerization mean for the database? Well, Cockroach has got a pretty good story. I mean, gosh, before Escape, I think the last time I talked to you, I was at CoreOS and we were playing the whole Kubernetes game, and I remember Alex Polvi talking about GIFEE, Google Infrastructure For Everyone, or For Everyone Else I should say.
And I think that's what we've seen kind of happen with the infrastructure layer, but I think that last layer of infrastructure is the database. Like, I really feel like the database is that dividing line between the business logic and infrastructure. And it's really exciting to see just massive, huge customers come to Cockroach to rethink what the database means in cloud, right? What does the database mean when we move to distributed systems, and that sort of thing. And so momentum has been building here. We are upwards of, oh gosh, over 300 paying customers now, thousands of Cockroach customers in the wild out there, but we're seeing this huge, massive attraction to CockroachCloud, which is a great name, come on, Johnny, you've got to say, right? And our database as a service. So getting that out there and seeing the uptake has just been phenomenal over the past couple of years. >> Yeah, and you've got to love the Cockroach name, love it, survives nuclear war and winter, all that good stuff, as they say. But really, the reality is that it's kind of an interesting play on words, because one of the trends that we've been talking about, I mean, you and I have been telling this for years with our CUBE coverage: Amazon Web Services early on was very clear, about a decade ago, that there wasn't going to be one database to rule the world. There are going to be many, many databases. And as you started getting into these cloud native deployments at scale, use your database of choice was the developer ethos, just whatever it takes to get the job done. Now you start integrating this in a horizontally scalable way with the cloud, and you have new kinds of scale, cloud scale. And it kind of changed the game on the always-on availability question, which is: how do I get high availability? How do I keep things running?
And that is the number one developer challenge. Whether it's infrastructure as code, whether it's security shifting left, it all comes down to making sure stuff's running at scale and secure. Talk about that. >> Yeah, absolutely, and it's interesting, it's been, like I said, this journey in this arc towards distributed systems and truly, like, delivery of what people want in the cloud. It's been a long arc and it's been a long journey, and I think we're getting to the point where people are starting to kind of bake resilience and scale into their applications, and I think that's kind of this modern approach. Look, we're taking legacy databases today, people kind of lift and shift, move them into the cloud, try to run them there, but they just aren't built for that infrastructure. There's a fundamentally different approach and infrastructure when you talk about cloud. It's one of the reasons why, John, early on in your conversations with the AWS team and what they did, it's like, yeah, how do we give people resilient, ubiquitous, always-on, scalable kind of infrastructure. Well, that's great for those layers, but when you start to get into the software that's running on these things, it isn't lift and shift, and it's not even move and improve. You can't just take a legacy system and change one piece of it to make it take advantage of the scale and the resilience and the ubiquity of the cloud, because there are very, very explicit challenges. For us, it's about re-architect and rebuild. Let's tear the database down, and let's rethink it and build from the ground up to be cloud native. And I think the technologies that have done that, that have been built from scratch to be cloud native, are the ones, I believe, that three years from now, that's what we're going to be talking about. I mean, this comes back again to, like, the genesis of what we did, which is Google Cloud Spanner.
The Spanner white paper and what Google did: they didn't use an existing database, because they needed a transactional, relational database. They hired a bunch of really incredible engineers, right? They've got, like, Jeff Dean and Sanjay Ghemawat over there, designing and doing all these cool things, and they built it. I think that's what we're seeing, and that's, to me, the exciting part about data in the cloud as we move forward. >> Yeah, and I think with the Google cloud infrastructure, and I think that's the same mindset for Amazon, it's that I want all the scale, but I don't want to do it over 10 years, I want to do it now, which I love, I want to get back to that in a second. But I want to ask you specifically about this definition of containerization of the database. I've heard that kicked around, love the concept. I kind of understand what it means, but I want you to define it for us. What does it mean when someone says containerizing the database? >> Yeah, I mean, simply put, the database in a container, run it, and that's it. That's, like, maybe step one, and I think that's kind of lift and shift. Let's put it in a container and run it somewhere. And that's not that hard to do. I think I could do that. I mean, I haven't coded in a long time, but I think I could figure that out. It's when you start to actually have multiple instances of a container, right? That's where things get really, really tricky. Now we're talking about true distributed systems. We're talking about, how do you coordinate data? How do you balance data across multiple instances of a database, right? How do you actually have failover so that if one node goes down, a bunch of them are still available? How do you guarantee transactional consistency? You can't just have four instances of a database, all with the same information in it, John, without any sort of coordination, right?
Like, you hit one node and you hit another one at the same time, which transaction wins? And so there are these concepts in distributed systems, there's this thing called the CAP theorem: consistency, availability, and partition tolerance. And actually understanding how these things work, especially for data in distributed systems, to make sure that it's going to be consistent and available and it's going to scale, those things are not simple to solve. And again, it comes back to this: I don't think you can do it with a legacy database. You kind of have to re-architect, and it comes down to where data is stored, it comes down to how it's replicated, it comes down to, really, ultimately, where it's physically located. I think when you deploy a database, you think about the logical model, right? You think about tables, and normalization, and referential integrity. The physical location is extremely important as we move to containerized and distributed systems, especially around data. >> Well, you guys are here at DockerCon 2021, Cockroach Labs, good success, love the architectural flexibility that you guys offer. And again, bringing that scale, like you mentioned, is an awesome value proposition, especially if people want to just program the infrastructure. What's going on with DockerCon specifically? A lot of talk about developer productivity, a lot of talk about collaboration and trust with containers, big story around security. What's your angle here at DockerCon this year? What's the big reveal? What's the discussion? What's the top conversation? >> Yeah, I mean, look at where we are: a containerized database, and we are an incredibly great choice for developers. For us, look, there are certain developer communities that are important on this planet, John, and this is one of them, right? I don't know a developer who doesn't have that little whale up in their status bar, right?
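The coordination question Jim raises, several copies of the data and which write wins, is the territory of quorum systems. Here is a toy majority-quorum sketch, not CockroachDB's actual Raft-based replication: with N replicas, acknowledging W copies per write and reading R copies guarantees the read set overlaps the latest write set whenever W + R > N, so a versioned read always sees the newest value.

```python
N = 3                          # replicas holding a copy of one key range
replicas = [{} for _ in range(N)]

def write(key, value, version, w=2):
    # A write is acknowledged once W of the N replicas accept it. For
    # simplicity this sketch always writes to the first W replicas; a real
    # system uses whichever W nodes respond.
    for replica in replicas[:w]:
        replica[key] = (version, value)

def read(key, r=2):
    # Read R replicas and keep the highest version seen. Because W + R > N,
    # the read set must overlap the write set of the latest write, so the
    # newest version is always among the replies.
    seen = [rep[key] for rep in replicas[:r] if key in rep]
    return max(seen)[1] if seen else None

write("account:42", 100, version=1)
write("account:42", 250, version=2)
print(read("account:42"))  # 250: the read quorum overlaps the write quorum
```

This is only the consistency half of the story; the CAP discussion above is about what you give up (availability or consistency) when a partition prevents a quorum from forming at all.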
And for us, you know me, man, I believe in this tech, and I believe that this is something that's going to drive and greatly simplify our lives over the next two to three to 10 to 15 years. And for us, it's about awareness. And I think once people see Cockroach, they're like, oh my God, how did I ever even think differently? And so for us, it's kind of moving in that direction. But ultimately, our vision, where we want to be, is we want to abstract the database to a SQL API in the cloud. We want to make it so simple that I just have this REST interface, there's endpoints all over the planet, and as a developer, I never have to worry about scale. I never have to worry about DR, right? It's always going to be on. And most importantly, I don't have to worry about low latency access to data no matter where I'm at on the planet, right? I can give every user this kind of sub-50 millisecond access to data, or sub-20 millisecond access to data. And that is the true delivery of the cloud, right? Like, I think that's what the developer wants out of the cloud. They want to code against a service, and it's got to be consumption-based and secure, and I don't want to have to pay for stuff I'm not using, all those things. And so, for us, that's what we're building to, and interacting in this environment is critical for us, because I think that's where the audience is. >> I want to get your thoughts on this: you guys do have success with a couple of different personas. Developers out there, groups, classic software developers, which is this show, DockerCon, full of developers. KubeCon, a lot of operators and some devs, but mostly cloud native operations. These are developer shops. So you guys have got to hit the developers, which really care about building fast and building to scale and last, with security. Architects you've had success with, which is the classic cloud architecture, which is now distributed computing, we get that.
But the third area I would call the kind of role that both the architects and the developers had to take on, which is being the DevOps person, or what then becomes the SRE in the group, right? So most startups have the DevOps team developers. They do DevOps natively and within every role, so they're the same people provisioning. But as you get larger, in an enterprise, the DevOps role, whether it's in a team or a group, takes on this SRE, site reliability engineer. This is a new dynamic that brings engineering and coding together. It's not so much an ops person, it's much more of an engineering developer. Why is that role so important? And we're seeing more of it in dev teams, right? Seeing an SRE person or a DevOps person inside teams, not a department. >> Yeah, look, John, I mean, we employ an army of SREs that manage and maintain our CockroachCloud, which is CockroachDB as a service, right? How do you deliver kind of a world-class experience for somebody to adopt a managed service, a database such as ours, right? And so for us, yeah, I mean, SREs are extremely important. So we have kind of a personal opinion on this, but more importantly, I think, look, if you look at Cockroach and the architecture of what we built, I think Kelsey Hightower at one point said, and I'm going to probably mess this up, but there was a tweet that he wrote, it's something like, CockroachDB is to Spanner as Kubernetes is to Borg. And if you think about that, I mean, that's exactly what this is, and we built a database that was actually amenable to the SRE, right? This is exactly what they want. They want it to scale up and down. They want it to just survive things. They want to be able to script this thing and basically script the world. That's how they want to manage and maintain. And so for us, I think our initial audience was definitely architects and operators, it's the KubeCon crowd, and they're like, wow, this is cool.
This is architected just like Kubernetes. In fact, take etcd, which is a key piece of Kubernetes; we contributed our Raft implementation back up to etcd. So there's a lot of the same tech here. What we've realized though, John, with databases is interesting. The architect is choosing a database sometimes, but more often than not, a developer is choosing that database. And it's like they go out, they find a database, they just start building, and that's what happens. So, for us, we made a very critical decision early on: this database is wire compatible with Postgres and it speaks the Postgres SQL syntax, which, if you look at some of the other solutions that are trying to do these things, those things are really difficult to bolt on at the end. So it was a critical decision to make sure that it's amenable, so that now we can build the ORMs and all the tools that people would use and expect of Postgres from a developer point of view, but also simplify and automate and give the right kind of platform that the SREs need as well. And so for us the last year and a half has really been about how do we actually build the right tooling for the developer crowd too. And we've pushed really far in that world as well. >> Talk about the aspect of scale for, say, a startup for instance, 'cause you made a great example, Borg to Kubernetes, 'cause Borg was Google's internal Kubernetes-like thing. So Google has Spanner, which everyone knows is a great product Google had. You guys are almost the commercial version of that for the world. I mean, some people will say, and I just want to challenge you on this and get your thoughts: I'm not Google, I'll never be Google, I don't need that scale. So how do you address that point? Because some people say, well, this might dismiss the notion of using it. How do you respond to that? >> Yeah, John, we get this all the time. Like, I'm not global. My application's not global. I don't need this.
I don't need a tank, right? I just need to walk down the road. You know what I mean? And so, the funny thing is, even if you're in a single region and you're building a simple application, does it need to be always on? Does it need to be available? Can it survive the failure of a server or a rack or an AZ? It doesn't have to survive the failure of a region, but I tell you what, if you're successful, you're going to want to start actually deploying this thing across multiple regions so you can survive a backhoe hitting a cable and the entire east coast going out, right? And so with Cockroach, it's real easy to do that. It's four little SQL commands and I have a database that's going to span all those regions, right? And I think that's important, but more importantly, think about scale. When a developer wants to scale, typically it's like, okay, I'm going to spin up Postgres and I'm going to keep increasing my instance size. So I'm going to scale vertically until I run out of room. And then I'm going to have to start sharding this database. And when you start doing that, it adds this kind of application complexity that nobody really wants to deal with. So forget it, just let the database deal with all that. So we find this thing extremely useful for the single developer and a very small application, but the beautiful thing is, if you want to go global, great, just keep adding nodes. When that application does take off and it's the next breakthrough thing, this database is going to grow with you. So it's good enough to start small, but it'll scale fast and go global if you want to. You have that option, I guess, right? >> I mean, why wouldn't you want optionality on this at all? So clearly a good point.
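To make the sharding point above concrete, here is a minimal sketch (the shard names and count are hypothetical, not from the interview) of the application-level routing logic teams typically end up writing once a single Postgres instance runs out of vertical headroom. This is exactly the complexity a horizontally scaling database absorbs for you:

```python
import hashlib

# Hypothetical shard identifiers; in a manually sharded setup the
# application must know every shard and route each key to the right one.
SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2"]

def shard_for(user_id: str) -> str:
    """Deterministically route a user ID to one shard via a hash."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

# Every query path now needs this routing step, and resharding (adding a
# fourth shard) silently changes the mapping for most keys.
```

The pain being described is that this logic leaks into every data access path, whereas "just keep adding nodes" pushes the rebalancing down into the database itself.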
Let me ask you a question. Take me through a use case where, with Cockroach, some scenario develops nicely, where you can point to the visibility of the use case for the developer and how it played out. And then compare and contrast that to a scenario that doesn't go well, where it plays out well in one example, and then where they didn't deploy it, they got hung up and it went sideways. >> Yeah, Cockroach was built for transactional workloads. That's what we are. We are optimized for the speed of light and consistent transactions. That's what we do, and we do it very well. At least I think so, right? But my favorite customer of ours is DoorDash, and about a year ago DoorDash came to us and said, look, we have a transactional database that can't handle the write volume that we're getting, and it falls over. And they had significant challenges, and if you think about DoorDash and DoorDash's business, they were looking at an IPO in the summer, and going through that, you can't have any issues. The system's got to be up and running, right? And so for them, it was like, we need something that's reliable. We need something that's not going to come down. We need something that's going to scale and handle burst and these sorts of things. And their business is big; their business is not just let me deliver food all the time. It's deliver anything, be that intermediary between a good and somebody's front door. That's what DoorDash wants to be. And for us, yeah, their transactions and that backend transactional system are built on Cockroach. And that was one year ago. They needed to get experience, and once they did, they started to see that this was very, very valuable in lots of different workloads they had. So anywhere there's any sort of transactional workload, be it metadata, be it any sort of inventory or transaction stuff that we see in companies, that's where people are coming to us.
And it's these traditional relational workloads that have been wrapped up in transactional relational databases that weren't built for the cloud. So I think what you're seeing is that's the other shoe to drop. We've seen this happen: you're watching Databricks, you're watching Snowflake kind of do this whole data cloud, and then the analytical side, John, that's been around for a long time, and there's that move to the cloud. The same thing that happened for OLAP has got to happen for OLTP. Where we don't do well is when somebody thinks that we're an analytics database. That's not what we're built for, right? We're optimized for transactions, and I think you're going to continue to see these two sides of the world, especially in cloud, especially because I think the way that our global systems are going to work, you don't want to do analytics across multiple regions, it doesn't make sense, right? And so that's why you're going to see this continued kind of two markets, OLAP and OLTP, going on, and we're squarely on that OLTP side of the world. >> Yeah, talking about the transaction processing side of it, when you start to change to a distributed architecture that goes from core to edge, core on-premises to edge, edge being intelligent edge, industrial edge, whatever, you're going to have more action happening. And you're seeing Kubernetes already kind of talking about this, and with the containers you've got, you've got kind of two dynamics. How does that change the nature of, and the level of, volume of transactions? >> Well, it's interesting, John. I mean, if you look at something like Kubernetes, it's still really difficult to do multi-region or multicloud Kubernetes, right? This is one of those things where, as you start to move Kubernetes to the edge, you're still kind of managing all these different things. And I think it's not the volumes, it's the operational nightmare of that. For us, that's federated at the data layer.
Like I could deploy Cockroach across multiple Kubernetes clusters today and you're going to have one single logical database running across those. In fact, you can deploy Cockroach today on top of three public cloud providers: I can have nodes in AWS, I could have nodes in GCP, I could have nodes running on VMs in my data center. Any one of those nodes can service requests, and it's going to look like a single logical database. Now that to me, when we talked about multicloud a year and a half ago or whatever that was, John, that's an actual multicloud application, and delivering data so that you don't have to actually deal with that in your application layer, right? You can do that down in the guts of the database itself. And so I think it's going to be interesting, the way that these things get consumed and the way that we think about where data lives and where our compute lives. I think that's part of what you're thinking about too. >> Yeah, so let me, well, I've got you here. One of the things on my mind I think people want to maybe get clarification on, real quick while you're here: take a minute to explain CockroachDB and CockroachCloud. They are different products, and you've brought them both up. What's the difference for the developers watching? What's the difference between the two, and when do I need to know the difference between the two? >> So to me, they're really one, because CockroachCloud is CockroachDB as a service. It's our offering that makes it a world-class, easy to consume experience of working with CockroachDB, where we take on all the hardware, we take on the SRE role, we make sure it's up and running, right? You get a connection string and code against it. And I think that side of our world is really all about this kind of highly evolved database and delivering that as a service, and what you're actually using is CockroachDB.
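As a rough sketch of that connection-string experience, and of the Postgres wire compatibility mentioned earlier, the only thing a developer's Postgres driver or ORM typically needs is a URL pointing at the service. The host, user, and database names below are hypothetical; 26257 is CockroachDB's conventional default SQL port:

```python
# Sketch: because CockroachDB speaks the Postgres wire protocol, an app can
# point any standard Postgres driver or ORM at it by swapping the DSN.
# All names here are made up for illustration.
def cockroach_dsn(host: str, port: int = 26257,
                  db: str = "defaultdb", user: str = "root") -> str:
    """Build a Postgres-style connection URL for a CockroachDB node."""
    return f"postgresql://{user}@{host}:{port}/{db}"

# A psycopg2- or SQLAlchemy-style client would consume this URL unchanged:
print(cockroach_dsn("localhost"))  # postgresql://root@localhost:26257/defaultdb
```

The design point is that the developer-facing surface (drivers, ORMs, SQL syntax) stays the familiar Postgres one, while the scaling behavior behind the URL changes.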
Where I think it just gets really interesting, John, is the next generation of what we're building, this serverless version of our database, where this is just an API in the cloud. We're going to have one instance of Cockroach with a multi-tenant database in there, and any developer can actually spin up on that. And to me, that gets to be a really interesting world when the world turns serverless, and we're running our compute in Lambda and we're doing all these great things, right? Or we're using Cloud Run in Google, right? But what's the corresponding database to actually deal with that? And that to me is a fundamentally different database, 'cause what is scale in the serverless world? It's autonomous, right? What's scale in the current Cockroach world? You kind of keep adding nodes to it, you manage it, you deal with that, right? What does resilience mean in a serverless world? It's just, yeah, it's there all the time. What's important is latency when you get to serverless, like where are these things deployed? And I think to me, the interesting part of the two sides of our world is what we're doing with serverless and how we actually expose the core value of CockroachDB in that way. >> Yeah, and I think that's one of the things that is the nirvana or the holy grail of infrastructure as code: making it, I won't say irrelevant, but invisible. If you're really dealing with a database, hey, I'm just scaling and coding, and the database stuff is just working with the compute, whatever. That's serverless, and you mentioned Lambda, that's the action, because you don't want to be naming files and deciding what the database is. Just having it happen is more productivity for the developers, which kind of circles back to the whole productivity message for the developers. So I totally get that. I think that's a great vision. The question I have for you, Jim, is the big story here is developer simplicity.
How are you guys making it easier to just deploy? >> John, this is just an extension of the last part of the conversation. I don't want a developer to ever have to worry about a database. That's what Spencer and Peter and Ben have in their vision. It's how do I make the database so simple? It's simple: it's a SQL API in the cloud. It's a REST interface, I code against it, I run queries against it, I never have to worry about scaling the thing. I never have to worry about creating active-passive, and primary and secondary. All the DevOps side of it, all this operations stuff, it's just kind of done in the background, dude. And we can build it, and it's actually there now, where we have it in beta. What's the role of the cost-based optimizer in this new world that we've had in databases? How are you actually ensuring data is located close to users? We're automating that so that when John's in Australia doing a show, his data is going to follow him there, so he has fast access to it, right? And that's the kind of stuff we're talking about, the next generation of infrastructure, John. We're not building for today. Cockroach Labs is not building for 2021. Sure, we have something that's great, but we're building something for '22 and '23 and '24, right? Like, what do we need to be an extremely productive set of engineers? And that's what we think about all day. How do we make data easy for the developer? >> Well, Jim, great to have you on, VP of Product Marketing at Cockroach Labs. We've known each other for a long time. I've got to ask you, while I have you here, a final question: you and I have chatted about the many waves in open source and in the computer industry. What's your take on where we are now? And I see you're looking at it from the Cockroach Labs perspective, which is large scale distributed computing. You're kind of on the new side of history, the right side of history, cloud native. Where are we right now?
Compare and contrast for the folks watching who are trying to understand the importance of where we are in the industry. Where are we, and what's your take? >> Yeah, John, I feel fortunate to be in a company such as this one, and the past couple that I've been around, and I feel like we are in the middle of a transformation. And it's just like the early days of this next generation. And I think we're seeing it in a lot of ways in infrastructure, for sure, but we're starting to see it creep up into the application layer. And for me, it is so incredibly exciting to see. Remember when cloud was this thing that people were like, oh boy, maybe I'll do it? Now it's like anything net new is going to be on cloud, right? We don't even think twice about it, and the coming wave of cloud native and these technologies that are coming are going to be really interesting. I think the other piece that's really interesting, John, is the changing role of open source in this whole game, because I think of open source as code, consumption, and community, right? I think about those, and then there's licensing, of course; a lot of people get wrapped up in the licensing. Consumption has changed, John. Back when we were all talking Hadoop, consumption was like, oh, it's free, I get this thing, I could just download it and use it. Well, consumption over the past three years, everybody wants everything as a service, and people are ready to pay. For us, it's how do we bring free back to the as-a-service world? And that's what we're doing. That's what I find so incredibly exciting, going through this kind of bringing back free beer to open source. I think that's going to be great, 'cause if I can give you a database free up to five gig or 10 gig, man, and it's available all over the planet and fully featured, that's coming. That's bringing our community and our code, which is all open source, and this consumption model back.
And I'm super excited about that. >> Yeah, free beer. Who doesn't like free beer? Of course, developers love free beer, and a great t-shirt too, one that's soft. Make sure you get that, get the soft one. >> You just don't want a free puppy, you know what I mean? It's just like, yeah, that sounds painful. >> Well, Jim, great to see you remotely. Can't wait to see you in person at the next event. And we've got the fall window coming up. We'll see some events. I think KubeCon in LA is going to be in person, and re:Invent for sure we'll be in person. I know that for a fact, we'll be there. So we'll see you in person, and congratulations on the work at Cockroach Labs. >> Thanks, John, great to see you again. >> All right, this is theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host of theCUBE. Thanks for watching.
Mike Cohen, Splunk | Leading with Observability
(upbeat music playing) >> Narrator: From theCUBE's studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello, everyone, welcome to this CUBE conversation. I'm John Furrier, host of theCUBE. We're doing a content series called Leading with Observability, and this segment is network observability for distributed services. And we have CUBE alumni Mike Cohen, head of product management for network monitoring at Splunk. Mike, great to see you. It's been a while, going back to the OpenStack days, Red Hat Summit. Now here talking about observability with Splunk. Great to see you. >> Thanks a lot for having me. >> So right now observability is at the center of all the conversations, from monitoring to investing in infrastructure, on premises, cloud, and also cybersecurity. A lot of conversations, a lot of broad reaching implications for observability. You're the head of product management for network observability at Splunk. This is where the conversation's going, getting down at the network layer, getting down into it as the packets move around. This is becoming important. Why is this the trend? What's the situation? >> Yeah, so we're seeing a couple of different trends that are really driving how people think about observability, right? One of them is this huge migration towards public cloud architecture, where you're running on an infrastructure that you don't own yourself. The other one is around how people are rebuilding and refactoring applications around service-based architectures, scale-out models, cloud native paradigms. And both of these things are really introducing a lot of new complexity into the applications and really increasing the surface area of where problems can occur. And what this means is you actually have gaps in visibility, or places where you have a separate tool, you know, analyzing parts of your system.
It really makes it very hard to debug when things go wrong and to figure out where problems occur. And really what we've seen is that people need an integrated solution to observability, one that can really span from what your user is seeing all the way to the deepest backend services. Where are the problems in some of the core of the infrastructure that you're operating? So that you can really figure out where problems occur. And really, network observability is playing a critical role in kind of filling in one of those critical gaps. >> You know, you think about the past decade we've been on this wave. It feels like now more than ever it's an inflection point, because of how awesome cloud native has become from a value standpoint. Value creation, time to market, all those things are why people are investing in modern applications. But then as you build out your architecture and your infrastructure to make that happen, there's more things happening. Everything as a service creates new dependencies, new things to document. This is an opportunity on one hand; on the other hand, it's a technical challenge. So, you know, balancing out technical debt while deploying new stuff, you've got to monitor it all, right? Monitoring has turned into observability, which is just code word for cloud scale monitoring, I guess. I mean, is that how you see it? How do you talk about this? Because it's certainly a major shift happening right now, and this transition is pretty obvious. >> Yeah, no, absolutely. And we've seen a lot of new interest in the network visibility, network monitoring space. And really, again, the driver of that is that network infrastructure is actually becoming increasingly opaque as you move towards, you know, public cloud environments. And it's been sort of a fun thing to blame the network.
And say, look, oh, it's the network, we don't know what's going on. But you know, it's not always the network. Sometimes it is, sometimes it isn't. You actually need to understand where these problems are really occurring to have the right level of visibility in your systems. But the other way we've started talking to people thinking about this is: the network is an empowering capability, an untapped resource, where you can actually get new data about your distributed systems. You know, SREs are struggling to understand these complex environments, but with the capabilities we've seen and started taking advantage of, things like eBPF and monitoring from the OS, we can actually get visibility into how processes and containers communicate, and that can give us insights into our system. It's a new source of data that has not existed in the past, that is now available to help us with the broader observability problem. >> You mentioned SRE, site reliability engineers, as it's known. Google kind of pioneered this. It's become kind of a standard persona in large scale infrastructure, cloud environments, and whatnot, like massive scale. Are you seeing SREs, that role, become more mainstream in enterprises? I mean, 'cause some enterprises might not call them an SRE, they might call them a cloud architect. Can you just help tie that together, 'cause it is certainly happening. Is it proliferating? >> For sure, absolutely. Yeah. No, absolutely, I think SREs, you know, the title may vary across organizations, as you point out. And sometimes the exact layout of, you know, the organizational breakdown varies. But this role of someone who really cares about keeping the system up, you know, caring for it and scaling it out and thinking about its architecture, is now a really critical role. And sometimes that role sits alongside developers who are writing the code.
And this is really happening in almost every organization that we're dealing with today. It is becoming a mainstream occurrence. >> Yeah, it's interesting. I'm going to ask you a question about what businesses are missing when they think about observability, but since you brought up that piece: it's almost as if Kubernetes created this kind of demarcation line between the top half of the stack and the bottom half of the stack, where you can do a lot of engineering in the bottom of the stack up to, say, Kubernetes, and then above that you could just be an infrastructure as code application developer. So it's almost kind of leveled out, with nice lanes there. I mean, I'm oversimplifying it, but how do you react to that? Do you see that evolving too? Because it all seems cleaner now. It's like you're engineering below Kubernetes or above it. >> Oh, absolutely. It's definitely one of the places you see some of the deepest engagement. As folks go towards Kubernetes, they start embracing containers, they start building microservices, you'll see development teams really accelerate the pace of innovation that they have, you know, in their environment. And that's really, you know, kind of the driver behind this. So we do see that sort of rebuilding and refactoring as some of the biggest drivers behind these initiatives. >> What are businesses missing around observability? 'Cause it seems to be, first of all, a very overfunded segment, a lot of new startups coming in, a lot of security vendors over here, and you're seeing network folks moving in. It's almost becoming a fabric, a feature piece of things. What does that mean to businesses? What are businesses missing or getting? How are people evaluating observability? How do you see that? >> Yeah, so, for sure, I'll talk.
I'll start by talking generically about it, but then I'll talk a little bit about the network area specifically, right? I think one of the things people are realizing they need in observability is this approach as an integrated suite. Having a disparate set of tools can make it very hard for SREs to actually take advantage of all those tools, to use the data within them to solve meaningful problems. And I think what we're seeing, as we've been talking to more people in the industry, is they really want something that can bring all that data together and build it into an insight that can help them solve a problem more quickly. Right, so I think that's the broader context of what's going on. And I think that's driving some of the work we're doing on the network side, because the network is a powerful new data set that we can combine with other aspects of what people have already been doing in observability. >> What do you think about programmability? That's been a big topic. When you start to get into that kind of mindset, you're almost making the software-defined aspect come in here heavily. How does that play in? What's your vision around, you know, making the network adaptable, programmable, measurable, fully surveilled? >> Yeah, yeah. So again, what we're focused on is the capabilities you can have in using the network as a means of visibility and observability for your systems. Networks are becoming highly flexible. A lot of people, once they get into a cloud environment, have a very rich set of networking capabilities. But what they want to be able to do is use that as a way of getting visibility into the system. So I can talk for a minute or two about some of the capabilities we're exposing in network observability. One of them is just being able to visualize and optimize a service architecture.
So really seeing what's connecting to what automatically. We've been using a technology called eBPF, the extended Berkeley Packet Filter. It's part of everyone's Linux operating system, right? If you're running Linux, you basically have this already. And it gives you an interesting touch point to observe the behavior of every process and container automatically, where you can actually see, with very little overhead, what they're doing, and correlate that with data from systems like Kubernetes to understand how distributed systems behave. To see how things connect to other things. We can use this to build a complete service map of the system in seconds, automatically, without developers having to do any additional work, without forcing anyone to change their code. They can get visibility across an entire system automatically. >> That's like the original value proposition of Splunk when it came out. It was just a great tool for search and the data from logs. Now, as data becomes more complex, you're still instrumenting, and those are critical services. And they're now microservices, the trends at the top of the stack and at the network layer. The network layer has always been a hard nut to crack. I've got to ask you, why now? You mentioned earlier that everyone used to blame the network. Oh, it's not my problem. You really can't finger point when you start getting into full instrumentation of the traffic patterns and the underlying processes. So it seems to be good magic going on here. What's the core issue? What's going on here? Why is it now? >> Mike: Yeah. >> Why is the time now? >> Yeah, well, so unreliable networks, slow networks, DNS problems, these have always been present in systems. The problem is they're actually becoming exacerbated because people have less visibility into them.
But also, as you have these distributed systems, the failure modes are getting more complex. So some of the longest, most challenging troubleshooting problems are these network issues, which tend to be transient, which tend to bounce around the systems. They tend to cause other unrelated alerts to happen inside your application stack, with multiple teams troubleshooting the wrong problems, problems that don't really exist. So the network has actually caused some of the most painful outages that teams see. And when these outages happen, what you really need to be able to know is: is it truly a network problem, or is it something in another part of my system? If I'm running a distributed service, which services are affected? Because that's the language my team now thinks in. As you mentioned, they're in Kubernetes. They're trying to think, which Kubernetes services are actually going to be affected by a potential network outage that I'm worried about? The other aspect is figuring out the scope of the impact. Are there a couple of instances in my cloud provider that aren't doing well? Is an entire availability zone having problems? Is there a region of the world that's an issue? Understanding the scope of the problem will actually help me as an SRE decide what the right mitigation is. And, you know, by limiting it as much as possible, it can actually help me better hit my SLA, because I won't have to hit something with a huge hammer when a really small one might solve the problem. >> Yeah, this is one of the things that comes up. Almost just hearing you talk, I'm seeing how it could be complex for the customer just documenting the dependencies. I mean, as services come online, some of them are going to be very dynamic, not just at the network but at the application level. We mentioned Kubernetes, and you've got service meshes and microservices.
You're going to start to see the need to be tracking all this stuff, and that's a big part of what's going on with your suite right now, the ability to help there. How are you guys helping people do that? >> Yeah, absolutely. So, you know, just understanding dependencies is one of the key aspects of these distributed systems. This began as a simple problem: you have a monolithic application, it kind of runs on one machine, you understand its behavior. Once you start moving towards microservices, it's very easy for that to change from, look, we have a handful of microservices, to we have hundreds, to we have thousands, and they can be running across thousands or tens of thousands of machines as you get bigger. Understanding that environment can become a major challenge, and teams will end up with a handwritten diagram that has the behavior of their services broken out. Or they'll find out that there's an interaction that they didn't expect to be happening, and that may be the source of an issue. So one of the capabilities we have, using network monitoring out of the operating system with eBPF, is that we can actually automatically discover every connection that's made. If you're able to watch the sockets that are created in Linux, you can actually see how containers interact with each other. Then you can use that to build automatic service dependency diagrams. So without the user having to change their code, to change anything about their system, you can automatically discover those dependencies, and you'll find things you didn't expect. You'll find things that change over time, that weren't well documented. And these are the critical levels of understanding you need to get to in these environments. >> Yeah. You know, it's interesting you mentioned that you might've missed them in the past.
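The "you'll find things you didn't expect" point is essentially a set difference between the dependency map observed on the wire and the team's documented diagram. A hedged sketch, with invented service names:

```python
# The team's hand-drawn diagram vs. what socket observation actually saw.
documented = {("frontend", "checkout"), ("checkout", "payments")}
observed = {
    ("frontend", "checkout"),
    ("checkout", "payments"),
    ("checkout", "legacy-ledger"),   # nobody documented this one
}

def unexpected_edges(observed, documented):
    """Connections seen on the wire that the diagram never mentioned."""
    return sorted(observed - documented)

def stale_edges(observed, documented):
    """Documented dependencies never actually observed: cleanup candidates."""
    return sorted(documented - observed)

print(unexpected_edges(observed, documented))
# [('checkout', 'legacy-ledger')]
```

Running the same diff against yesterday's observed map instead of the documented one surfaces the other case mentioned above: dependencies that changed over time.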
People have missed that kind of thing at the network level, either because they weren't tracking it well or they used a different network tool. I mean, just packet loss by itself is one thing; service and host health is another. And if you can track everything, then you've got to build on it. So I love this direction. My question really is more of: okay, how do you operationalize it? Okay, I'm an operator. Am I getting alerts? Does it just auto-discover? How does this all work from a usability standpoint? How do I? >> Yeah. >> What are the key features, what gets unlocked from that kind of instrumentation? >> Yeah, well again, when you do this instrumentation correctly, it can be automatic, right? You can actually put an agent on your instances, collecting data based on the traffic and the interactions that occur, without you having to take any action. That's really the holy grail, and that's where some of the best value of these systems emerges: it just works out of the box. And then it'll pull data from other systems, like your cloud provider and your Kubernetes environment, and use that to build a picture of what's going on. And that's really where these systems get super valuable: they just work, without you having to do a ton of work behind the scenes. >> So Mike, I got to ask you a final question. Explain the distributed services aspect of observability. What should people walk away with from a main-concept standpoint, and how does it apply to their environment? What should they be thinking about? What is it, and what's the real story there? >> Yeah, so I think the way we're thinking about this is: how can you turn the network from a liability into a strength in these distributed environments, right? By observing data at the network level, out of the operating system.
You can actually use it to automatically construct service maps, to learn about your system, and to improve the insight and understanding you have of your complex systems. You can identify network problems that are occurring. You can understand how you're utilizing aspects of the network, and it can drive things like cost optimization in your environment. So you can actually get better insights, troubleshoot problems better, and handle the blame game: is the network really the problem that I'm seeing, or is it occurring somewhere else in my application? And that's really critical in these complex distributed environments. And critically, you can do it in a way that doesn't actually add overhead to your development team. You don't have to change the code. You don't have to take on a complex engineering task. You can actually deploy agents that will be able to collect this data automatically. >> Awesome, take that complexity away, automate it, help people get the job done. Great, great stuff. Mike, thanks for coming on theCUBE. Leading with observability, I'm John Furrier with theCUBE. Thanks for watching. >> Mike: Yeah, thanks a lot. (gentle music playing)
Robyn Bergeron and Matt Jones, Red Hat | AnsibleFest 2020
>> Announcer: From around the globe, it's theCUBE! With digital coverage of AnsibleFest 2020. Brought to you by Red Hat. >> Hello, everyone. Welcome back to theCUBE's coverage of AnsibleFest 2020. I'm your host with theCUBE John Furrier. And we've got two great guests. A CUBE alumni, Robyn Bergeron, senior manager, Ansible community team. Welcome back, she's with Ansible and Red Hat. Good to see you. And Matt Jones, chief architect for the Ansible Automation Platform. Again, both with Red Hat, Ansible was acquired by Red Hat. Robyn used to work for Red Hat, then went to Ansible. Ansible got bought by Red Hat. Robyn, great to see you, Matt, great to see you. >> Yep, thanks for having me back again. It's good to see you. >> We're not in person. It's the virtual event. Thanks for coming on remotely to our CUBE virtual, really appreciate it. I want to talk about the, and I brought that Red Hat kind of journey Robyn. We talked about it last year, but it really is an important point. The roots of Ansible and kind of where it's come from and what it's turned into and where it is today, is an interesting journey because the mission is still the same. I would like to get your perspectives because you know, Red Hat was acquired by IBM, Ansible's under Red Hat, all part of one big happy family. A lot's going on around the platform, Matt, you're the chief architect, Robyn you're on the community team. Collections, collections, collections, is the message, content, content, content, community, a lot going on. So take a minute, both of you explain the Ansible roots, where it is today, and the mission. >> Right, so beginning of Ansible was really, there was a small team of folks and they'd actually been through an iteration before that didn't use SSH called Funk, but you know, it was, let's make a piece of software that is open source that allows people to automate other things. 
And we knew at the time that, you know, based on a piece of research that we had seen out of Harvard that having a piece of software be architected in a modular fashion wasn't just great for the software, but it was also great for developing pathways and connections for the community to actually contribute stuff. If you have a car, this is always my analogy. If you have a car, you don't have to know how the engine works in order to swap out the windshield wipers or embed new windshield wipers, things like that. The nice thing about modular architectures is that it doesn't just mean that things can plug in. It means you can actually separate them into different spots to enable them to be plugged in. And that's sort of where we are today with collections, right? We've always had this sense of modules, but everything except for a couple of points in time, all of the modules, the ways that you connect Ansible to the vast array of technologies that you can use it with. All of those have always been in the full Ansible repository. Now we've separated out most of, you know, nearly everything that is not absolutely essential to having in a, you know, a very minimal Ansible installation, broken them out into separate repositories, that are usually grouped by function, right? So there's probably like a VMware something and a cloud something, and a IBM, z/OS something, things like that, right? Each in their own individual groups. So now, not only can contributors find what they want to contribute to in much smaller spots that are not a sea of 5,000 plus folks doing work. But now you can also choose to use your Ansible collections, update them, run them independently of just the singular release of Ansible, where you got everything, all the batteries included in one spot. >> Matt, this brings up the point about she's bringing in more advanced functionality, she's talking about collections. 
This has been kind of the Ansible formula from the beginning, in its startup days: ease of use, easy, fast automation. Talk about it, you know, back in 2013 it was a startup; now it's part of Red Hat, and the game is still the same. Can you just share the current guiding principles around Ansible this year? Because there's a lot going on; like I said, faster, bigger. Share your perspective. You've been there. >> Yeah, you know, what we're working on now is we're taking this great tool that has changed the way automation works for a lot of people, and we want to make it faster and bigger and better. We want it to scale better. We want it to automate more and be easier to automate with, to automate all the things that people want to do. And so we're really focusing on that scalability and flexibility. Robyn talked about content and collections, right? And what we want to enable is for people to bring the content, the collections, the roles, the modules, and use them in the way that they feel works best for them, leaving aside some of the things that maybe they aren't quite as interested in, and put it together in a way that scales for them, and scales for global automation, automation everywhere. >> Yeah, I want to dig into the collections later, Robyn, for sure. And Matt, let's put that on pause for a minute. I want to get into the event, the virtual event. Obviously we're not face to face; this year's virtual. You guys are both keynoting. If you can each give 60 seconds, kind of a rundown of your keynote talk, give us the quick summary this year on the keynotes. Matt, we'll start with you. >> Yeah. That's, 60 seconds is-
So this year, and I mentioned the focus on scalability and flexibility, we on the product and on the platform, on the Ansible Automation Platform, the goal here is to bring content and flexibility of that content into the platform for you. We focused a lot on how you execute, how you run automation, how you manage your automation, and so bringing that content management automation into the system for you. It's really important to us. But what we're also noticing is that we, people are managing automation at a much larger scale. So we are updating the Ansible Tower, Ansible AWX, the automation platform, we're updating it to be more flexible in how it runs content, and where it can run content. We're making it so that execution of automation doesn't just have to happen in your data center, in one data center, we recognize that automation occurs globally, and we want to expand that automation execution capability to be able to run globally and all report back into your central business. We're also expanding over the next six months, a year, how well Ansible integrates with OpenShift and Kubernetes. This is a huge focus for us. We want that experience for automation to feel the same, whether you're automating at the edge, in devices and virtual machines and data centers, as well as clusters and Kubernetes clusters anywhere in the world. >> That's awesome. That's why I brought that up earlier. I wanted to get that out there because it's worth calling out that the Ansible mission from the beginning was similar scope, easy to do and simplify, but now it's larger scale. Again, it's everywhere, harder to do, hence complexity being extracted away. So thank you for sharing. We'll dig into that in a second. Okay, Robyn, 60 seconds or more, if you need it, your keynote this year at AnsibleFest, give us the quick rundown. >> All right. 
Well, I think we probably know at this point, one of the main themes this year is called "automate to connect," and, you know, the purpose of the community keynote is really to highlight the achievements of the community. So we are talking about, well, we are talking about collections, going through some of the very broad highlights of that, and also how that is included as part of the recent release of Ansible 2.10, which was really the first release where we've made it easy for people to actually start using collections and getting familiar with what that brings them. A good portion of the keynote is also just about innovation, right? Like how we do things in open source, and why we do things in certain ways in open source to accelerate us. And how that compares with the traditional Red Hat product model, which is: we do a lot of innovation upstream, and we move quickly, so that if something is maybe not the right idea, we can move on. And then our products are the thing that we give to our customers that is tried, tested and true, all of that kind of jazz. I also talk about all of the initiatives that we're doing around diversity and inclusiveness, including some of the code changes that we've made for better, more inclusive language in our projects and our downstream products, and our diversity and inclusion working group that we have in community land, which is, you know, just looking to embrace more and more people. It's a lot about connectivity, right? To one of Matt's points about all the things that we're trying to achieve and how they're similar to the original principles: the third one was always that it needs to be easy to contribute to. And that doesn't necessarily just mean in our community, right?
Like we see in all of these workplaces, which is one of the reasons why we brought in Automation Hub: folks inside large organizations, companies, government, whatever it is, are using Ansible, and there are more and more of them. You know, there's one person, they tell their friend, they tell another friend, and the next thing you know, it's the whole department. And then you find people in other departments, and then you've got a ton of people doing stuff. And we all know that you can do a bunch of stuff by yourself, but you can accomplish a lot more together. And so making it easy to contribute inside your organization is not much different than being able to contribute inside the community. So this is just a further recognition, I think, of what we see as a natural extension of open source. >> I think the community angle is super important, 'cause you have the community in terms of people contributing, but you also have multiple vendors now, multiple clouds, multiple integrations; the stakeholders of collaboration have increased. It used to be, "Oh, here's the upstream," et cetera, "we're done," and have meetings and do all that stuff. And Matt, that brings me to my next question. Can you talk about some of the recent releases that have changed the content experience for Ansible users, in the upstream and within the automation platform? >> Well, so last year we released collections, and we've really been moving towards that over the 2.9, 2.10 timeframe. And now I think you're starting to see sort of the realization of that, right? This year we've released Automation Hub on cloud.redhat.com, so that we can concentrate the vendor and partner content that Red Hat supports and certifies. In AnsibleFest you'll hear us talk about Private Automation Hub.
This is bringing that content experience to the customer, to the user of this content, sort of helping you curate and manage that content yourself. Like Robyn said, we want to build communities around the content that you've developed. The whole reason that we've done this with collections is that we don't want to bind content to Ansible core releases. We don't want to block content releases, all of this great functionality that the community is building. This is what collections mean. You should be free to use the collections that you want, when you want, regardless of when Ansible core itself has released. >> Can you just take a minute real quick and explain what collections are, for folks out there? 'Cause that's the big theme here: collections, collections, collections. That's what I'm hearing resonate throughout the virtual hallways, if you will, Twitter and beyond. >> That's a good question. What is a collection itself? So we've talked a lot in the past about reusable content for Ansible. We talk a lot about roles and modules, and we sort of put those off to the side a little bit and say, "These are your reusable components." You can put 'em anywhere you want. You can put 'em in source control, distribute them through email, it doesn't matter. And then your playbooks, that's what you write, and that's your sort of blessed content. Collections are really about taking the modules and roles and plugins, the things that make automation possible, and bundling those up together in groups of content, groups of modules and roles, or standing by themselves, so that you can decide how that's distributed and how you consume it, right? Like you might have the Azure, VMware or Red Hat Satellite collection that you're using, and you're happy with that, but you want a new version of Ansible. You're not bound to updating both at the same time. You can stick with the content that matters to you, the roles, the modules, the plugins that work for you.
And you decide when to update those, and you know what the actual modules and plugins you're using are. >> So I got to ask the content question. You know, I'm a content producer; we do videos as content, blog posts as content. When you talk about content, it's code. Clarify that role for us, because you're enabling developers with content and helping them find experts. This is a concept. Robyn, talk about this. And Matt, you can weigh in, too. Define what content means, because it means different things. (indistinct) again, content could be. >> It is one of those words, it's right up there with "developers," you know, so many different things that it can mean, especially- >> Explain content and the importance of the semantics of that. It's important that people understand the semantics of the word "content" with respect to what's going on with Ansible. >> Yeah, and Matt and I actually had a conversation about the murkiness of this word, I believe that was yesterday. So when I think about our content, you know, my first job was a sysadmin, so I try to put myself in the mind of someone who might be using this content that I'm about to attempt to explain. Like Matt just explained, we've always had these modules, which were included in Ansible, pieces of code that do very basic things, right? With one of the AWS modules, I am able to do things like, "I would like to create a new user." So you might make a role that actually describes the steps, in Ansible, that you would take to create a new user who is able to access AWS services at your company. There may be a number of administrators who want to use that piece of code over and over and over again, because hopefully most companies are getting bigger and not smaller, right? They want to have more people accessing all sorts of pieces of technology.
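The create-a-user example described here might look something like the sketch below: a collection pinned in a requirements file, and a small playbook calling its module by fully qualified collection name. The collection and module shown (`community.aws.iam_user`) are real, but the version pin, file names, and variable names are illustrative assumptions, not a recommendation.

```yaml
# requirements.yml -- pin collections independently of the core release
collections:
  - name: community.aws
    version: ">=1.0.0"        # illustrative pin
# installed with: ansible-galaxy collection install -r requirements.yml
---
# create_aws_user.yml -- a reusable chunk of automation that any admin
# on the team can run, instead of rebuilding it in their own silo
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the new team member has an IAM user
      community.aws.iam_user:
        name: "{{ new_user }}"
        state: present
```

Because the fully qualified name says exactly which collection a module comes from, the playbook keeps working the same way no matter when ansible-core itself is upgraded.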
So making some of these chunks accessible to lots of folks is really important, right? Because what good is automation if, sure, we've taken care of half of it, but you still have to come up with your own bits of code from scratch every time you want to invoke it? You're still not really leveraging the full power of collaboration. So when we talk about content, to me it really is things that are constantly reusable, that are accessible, that you tie together with modules that you're getting from collections. And I think it's that bundle. You can keep those bits of reusable content in the collections or keep them separate, but, you know, it's stuff that is baked for you, or that maybe somebody inside your organization bakes, but they only have to bake it once. They don't have to bake it in 25 silos over and over and over again. >> Matt, the reason why we're talking about this is interesting, 'cause, you know, what this points out, in my opinion, is that you guys were on the cutting edge of new paradigms, which is content. It's essentially code, but it's addressable, it's being shared in a community. Someone wrote the code, and it's a whole 'nother level of thinking. This is platform automation, I get it. So give us your thoughts, because this is a critical component: the origination of the content, the code. I mean, I love it. I've always said content should be code. It's all data, but this is interesting. This is a cutting-edge concept. Could you explain what it means from your perspective? >> This is about building communities around that content, right?
Like it's that sharing that didn't exist before. Like Robyn mentioned, you shouldn't have to build the same thing a dozen times or 100 times; you should be able to leverage the capabilities of experts, of people who understand that section of automation the best. Like, I might be an expert in one field, and Robyn's an expert in another field, and we're automating in the same space. We should be able to bring our own expertise and resources together. And so this is what that content is: I'm an expert in one thing, you're an expert in another, so let's bring them together as part of our automation community and share them, so that we can use them, iterate on them, build on them, and just constantly make them better. >> And the concepts are consumption, there's consumption of the content, there's the collaboration of the content, there's the sharing, all this, and there's reputation, there's expertise. I mean, it's a multi-sided marketplace here, isn't it? >> Yeah. I read an article a year or two ago that said we've always evolved in the technology industry around access: first it was the mainframes, then it was personal computers, the cloud, now it's containers, all of this. But once everybody buys that mainframe, or once everybody levels up their skills to whatever the next thing is that you can just buy, there's not much left that can actually help you differentiate from your competitors, other than your ability to actually leverage all of those tools. And if you can actually have better collaboration than other folks, then that is one of those points that will actually get you ahead on your digital transformation curve. >> I've been harping on this for a while. I think that cloud native has finally gone, and when I say "mainstream," I mean it's on everyone's mind. You look at the container uptake; we had IDC on, and five to 10% of enterprises are containerizing.
That's a huge growth opportunity. The IPO of, say, Snowflake, it's on Amazon. I mean, how does this happen? That's a company that went public in the biggest software IPO in the history of Wall Street, and it's built on Amazon; it has its own cloud. So, I mean, this points to the new value that's being created on top of these new cloud native architectures. So I really think you guys are onto something big here, and I think you're starting to see new notions of how things are being rethought and reimagined. So let's keep going while I've got you guys here real quick: the Ansible 2.1 community release. Tell us more about the updates there. >> Oh, 2.10, because, yeah. Oh, that's fine. I know, I too have been like, "Why do we do that?" But it's semantic versioning, so I am more accustomed to this now; it's a slightly different world from when I worked on Fedora. You know, I think the big highlight there is really collections. I mean, it's collections, collections, collections. That is all the work that we did, under the hood and over the hood, and really how we went from being all in one repo to breaking things out. It's a big milestone for us; we're advancing both the tool and also the community's ability to actually collaborate together. And, you know, as folks start to actually use it, it's a big change for them, potentially, in how they can work together in their organizations using Ansible. One of the big things we did focus on was ensuring that the ease of use, that their experience, did not change. So if they have existing Ansible stuff that they're running, playbooks, modules, roles, et cetera, they should be able to use 2.10 and not see any discernible change. That's all under the hood. That was a lot of surgery, wasn't it, Matt? Serious amounts of work. >> So Matt, 2.10, does that impact the release piece of it for the developers and the customers out there? What does it change? >> It's a good point.
Like, at least for the longer term, this means that we can focus on the Ansible core experience. And this is the part that we didn't touch on much before: now, with the collections pieces broken out, when we're fixing bugs, when we're iterating and making Ansible better as an engine of automation, we can do that without negatively impacting the automation that people actually use. We can focus on the core experience of automating itself. >> Execution environments, let's talk about that. What are they? Are they being used in the community today? How do you guys react to that? >> We're actually sort of in the middle of building this right now. Like, one of the things that we've struggled with is that when you need to automate, you need this content that we've talked about. But beyond that, you have the system that sits underneath: the version of Linux, the kernel that you're using; going even further, you need Python dependencies, you need library dependencies. These are hard and complicated things. In the Ansible Tower space, we have virtual environments, which let you install those things right alongside the Ansible Tower control plane. This can cause a lot of problems. So execution environments take those dependencies, the unit that is the environment that you need to run your automation in, and containerize it. You were just talking about this from the containerization perspective, right? We're going to build more easily isolated, easy-to-use, distinct units of environments that will let you run your automation. This is great. This lets you, the person who's building the content for your organization, develop it and test it and send it through the CI process all the way up through production, and it's the exact same environment.
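At the time of this interview the tooling for this was still emerging; as a rough sketch, ansible-builder's definition format captures the "same environment everywhere" idea by declaring every dependency layer in one file. The file names below are the conventional defaults, not requirements, and the exact schema may differ between versions.

```yaml
# execution-environment.yml -- everything the automation needs to run,
# baked into one container image
version: 1
dependencies:
  galaxy: requirements.yml    # Ansible collections
  python: requirements.txt    # Python libraries those collections import
  system: bindep.txt          # OS-level packages
```

Built with something like `ansible-builder build --tag my-ee .`, the same image can then back local development, CI, and production runs, which is exactly the "develop, test, and ship the identical environment" flow described here.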
You can feel confident that the automation you're running, the libraries and the modules, the version of Ansible that you're using, is the same when you're developing the content as when you're running it in production for your business, for your users, for your customers. >> And that's the Nirvana. This is really where you talk about pushing it to new limits. Real quick, just to kind of close it out here for AnsibleFest 2020: obviously this year is virtual, people aren't there in person, and it's usually a really intimate event. Last year was awesome; we had theCUBE set right there, a great event. Beyond the videos and the media content, what do you guys have for people? What's the main theme, Robyn and Matt, and what resources are available for folks who want to learn more about what's going on in the community? Can you each take a minute to talk about some of the exciting things going on at the event that they should pay attention to? And obviously it's asynchronous, so they can go anywhere, anytime they want, it's the internet. Where can they go to hang out? Is there a hang space? Just give the quick two-second commercial. Robyn, we'll start with you. >> All right. Well, of course you can catch the keynotes early in the morning. I look forward to everybody's super exciting, highly polite comments, 'cause I hear there's a couple people coming to this event, at least a few. I know within the event platform itself, there are chat rooms for each track. I myself will probably be hanging out in some of the diversity and inclusion spaces, honestly, and this is part of my keynote. You know, one of the great things about AnsibleFest, for me, and I was at the original AnsibleFest that had like 20 people in Boston in 2013.
And it happened directly across the street from Red Hat Summit, which is why I was able to just ditch my job and go across the street to my future job, so to speak. We were... Well, I just lost my whole train of thought and ruined everything. Jeez. >> We got that you're going to be in the chat rooms for the diversity and community piece. Off platform, is there a Slack? Is there a site? Anything else? 'Cause, you know, when the event's over, they're going to come back and consume on demand, but also the community, is there a Discord? I mean, all kinds of stuff's going on, popping up with these virtual spaces. >> One thing I should highlight is we do have the Ansible Contributor Summit, which goes on the day before AnsibleFest and the day after AnsibleFest. Now, normally this is a pretty intimate event. With the large outreach that we've gotten with this Fest, which is much, much bigger than the original one, and with signing up for the Contributor Summit being part of the registration process for AnsibleFest, we've actually geared the first day of that event towards new or aspiring contributors, rather than the traditional format that we've had, which is where a lot of engineers, as you may remember, sit down physically or in a virtual room and really talk about all of the things going on under the hood. Which, you know, can be intimidating for new people: "I just wanted to learn how to contribute, not how to do surgery." So the first day is really geared towards making everything accessible to new people, because it turns out there's a lot of new people who are very excited about Ansible, and we want to make sure that we're giving them the content that they need. >> Think about architects. I mean, SREs are jumping in. Matt, you talked about large scale. You're the chief architect, new blood's coming in.
But give us an update on your perspective, what people should pay attention to at the event, after the event, communities they could be involved in, certainly people want to tap into you, you're an expert, and find out what's going on. What's your comment? >> Yeah, you know, we have a whole new session track this year on architects, specifically for SREs and automation architects. We really want to highlight that. We want to give that sort of empowerment to the personas of people who, you know, maybe you're not a developer, maybe you're not operations or a VP of your company. You're looking at the architecture of automation, how you can make our automation better for you and your organization. Everybody's suffered a lot and struggled with COVID-19. We're no different, right? We want to show how automation can empower you, empower your organization and your company, because we've struggled also. And we're excited about the things that we want to deliver in the next six months to a year. We want you to hear about those. We want you to hear about content and collections. We want you to hear about scalability, execution environments, we're really excited about what we're doing. You know, use the tools that we've provided in the AnsibleFest event experience to communicate with us, to talk to us. You can always find us on IRC, via email, on GitHub. We want people to continue to engage with us, our community, our open source community, to engage with us in the same ways that they have. And now we just want to share the things that we're working on, so that we can all collaborate on it and automate better. >> I'm really glad you said that. I mean, again, people are impacted by COVID-19. It sounds like all channels are open. I got to say, of all the communities that are having to work from home and are impacted by digital, developers probably are less impacted.
They've got more time, they don't have to travel, they can hang out, and they're used to some of these tools. So I guess the strategy is turn on all the channels and engage in new ways. And that seems to be the message, right? >> Yeah, exactly. >> All right, Robyn Bergeron, great to see you again, Matt Jones, great to chat with you, chief architect for the Ansible Automation Platform, and of course Robyn, senior manager for the community team. Thanks so much for joining me today. I appreciate it. >> Thank you so much. >> Okay. It's theCUBE's coverage. I'm John Furrier, your host. We're here in the studio in Palo Alto. We're virtual. This is theCUBE virtual with AnsibleFest virtual. We're not face to face. Thank you for watching. (calm music)
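Matt's point about running the same Ansible version and libraries when developing content as when running it in production is what the execution environments he mentions are for: the runtime, collections, and Python dependencies get baked into one container definition. As a hedged sketch (the base image, collection name, and library below are illustrative assumptions, not details from the interview), an ansible-builder definition might look like:

```yaml
# execution-environment.yml -- a sketch of an ansible-builder
# definition; image and dependency choices here are placeholders.
version: 3
images:
  base_image:
    name: quay.io/ansible/ansible-runner:latest
dependencies:
  galaxy:
    collections:
      - name: ansible.posix   # example collection; pin versions as needed
  python:
    - boto3                   # example library your content relies on
```

Building this once and pointing both development and production jobs at the resulting image is one way to get the dev-to-prod consistency described above.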
Suresh Menon, Informatica | CUBE Conversation, July 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hello, everyone. Welcome to this CUBE conversation. I'm John Furrier, host of theCUBE. We're here in our Palo Alto studios in California for a CUBE conversation with Suresh Menon, who's the senior vice president and general manager of the master data group at Informatica. Suresh, great to see you. We couldn't see you in person. Three-time CUBE alumnus at Informatica World, industry executive. We're remote. Great to see you. >> Good to see you, John. Great to be back. Wish this was in person, but I think this is fantastic. >> Well, one of the things that's clear in my interviews over the past four months, we've been doing our best to hit the road, and we've got a quarantine crew here. We're doing our part telling the stories that matter. Data matters now more than ever. COVID-19 has shown that for the companies that are prepared, that have done the work for digital transformation, you know, putting the cliche aside, it's real and the benefits are definitely there. And you're seeing things like reaction time, war rooms being put together, because business still needs to go on. This is the reality. And so companies are seeing some exposure and some opportunities, and so a lot of things are going on. So I want to get your reaction to that, because there are changes in how customers are evolving with data. You guys have been at the forefront of that, pioneering this horizontal data fabric, data 4.0, as it's been talked about. What are you seeing from customers? How are they approaching this? Because at the end of the day, they've got to come out of the pandemic with a growth strategy, and they've got to solve the problems they have today and be in position. What are you seeing for changes?
>> So one of the most important things that we started seeing, there were about three big trends that we began to see starting in about late March, and I'll share some of the data points that we saw across the world, starting with Italy, which was in the news earlier this year with the pandemic. We saw that in one week, the stats were that online or digital sales increased by 81% in a single week. And it's obvious: when you lock down a large population, commerce moves away from the brick and mortar kind of model to being completely online and digital. The other part of it is we had already started seeing a lot of our customers starting to struggle with supply chain issues. As borders started closing, opening, and then closing again, how do you maintain a resilient supply chain? And a resilient supply chain also means being able to be really agile in terms of trying to identify alternate supply sources, being able to quickly onboard new suppliers, maybe in different parts of the world that are not so affected. And then finally, the last piece we saw was every single CFO, chief financial officer, the people who run finance organizations at all of these companies. For them, it is almost as if you're driving down the highway and you suddenly enter this fog bank. The first reaction is to hit the brakes, of course, because you don't know what's (microphone cuts out) so every CFO around the world started saying, I need to be able to understand what my cash flow situation is. Where is it coming in from? Where is it going out of? How do I reconcile across the geographies, lines of business? Because everybody realized that without an adequate cash reserve, who knows how long this thing is going to carry on? We need to be able to survive. And then the fourth element that has always been important for our customers is all about customer engagement, getting the best possible customer experience.
That's just been turned up to 11, the volume, because, as organizations are seeing, there's disruption happening now. There are new ways in which consumers are going out there and buying products and services, and these things might stick. There's also an opportunity for some of these organizations to go out and enter markets, gain market share, in ways that they were not able to in the past. And then how do you come out of this, whenever that is? How do we come out of it? It's always by making sure you're retaining your customers and getting more of them. So the underpinnings across all of this, whether it's supplier data, whether it's getting the most accurate product information delivered to your online channels, whether it is being able to understand your supply chain holistically with our data platform under it, and then finally customer experience, which depends on understanding everything end to end, including everything you need to know about your customer. So data continues to be top of mind for all of these organizations. >> You know, Suresh, we've had conversations over the past three years, and I can remember them vividly, all about, and we've been really geeking out, but also getting very industry focused around, the enablement of data and doing all these things: horizontal scalability, application enablement, AI with CLAIRE. All these things are very relevant. But now with COVID-19, that future's been pulled to the present. It's accelerated so fast that everything's impacted the business model. You mentioned supply chain and cash flow. The business is right there, visible, and all these things are exposed, and that heightens the volume, as you said, and so everyone's seeing it happen. They can see the consequences, right? So this is the most real view we've ever had of whether digital transformation will happen.
So I want to get your thoughts on this, because I've been riffing on this idea of the future of work, the word work: workplaces, workforce, workloads, and workflows, right? So they all have work in them, right? We talk about workflows and workloads. That's a cloud term and a tech term. The workplace is the physical place, now home. The workforce is people, their emotional stability, their engagement. These things are all now exposed, and all this new data's coming in. Now the executives have to make these decisions. This has really been a forcing function. So first, I'm sure you agree with all that, but what's your reaction to it? Because this brings up challenges that customers are facing. What's your thoughts on this massive reality? >> Yeah, I mean, this is where I think the other domain that is very important, which is most important for organizations if they're going to be successful, is really that employee or workforce understanding. We talk about customer 360s; we have to talk about employee 360s, right? And tie that to locations. And there are very few enlightened organizations, I would say, maybe three, four, five years ago, who had said, we really do need to understand everything about employees: where they work from, what are the different locations they go to, whether it's home and whether it's the multiple office locations that the organization might have. And it started, quite realistically, in healthcare organizations. There's a large healthcare provider here in California who many, many years ago decided that they wanted to create an employee 360, considering it's doctors, it's nurses, it's hospital technicians and so on, who move from one hospital to another, to different outpatient clinics.
And we are in a disaster-prone state, and what they said is, I need to build this data foundation about my employees to understand where someone is at any given point in time and be able to place them, so that if there is, let's say, an earthquake in one part of the state, I want to know who's affected, and more importantly, who's not affected who can go out and help. And we started seeing that mindset now go across every single organization, organizations that said, hey, when the lockdowns started, I was not able to keep track of which of my employees were in the air at that time, crossing borders, stuck in different parts of the world. So as much as we talk about product, customer, financial data, supplier data, employee data, and an employee 360, now, with a lot of state and local governments creating citizen 360s, that has also become top of mind, because you have to be able to pull all of this data together, and it's not just your traditional structured data. We're also talking about all the data that you're getting, the interaction data from folks carrying their phones, mobile devices, the swipes that people are doing in and out of locations, being able to capture all of that, tie it all together. Again, we talk about an explosion in volume, and this is, I think, to your point: bringing in more automation with CLAIRE, with artificial intelligence, machine learning techniques, is really the only way to get ahead of this, because it's not humanly possible to say, as your data scales, we'll grow the number of people linearly. That's not going to happen. So technology, AI, has to solve it. >> Well, I want to get to AI in a second. It's on my list to ask you about CLAIRE, get the update there. But you mentioned the 360 view of business, and the employee angle's definitely relevant. Talk more about this 360 business approach, how customers are approaching it across the enterprise. Certainly now more than ever, it's critical.
Right, so the 360s have always been around, John, and I think we've had these conversations about 360s for the last few years now, and a lot of organizations have gone out and created a 360 around whichever one specific business-critical domain they wanted a 360 of. So typically for most organizations, you're buying parts, raw materials from a supplier, so create a supplier 360. You really need to understand, is there risk there in the supply chain? Am I allowed to do business with a lot of these suppliers? It's data that helps them create that supplier 360. The product is always important, whether you're manufacturing your own, or if you're a retailer, you're buying these from your suppliers and then selling them via your different channels. And then finally, the third one was always customers, without which none of those organizations would be in business, so customer 360 was always top of mind. And there are ancillary domains, whether that's the employee 360 we just talked about or finance 360, which are of interest maybe to specific lines of business. These are all being done in silos. If you think about creating a full 360 profile of your suppliers, of your products, of your customers, the industry has been doing it now for a few years, but what this pandemic has really taught a lot of organizations is that now it's important to use that platform to start connecting (microphone cuts out) a line all the way from your customers, via their experience, all the way back to your suppliers and all the different functions and domains and 360s that it needs to touch. And the most, I guess, real-world example a lot of us had to deal with was the shortages in the grocery stores, right? And that ties all the way back to the supply chain. And you're not providing your best possible customer experience if the goods and products and services that customers want to buy from you are not available.
That's when organizations started realizing, we need to start connecting the customer profiles, their preferences, to the products, our inventory, all the way back down to suppliers, and ask, for example, can we turn up production in a particular factory? But maybe that location is under one of the most stringent lockdown conditions and we're not able to bring in or increase capacity there. So how do you get a full 360 across your entire business, starting with the customer all the way back to the supplier? That is what we mean by the end-to-end 360 view of a business, or, because there's too many words, we just call it business 360. >> Yeah, it's interesting, and I'm interviewing a lot of your customers lately and talking about some of the situations around COVID. There's pre-COVID, during COVID, and now looking after COVID. Some have been very happy and well-prepared because they have been using, say, Informatica, and had done the work and are taking advantage of those benefits. I've talked to other practitioners who are struggling with trying to figure out how to architect, because what your customers who've been successful have been telling me is, look, we're in good shape right now because we did the work prior to COVID. And now they are being forced to have a 360 view, not because it's a holistic corporate mission; it's that they have to, right? People are at home, so it's not like, hey, let's get a 360 view of the business and do an assessment and do better and enable things. No, no, no. There's business pressure. So they're enabled. Now new types of data are coming in. So again, back to the catalog and back to some of the things that you guys have been working on.
How do you talk to your customers now that they're in COVID, both the ones that were set up before COVID and the ones now coming to the table saying, okay, I need to get quickly deployed with Informatica while I'm in the middle of COVID, so I can have a growth strategy coming out of it, so I don't make these mistakes again. What's your thoughts? >> Absolutely, and I think that whether a customer has already laid the groundwork, has the foundation before COVID, or is one of the ones now moving full steam ahead because they're missing capabilities in those functions, the conversation is in reality more or less the same. Because even for those who have the foundation, what they're starting to see is new forms of data coming in, new requirements being placed by the business on that data infrastructure, and the need to, most importantly, react very, very quickly. And even for those who are starting off right now from scratch, it's the same thing: need to get up and running, need to get the answers to these questions, need to get the solutions to these problems as soon as possible. And the theme, or I guess the talking points, for both of those customers is really two things. One is you need agility. You need to be able to bring these solutions to life, delivering as soon as possible, which means the capabilities, the solutions you need, whether it's bringing in the catalog, understanding where your data is very, very quickly, your business critical information. How do you bring that in, all of that data, and integrate that data into a 360 solution, be able to make sure it's of the highest quality, enrich it, master it, create those 360 profiles by joining it to all of this interaction, transaction data.
All that has to be done with the power of technologies like CLAIRE, with artificial intelligence, so that you are up and running in a matter of days or weeks, as opposed to months and years, because you don't have that time. And then the other one, which is quite important, is cloud, because all of this capability needs infrastructure, hardware to run on. And we've started seeing a lot of, let's say, cloud-hesitant verticals, entire verticals, in the last two to three months suddenly going from, yeah, cloud is maybe somewhere down the road as far as our future's concerned, to now saying, we understand that we have to go to the cloud when our technicians are not able to get access to our data centers to add new machinery to take care of the new demands. That's the migration to cloud. So it's that agility and cloud which really is the common theme when we talk to customers, both- >> Yeah, and now more than ever, they need it, 'cause it's an important time, and it's going to be an inflection point, for sure. There'll be winners and losers, and people want to be on the right side of history here. Suresh, I got to ask you about AI. Obviously CLAIRE's been a big part of it. Now more than ever, if you have bad data, AI can be bad too. So understanding the relationship between data and AI is super important. This is going to be critical to help people move faster and deal with all the data they're dealing with now. What's your thoughts on the role AI will play? >> Oh, AI has a huge role to play. It's already begun to play a huge role in our solutions, from catalog to integration to 360 solutions. The first thing AI can really help with is this: take supply chain. There were maybe three sources of supplier data that used to come into creating a supplier 360. Today, there are hundreds of sources.
If you go all the way to the customer 360, we are talking about 1,300 to 1,400 different sources of data, with 90% of them sitting up in the cloud. How is it humanly possible to bring all of that data together? First of all, understand where customer information is sitting across all of those different places, whether it's your clickstream data, call log data, whether it's the actual interaction data that customers are having in-store or online, collecting all of that information, and from your traditional systems like CRM, ERP, and billing, bringing all that together. For understanding where it is, catalog gives you that Google for the enterprise view, right? It tells you where all this data is. But then once you've got that there, it also tells you what its relative quality is, what needs to be done to it, how usable it is. To your point, if it's bad data, at least what AI can do first of all is tell you that these are unreliable attributes, and these are ones that can be enriched. And then, and this is where AI now moves to the next level, which is to start inferring what kind of rules are in our, let's say, repository across integration, quality, matching, and mastering, bring all that together, and say, here, you as the developer who's been tasked with making this happen in a matter of days, we are going to infer for you what you need to do with this data, and then we will be able to go in and bring all these sources in, connect it, load it up into a 360 solution, and create those 360 profiles for everybody downstream, whether it's your engagement systems or others. So it's really about that discovery, that automation, as well as the ability to refine and suggest new rules in order to make your data better and better as you go along. I think that's really the power of CLAIRE and AI. >> I love the Google for the enterprise view of data, because the metaphor really is about finding what you're looking for.
It's the discovery piece, as you said, to make it easy, and Google did make it easy to find things, which is what their search engine did. But if you look at what Google did after that, they had to operate at huge scale. SREs is what they call them, site reliability engineers, one engineer for thousands and thousands of servers, which revolutionized IT and cloud. You guys are kind of thinking the same way, data scale, right? So it's Google in terms of discovery, right? Find what you're looking for, catalog it, get it in and get it out, make it available for applications. But you're kind of teasing out this other point where the AI comes in. That's scale. >> Yes. >> That's a super important nuance. >> Absolutely. >> But it's key to the future. >> Absolutely, because we are now talking about not just tens of millions of records when it comes to customer data or product experience data and so on. We are already talking about organizations like Dell, for example, with our customer 360, with billions of records going in, which would be equivalent to the scale of the Google search engine business back maybe 10, 12 years ago. So yes, within the context of a single organization or a single company, we're already talking about volumes that were unthinkable even five years ago. So being able to manage that scale means having architectures, technologies that are able to autoscale, and the advantage of course is that now we've got an architectural platform that has microservices. As loads start increasing, we're able to spawn new instances of those microservices seamlessly. Again, this is another part where AI comes in. In the old days, somebody had to see that the CPUs were overloaded to about 100% before someone realized we had to go out and do something about it.
In this new world, with AI managing the ops layer, it's about being able to look at whether this customer, in the cloud world, in a SaaS world, is bringing in a billion records that they want to push through in the next 10 minutes, being able to anticipate that, spawn the new infrastructure and the microservices, take care of that load, and then dial those back down when the work is done. So from an ops perspective as well, we are able to scale: instead of having, let's say, 1,000 SREs, to your example, John, we have only 10 SREs who look at the dashboard and make sure everything is going well. >> Well, I've been covering you guys for a long time. You guys know that. And I'm a big fan. I've always been a fan of the vision that's playing out. Large scale data, large scale discovery, fast and easy, integrating that into applications for business value. It's not just the data warehouse where you just park something over here. This is a mindset. It's a foundational enablement model. You guys have done an amazing job. And now more than ever, it's, I think, more understood because of the pandemic. >> Absolutely, and people are making that direct connection between the business outcome and the value of having this data foundation that does all the things we described.
We're seeing it, this is a foundational thing. If it's not enabling value, then it's not going to be a good solution. This is the new criteria. This is where action matters. People who need data and need to integrate into new workloads, new applications across workforces and new workplaces. This is the reality of the future. I'm John Furrier with theCUBE. Thanks for watching. (bright music)
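The 360 approach Suresh describes, pulling records about the same customer from many sources into one profile, can be sketched in miniature. This is an illustrative toy, not Informatica's actual matching or mastering logic; the match key (lowercased email) and the "most recent value wins" survivorship rule are assumptions for the example:

```python
def build_360(records):
    """Merge per-source customer records into 360 profiles.

    Records are grouped on a simple match key (lowercased email);
    for each attribute, the value from the most recently updated
    record survives. A toy sketch only: real MDM uses fuzzy
    matching and richer survivorship rules.
    """
    profiles = {}
    # Process oldest records first so newer values overwrite them.
    for rec in sorted(records, key=lambda r: r["updated"]):
        key = rec["email"].strip().lower()
        profile = profiles.setdefault(key, {})
        # Only non-empty values survive into the profile.
        profile.update({k: v for k, v in rec.items() if v is not None})
    return profiles

records = [
    {"email": "Ann@example.com", "name": "Ann", "phone": None, "updated": 1},
    {"email": "ann@example.com", "name": "Ann Lee", "phone": "555-0100", "updated": 2},
    {"email": "bob@example.com", "name": "Bob", "phone": None, "updated": 1},
]
profiles = build_360(records)
print(len(profiles))                        # 2 distinct customers
print(profiles["ann@example.com"]["name"])  # Ann Lee (most recent wins)
```

Real mastering adds fuzzy matching, per-source trust scores, and lineage, but the group-then-survive shape is the same.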
Jeff Dickey & Jonsi Stefansson, NetApp | AWS Summit New York 2019
>> Announcer: Live from New York, it's theCube! Covering AWS Global Summit 2019. Brought to you by Amazon Web Services. >> Welcome back, here in New York City for the AWS Summit. I'm Stu Miniman and my cohost is Corey Quinn. And I'm happy to welcome two guests from NetApp. First, to my right, welcome back to the program from another cloud show earlier this year, Jonsi Stefansson, who's the CCO and Vice President for Cloud Services. And to his right, well, it's a first time on the program. I actually was on one of his earlier podcasts: Jeff Dickey, who's joining NetApp as the chief technologist inside that same cloud and data services group. Jeff, welcome, and Jonsi, welcome back. >> Thank you, Stuart. >> Thank you. >> Okay, so Jonsi, let's start with you. So we've watched the cloud and data services group. In my words, it's almost like I want a new brand. It's like, this is not the ONTAP-everywhere, you know, best-NFS, number-one thing anymore; it's about multi cloud, it's about getting the value out of my data, that transformation we've seen overall in what was known as the storage industry. There are a lot of new people, a lot of new products, and it's the "and", I think that was one NetApp term: all of the history and the things you could trust, but a lot of new things. So give us the updates on what's exciting in your world. >> Yeah, absolutely. I mean, of course we are still relying on that old, trusted ONTAP and WAFL storage operating system in the back end, but we have abstracted a lot of that into more automation, or you're consuming it in a more autonomous way. We are actually taking all the storage knobs that the traditional storage admin is really used to, you know, tweaking and all of that. That's all done and managed by us. It's fully as a service, and we are more focused on the data management capabilities of ONTAP than the actual storage system or the performance of that storage operating system.
I mean we are in a very unique position as NetApp. I mean we have a very strong foothold in the enterprise. And now we have integrated services with all the public clouds. I mean fully native integrated services, either going through their own consoles or their own APIs or with our own UI. So the data management capability that we are actually bringing to the table is that you can seamlessly migrate from the core to the edge and to the cloud, depending on where you want your data to reside. So our goal is actually to do something very similar to what Kubernetes has done to the application layer. They have made it completely mobile; there is no longer that VM format issue that you had in the old days. It's basically just a kernel module, I can move it wherever, on top of a hypervisor of choice or a public cloud of choice. But the data has always been sort of left behind on some proprietary box sitting there. NetApp, like I said, NetApp is in this very unique position of being able to move, migrate, replicate and split the data according to your strategy, whether it's on-premise or the public cloud. >> All right, Jeff, would love to hear your viewpoint on what you're hearing from customers. I've known you for many years. Talk about that journey towards cloud, and what is cloud and how does it fit into their customer environment. Give us what brought you into NetApp and some of the conversations you're having as you've been digging in with the NetApp team. >> Well, coming to NetApp is actually a long story. I've known the Green Cloud folks for a long time.
I think I was the first kind of US partner of theirs, and had been a big fan of first their cloud and then their software, so I was really excited when that acquisition happened, and you know, for about a year I was learning the stuff they were working on, and that was blowing my mind. And again, I've worked with almost every storage company out there, so it was exciting to see the future of what was happening. And then came the acquisition of Stackpoint, which I was currently working with, so it's like NetApp kind of took my two favorite companies in a short time. So I said, hey, I want to be working on... you guys are doing the coolest stuff that I've seen right now and the roadmap is blowing my mind, I want to join. So it's been a great time here. I think what's most unique, what I've found, is that the typical thing, when you're doing cloud consulting, is you go after the low-hanging fruit. It's a very simple strategy. You know, if you were to go to a customer and say, "Let's take your highest demanding, "most revenue generating systems "and we're going to migrate those to AWS first," well, they're going to look at the $10 billion contract and, you know, the two year engagement and say no, we're not going to do that. You go for the low-hanging fruit. But because of the products that have come out and what we're doing in the public clouds, for the first time we have NFS, you know, basically an SLA-performant file system in the cloud that can handle the biggest, baddest on-prem apps.
So now that we're able to do that, customers are now taking those big ones, and it's accelerating the whole journey to the cloud. Because instead of creating more of a chasm between your public cloud infrastructure and your on-prem... there's a lot of people, you know, face it, if you've got a $50 million budget, you're putting it mostly into cloud, and some of your on-prem, which again is still generating a lot of revenue, is not getting the love it needs and it's not becoming cloud either, and you have this kind of chasm. So I think it's great that with the customers we're working with, they're very excited to be moving what they thought they were never going to be able to move, because it just wasn't there. And now they have native connections to all the services they love, like, you know, here at AWS. So it's just great 'cause, you know, yes they're consolidating their data and you're having fewer silos, that's exciting. But what excites me most is what are they going to do next, and after that, what are they going to do next with that? Like as they learn how to use their data and connect more to cloud services, and our cloud services and the public cloud services, they're going to be able to do way more than they ever thought they would. >> Something that I think would resonate with a number of folks: I go a little bit back, I'm a little older than I look, although I wear it super well, and I cut my teeth on WAFL and working with SnapMirror and doing all kinds of interesting things with that. It's easy to walk around the expo hall, glance at it, and figure, huh, I see there's a NetApp booth. You must still be trying to convince AWS to let you shove a filer into us-east-1. That's not really what your company does anymore in the traditional sense, but I think a lot of people may have lost that message. From a cloud perspective, what is NetApp doing in 2019? >> So I mean we are really, really software focused.
So I mean we are doing a lot of work. We are containerizing that WAFL operating system, and we are really excited about announcing that today, launching as an alpha in October. That basically means that you could get all the ONTAP data management goodies on top of any storage operating system, on top of any physical or persistent discs, in any of these different public clouds: EBS volumes, Google PDs or Azure. We wanted to make it so anybody can actually deploy ONTAP. We've always had that story with ONTAP Select, but being able to containerize it means we can actually reap the benefits of Kubernetes when it comes to high availability, replication, auto-scaling and self-healing capabilities, to make it a much more robust scale-out as well as scale-up solution. So that's truly our focus. And our focus for 2019 is, of course... we've been really, really busy with our heads down coding for a long time. A very short time in NetApp terms, but in cloud terms, very, very long. Like for the last 18 months. But now we're really sort of integrating our entire portfolio, where we have monitoring, deep analytics, compliance, Kubernetes, storage providers, schedulers. So everything is sort of gelling together now. >> So I think back a couple of years ago, if you talked to Amazon, the answer to everything was move everything to the public cloud. Today, Amazon has at least admitted that hybrid cloud is a thing. They won't say hybrid necessarily, but you know, with Outposts and what they're doing with their partnership with VMware and the like, they're doing that. When I look at customers, most of them have multi cloud. Now when we say multi cloud it means they have lots of clouds, and whether or not they're tied together, they're not doing that, and while Amazon won't admit to it and isn't looking to manage in that environment, they're playing in that, because if I have lots of clouds, one of them is likely AWS.
NetApp sits at the intersection of a lot of this. You have your huge install base inside the data center, and you're working very much with Amazon and the other cloud providers. What I'm hoping to get from you is your insight on customers. You know, where are they today, what are they struggling with in that hybrid or multi cloud world, and where do you see things maturing as we go over the next couple of years? >> Well I mean, the fact of the matter is, 83% of all workloads still reside on-premise. Whether it stays like that or doesn't, I mean AWS is doing Outposts, Google is doing Anthos, Azure is doing Azure Stack. And the good thing is we are actually playing with all of them; we are collaborating on all these different projects, both on the storage layer as well as on the application life cycle management. From our point of view, it is really important that we start tying all the infrastructure-related stuff into the application layer, so you're actually managing everything from that layer and down. So for a developer like me, it's actually really simple to do all the tasks and completely manage my own solution. Of course I need operations to be managing the infrastructure, but I should be oblivious to it as a developer. And what we are actually seeing customers doing now, more and more, and it's actually really impressive coming here to New York and meeting all these financial companies... they have always been probably the slowest movers to the public cloud because of compliance reasons and other stuff, but they are actually really adopting it. They have segmented out their workloads and really know what teams are allowed to provision and are supposed to be running in the public cloud, in order to tap into the innovation that's happening there, and what teams are only allowed to work in on-premise environments. So it sort of relates to the true cloud concept.
The true cloud concept being: everything is a cloud and there is no lock-in; you have the freedom of choice of where to provision, where to spin up your workloads. So we're seeing that more and more from our customers. Wouldn't you agree? >> Yeah, totally agree. >> Yeah, Jeff, I wonder if you could give a little bit more. As you said, NetApp's done quite a few acquisitions in the last couple of years. What sort of things should people be thinking about NetApp that they might not have a couple years ago? >> Well, you know, I'll tell a quick story. My first day as a NetApp employee was at KubeCon in Seattle, and I remember I was wearing the NetApp badge, and I had a friend that I was partnered with, and he looked at my badge and says, "NetApp? "Like the box in the closet people?" And I was just like, well, I mean, not anymore. You know, and I think that's the biggest thing. >> You mean Network Appliance? >> Those of us that have known NetApp long enough. >> Now it's internet application, right? Now it's a little bit different. I think the big thing is, you know, it's not just storage. I mean storage is a key component, and it's very important, but that's not the only thing. And I think that on the cloud side it's very important, because we're still maintaining this relationship with our storage appliances and everything, but we have more buyers now, so we can go across the company and say, "What are you doing? "Are you an SRE? "Are you a developer lead? "Are you a VP of operations?" We have all these products that work for them, yet in the end, it's a single vision to the deep insights of everything they're doing with us. >> Just a quick followup on that. I think when NetApp bought a Kubernetes company, it was like, okay, I'm trying to understand how that fits. When I look at NetApp's biggest partners, I think VMware, Cisco, Red Hat, all going heavily after software solutions, including the Kubernetes piece, so what does NetApp do differently? Because you still have strong partnerships there.
>> I think we're in a strong place because now we're doing two things: we're bringing the apps to the data and the data to the apps. So it's, where do you want to be? There's the right place for your app. There's a lot of choice now, and now you can choose: where is this going to live best? Where is this going to operate? Where is this going to serve our customers best? What's going to be the most cost effective? You know, being able to deploy and manage... you know, type in a couple characters and your entire production Kubernetes deployment is backed up to where you want. Like, there's just, you know, the apps are nothing without data, the data is nothing without the app, right? So it's bringing those two together. I think it's very important to kind of get out there. My job is getting that out: that it's not storage silos, this is about your apps. What are you doing with it? Where do you want your apps, and what is that data, how is the data helping your apps grow? You know, we're helping people move forward and innovate faster with these products. >> I mean both companies, my company Green Cloud and the Stackpoint company, were really, really early adopters of Kubernetes, and we've always taken, both companies, a very application-centric point of view on Kubernetes, while most everybody else has taken a very infrastructure-centric approach. We were two companies staffed with just developers, and we always sort of felt like... because it's a very common misunderstanding that Kubernetes was actually built for developers. It wasn't. It is an infrastructure play, built and developed by the Google SREs to run code. So everything that we are adding on top of it and beneath it ties it all together. So I mean for a developer working on our Kubernetes offerings, he's basically working in his own element, he's just doing commits and magic happens in the background. We tie the development branch to a specific node pool.
We apply the staging branch to another one, and the production environment, once you commit that, then it actually goes through like an SRE process, where they are basically the gatekeepers, where they actually either allow it or say, hey, we found a bug, or we are not able to deploy this according to our standards. So tying it all together, all the way from the storage layer up to the application layer, is what we are all about. And I got the same question when we were acquired. When we were Green Cloud, we were in a really, really good situation where we had term sheets from three different companies. I'm not allowed to say which ones, but everybody, once I sold it to NetApp, they were like, "Why NetApp?" But if you go to KubeCon, and you are always there, there is a very live matrix of what the biggest problems are with Kubernetes, and persistent volume claims and storage and data management haven't been solved yet. And that's where we believe that we have a unique way of offering those data management capabilities all the way up the stack. >> All right, well, Jonsi and Jeff, thank you for giving us the update there, absolutely. For Corey Quinn, I'm Stu Miniman. We'll be at KubeCon later this year in San Diego, and we're at AWS re:Invent. Always go to theCUBE.net to see all the shows that we're at, as well as hit the search, and you can see the thousands of videos. Always no registration to be able to check that out, so check out all the interviews. And as always, thanks for watching theCUBE. (light techno music)
Sunil Potti, Nutanix | Nutanix .NEXT EU 2018
>> Live from London, England, it's The Cube covering .NEXT conference Europe 2018. Brought to you by Nutanix. >> Welcome back to London, England. This is The Cube's coverage of Nutanix .NEXT 2018. 3,500 people gathered to listen to Sunil Potti for the keynote this morning. >> Thanks, Stu. >> Sunil's the chief product and development officer with Nutanix. Glad we moved things around, Sunil, 'cause we know events, lots of things move, keynotes sometimes go long, but happy to have you back on the program. >> No, likewise, anytime. >> All right, so, I've been to a few of these, and one of the things I hope you walk us through a little bit. So Nutanix, simplicity is always at its core. I have to say, it's taken me two or three times hearing the new, broad portfolio, the spectrum, and then I've got core, I've got essentials, I've got enterprise. I think it's starting to sink in for me, but it'll probably take people a little bit of time, so maybe let's start there. >> I mean, I think one of the biggest things that happened with Nutanix is that we went from a few products just twelve months ago to over ten products within the span of a year. And both internally as well as externally, while the product values are obvious, it's more the consumption within our own sales teams, channel teams, as well as our customer base, that needed to be codified into something that could be a journey of adoption. So we took it customer inwards, as a journey that a customer goes through in adopting services in a world of multi-cloud. And before you get to multi-cloud, you have to build a private cloud that is genuine, as we know.
And before we do that, we have to re-platform your data center using HCI. So if you work backwards from that, you start with core, which is your HCI platform for modernizing your data center, and then you expand to a cloud platform for every workload, and then you can be in a position to actually leverage your multi-cloud services. >> Yeah, and I like that. I mean, start with the customer first, and I mean the challenge is, you know, every customer is a little bit different. You know, one of the biggest critiques is, you know, you say, okay, what is a private cloud? Because they tend to be snowflakes. Every one's a little bit different, and we have a little bit of trouble understanding where it is, or did it melt all over the floor. So give us a little bit of insight into that and help us through those stages, the crawl-walk-run. >> Yeah, I think the biggest thing everyone has to understand here is that these are not discrete moving parts. Core is obviously your starting point of leveraging compute and storage in a software-defined way, the way that Amazon launched with EC2 and S3, right. But then, every service that you consume on top of public cloud still leverages compute and storage. So in that sense, essentials is a bunch of additional services such as self-service, files, and so forth, but you still need the core to build on essentials, to build a private cloud. And then from there onwards, you can choose other services, but you're still leveraging the core constructs. So in that sense, I think, both architecturally, from a product perspective, as well as from a packaging perspective, that's why they're synergistic in the way that things have rolled out. >> Okay, so looking at that portfolio. A lot of the customers I work with now, they don't start out in a data center; they've already moved past that, right?
So they are leveraging a partner, the public cloud; they might not even be running virtual machines at all anymore. How does that fit into your portfolio? >> Yeah, I mean, increasingly what we are realizing, and you know, we've done this over the last couple of years, is, for example, with Calm: you can use Calm to manage just your public clouds, without even managing a Nutanix private cloud. Increasingly, with every new service that we're building out, we're doing it so that people don't have to pay the strategy tax of the stack. It needs to be done out of a desire of "I want to do it" versus "I need to do it." So, with Frame, you can get going on AWS in any region in an instant, or Azure. You don't need to use any Nutanix software. Same thing with Epoch, with Beam. So I think as a company, what we're essentially all about is saying, let us give you a cloud-service-like experience, maybe workload-centric, if it is desktops and so forth. Or if you are going to be at some point reaching a stage where you have to re-platform your data center to look like a public cloud, then we have the core, what we call the platform itself, that'll help you get there as well. >> So, looking at re-platforming that data center. If I were to do that now for a customer, I wouldn't be looking at virtual machines, storage, networking; I'd be looking at containers or serverless or, you know, the new stuff. Again, what is Nutanix's answer to that?
At least from a manageability perspective and an operations perspective, then the chances of you adopting or your enterprise adopting these new technologies becomes higher. So, for example, in Calm, we have this pseudonym called Kalm with a K, right. Which essentially allows Kubernetes containers to run natively inside a Calm blueprint, but coexist with your databases inside of EM because that's how we see the next-generation enterprise apps morphing, right. Nobody's going to rewrite my whole app. They're going to maybe start with the web tier and the app tier as containers, but my database tier, my message queue tier, is going to be as VMs. So, how does Calm help you abstract the combination of containers and VMs into a common blueprint is what we believe is the first step towards what we call a hybrid app. And when you get to hybrid apps, is when you can actually then get to eventually all of your time to native cloud apps. >> You know, one of the questions I was hearing from customers is, they were looking for some clarity as to the hybrid environments. You know, the last couple of shows, there was a big presence of Google at the show and while I didn't see Google here on the show floor, I know there was an update from kind of, GCP and AHV. Is Google less strategic now, or is it just taking a while to, you know, incubate? How do you feel about that? >> So the way that you'll see us evolve as we navigate the cloud partnerships is to actually find the sweet spot of product-market fit, with respect to where the product is ready and where the market really wants that. And some of it is going to be us doing, you know, a partnership by intent first and then as we execute, we try to land it with honest products. 
So, where we started off with Google, as you guys know, is to actually leverage the cloud platform side, core locator with Google data centers and then what we we've evolved to is the fact that our data centers can quote-unquote integrate with their data centers to have a common management interface, a common security interface and all, but we can still run as core-located ones. Where the real integration that has taken some time for us to get to is the fact that, look, in addition to Calm, in addition to GKE kind of things, is rather than run as some kind of power sucking alien on top of some Google hardware, true integration comes with us actually innovating on a stack that lands AH3 natively inside GCP and that's where nested virtualization comes in and we have to take that crawl-walk-run approach there because we didn't want to expose it to public customers what we didn't consume internally. So what we have with the new offering that now is called Test Drive is, essentially that. We've proven that AH3 can run a nested virtualization mode on GCP natively, you can core locate with the rest of GCP services, and we use it currently in our R&D environment for running thousands of nodes for pretty much everyday testing on a daily basis, right. And so, once customer interview expose that now as an environment for our end customers to actually test-drive Nutanix as a fully compatible stack though, on purpose, so you have Prism Central, the full CDP stack and so forth, then as that gets hardened over a period of time, we expose that into production and so forth. >> So there's one category of cloud I haven't heard yet, and that's the service providers. So Nutanix used to be a really good partner for service providers, you know, enabling them to deliver services locally to local geography, stuff like that, so what's the sense of Nutanix regarding these service providers currently? >> Yeah, I think that frankly, that's probably a 2019 material change to our roadmap. 
The analogy that I have is that when we first launched our operating system, we first had to do it with an opinionated stack using Supermicro. Most importantly, from an end-customer perspective, they got a single throat to choke, but also, equally importantly, it kept the engineering team honest, because we knew what it meant to pick up the pager for the full stack. Similarly, when we launched Xi, we needed to make sure we knew what SREs do, right, at that scale, and so that's why we started with our version of SMC on, you know, as you guys know, with Digital Realty as well as partners like Xterra. But very soon what you're going to see is, once we have cleared that opinionated stack, software-wise we're able to leverage it. Just like we went from Supermicro to Dell and Lenovo and seven other partners, you're going to see us create a Xi partner network, which essentially allows us to federate Xi as an OS into the service providers. And that's more a 2019-plus timeframe. >> Yeah, speaking along those lines, the keynote this morning, Karbon with a K, talked about Kubernetes. Talk about that; that's the substrate for Nutanix's push toward cloud native, so-- >> Yeah, I mean, I think you're going to hear that in the day two keynote as well. Basically, customers want, as I said, an operating system for containers that is based on well-known APIs like kubectl from Kubernetes and all that, but at the same time, it is curated to support all of the enterprise services such as volumes, storage, security policies from Flow, and, you know, the operational policies of containers shouldn't be any different from VMs. So think about it as: the developers still get a Kubernetes-like interface, they can still port their containers from Nutanix to any other environment, but from an IT ops side, it looks like Kubernetes, containers, and VMs are co-residing as a first-class option.
>> Yeah, I feel like there had been a misperception about what Kubernetes is and how it fits, you know. My take has been, it's part of the platform, so there's not going to be a battle for a distribution of Kubernetes, because I'm going to choose a platform, and it should have Kubernetes, and it should be compatible with other Kubernetes out there. >> Yeah, I mean, it's going to be like a feature of Linux. See, in that sense, there's lots of Linux distros, but the core capabilities of Linux are the same, right. So in that sense, Kubernetes is going to become a feature of Linux, or the cloud operating system, so that those least-common-denominator features are going to be there in every cloud OS. >> Alright, so Kubernetes is not differentiating, just expanding the platform. >> Enabling. >> An enabling piece. So, tell us, what is differentiating today? You know, what are the areas where Nutanix stands alone as different from some of the other platform providers of today? >> I think that, I mean obviously, whatever we do, we are trying to do it thoughtfully, with operational, you know, simplicity as a first-class citizen. Like, how many new screens do we add when we add new features? A simple example of that is when we did micro-segmentation. The point was to make sure you could go from choosing ten VMs to grouping them and putting a policy on them as soon as possible, with as little friction as possible in adopting a new product. So, we didn't have to "virtualize" the network; you didn't need to have VXLANs to actually micro-segment, just like in public cloud, right. So I think we're taking the same thing into services up the stack. A good one to talk about is Era, which is essentially looking at databases as the next complex beast of operational complexity. Especially Oracle RAC. And it's easier to manage Postgres and so forth, but what if you could simplify not just the open source management, but also the database side of it?
So I would say that Era would be a good example of a strategic value proposition, of what it means to create a one-plus-one-equals-three value proposition for database administrators. Just like we did that for virtualization administrators, we're now going after DBAs. >> Alright, well, Sunil, thank you so much. Wish we had another hour to go through it, but I'll give you the final word. As people leave London this year, you know, what should they be taking away when they think about Nutanix? >> I think the platform continues to evolve, but the key takeaway is that it's a platform company, not a product company. And with that comes the burden, as well as the promise, of being an iconic company for the next, hopefully, decade or so. All right, thanks a lot. >> Well, it's been a pleasure to watch the continued progress, always a pleasure to chat. >> Thank you. >> All right, for Joep Piscaer, I'm Stu Miniman, back with more coverage here from Nutanix's .NEXT 2018 in London, England. Thanks for watching the CUBE. (light electronic music)
SUMMARY :
Brought to you by Nutanix. 3,500 people gathered to listen to Sunil Potti. but happy to have you back on the program. I think it's starting to sink in for me, and then you expand to a cloud platform for every workload, and I mean the challenge is, you know, and so forth, but you still need the core A lot of the customers I work with now, So, with Frame, you can get going on AWS in any region or serverless or you know, the new stuff. They're going to maybe start with the web tier or is it just taking a while to, you know, incubate? And some of it is going to be us doing, you know, for service providers, you know, enabling them with our version of SMC on, you know, the keynote this morning, but at the same time, it is curated to support all about what Kubernetes is and how it fits, you know. Yeah, I mean, it's going to be like a feature of Linux. of the other platform providers of today? from the operational, you know, simplicity as people leave London this year, you know, I think the platform continues to evolve, to watch the continued progress, always a pleasure to chat. All right, for you Piskar, I'm Stu Miniman,
Dave Rensin, Google | Google Cloud Next 2018
>> Live from San Francisco, it's The Cube. Covering Google Cloud Next 2018, brought to you by Google Cloud and its ecosystem partners. >> Welcome back everyone, it's The Cube live in San Francisco. At Google Cloud's big event, Next 18, GoogleNext18 is the hashtag. I'm John Furrier with Jeff Frick, our next guest, Dave Rensin, director of CRE and network capacity at Google. CRE stands for Customer Reliability Engineering, not to be confused with SRE, which is Google's heralded program, Site Reliability Engineering, a game changer in the industry. Dave, great to have you on. Thanks for coming on. >> Thank you so much for having me. >> So we had a meeting a couple months ago and I was just so impressed by how much thought and engineering and business operations have been built around Google's infrastructure. It's a fascinating case study in the history of computing, you guys obviously power yourselves and the Cloud is just massive. You've got the Site Reliability Engineer concept that now is, I won't say is a boiler plate, but it's certainly the guiding architecture for how enterprises are going to start to operate. Take a minute to explain the SRE and the CRE concept within Google. I think it's super important that you guys, again, pioneered something pretty amazing with the SRE program. >> Well, I mean, like everything it was just formed out of necessity for us. We did the calculation 12 or 13 years ago, I think. We sat down with a piece of paper and we said, well, the number of people we need to run our systems scales linearly with the number of machines, which scales linearly with the number of users, and the complexity of the stuff you're doing. Alright, carry the two, divide by six, plot line. In ten years, now this is 13 or 14 years ago, we're going to need one million humans to run Google. And that was at the growth and complexity of 10 years ago or 12 years ago. >> Yeah, Search. (laughs) >> Search, right?
We didn't have Android, we didn't have Cloud, we didn't have Assistant, we didn't have any of these things. We were like, well that's not going to work. We're going to have to do something different and so that's kind of where SRE came from. It's like, how do we automate? The basic philosophy is simple: give to the machines all the things machines can do. And keep for the humans all the things that require human judgment. And that's how we get to a place where like 2,500 SREs run all of Google.
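The back-of-envelope math here, ops headcount scaling linearly with machines versus staying nearly flat under automation, can be sketched in a few lines of Python. The machines-per-operator ratio and team sizes below are invented for illustration, not Google figures:

```python
import math

# Illustrative sketch only: made-up numbers, not Google's actual data.
# If operators scale linearly with machines, headcount explodes;
# SRE-style automation aims to keep the team roughly flat,
# growing only slowly with the scale of the system.

def linear_ops_headcount(machines: int, machines_per_operator: int = 100) -> int:
    """Naive model: one human for every N machines."""
    return math.ceil(machines / machines_per_operator)

def automated_ops_headcount(machines: int, base_team: int = 50) -> int:
    """Automation model: a near-fixed team plus slow growth with scale."""
    return base_team + int(math.log2(max(machines, 1)))

fleet = 1_000_000  # hypothetical fleet size
print(linear_ops_headcount(fleet))     # 10000
print(automated_ops_headcount(fleet))  # 69
```

Under the linear model a million-machine fleet needs ten thousand operators at this ratio, and the number keeps climbing with growth; under the automated model the team barely grows, which is the whole argument for SRE.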
And they're like please use more words. And so we wrote a book. Right? And we expected maybe 20 people would read the book, and it was fine. And we didn't do it for any other reason other than that seemed like a very scalable way to tell people the words. And then it all just kind of exploded. We didn't expect that it was going to be true and so a couple of years ago we said, well, maybe we should formalize our interactions of, we should go out proactively and teach every enterprise we can how to do this and really work with them, and build up muscle memory. And that's where CRE comes from. That's my little corner of SRE. It's the part of SRE that, instead of being inward focused, we point out to companies. And our goal is that every firm from five to 50 thousand can follow these principles. And they can. We know they can do it. And it's not as hard as they think. The funny thing about enterprises is they have this inferiority complex, like they've been told for years by Silicon Valley firms in sort of this derogatory way that, you're just an enterprise. We're the innovate-- That's-- >> Buy our stuff. Buy our software. Buy IT. >> We're smarter than you! And it's nonsense. There are hundreds and hundreds of thousands of really awesome engineers in these enterprises, right? And if you just give them a little latitude. And so anyway, we can walk these companies on this journey and it's been, I mean you've seen it, it's just been snowballing the last couple of years. >> Well the developers certainly have changed the game. We've seen with Cloud Native the role of developers doing toil and, or specific longer term projects at an app related IT would support them. So you had this traditional model that's been changed with agile et cetera. And dev ops, so that's great. So you know, golf clap for that. Now it's like scale >> No, more than a golf clap, it's been real. >> It's been a high five. Now it's like, they got to go to the next level.
The next level is how do you scale it, how do I get more apps, how am I going to drive more revenue, not just reduce the cost? But now you got operators, now I have to operate things. So I think the persona of what operating something means, what you guys have hit with SRE, and CRE is part of that program, and that's really I think the aha moment. So that's where I see, and so how does someone read the book, put it in practice? Is it a cultural shift? Is it a reorganization? What are you guys seeing? What are some of the successes that you guys have been involved in? >> The biggest way to fail at doing SRE is to try to do all of it at once. Don't do that. There are a few basic principles that, if you adhere to them, the rest of it just comes organically at a pace that makes sense for your business. The easiest thing to think of, is simply-- If I had to distill it down to a few simple things, it's just this. Any system involving people is going to have errors. So any goal you have that assumes perfection, 100% uptime, 100% customer satisfaction, zero errors, that kind of thing, is a lie. You're lying to yourself, you're lying to your customers. It's not just unrealistic, it's, in a way, kind of immoral. So you got to embrace that. And then that difference between perfection and the amount, the closeness to perfection, that your customers really need, cuz they don't really need perfection, should be just a budget. We call it the error budget. Go spend the budget, because above that line your customers are indifferent, they don't care. And that unlocks innovation.
>> So you just got to factor in and how you deal with them is-- But explain this error budget, because this operating philosophy of saying deal with errors, so explain this error budget concept. >> It comes from this observation, which is really fascinating. If you plot reliability and customer satisfaction on a graph, what you will find is, for a while as your reliability goes up, your customer satisfaction goes up. Fantastic. And then there's a point, a magic line, after which you hit this really deep knee. And what you find is if you are much under that line your customers are angry, like pitchforks, torches, flipping cars, angry. And if you operate much above that line they are indifferent. Because the network they connect with is less reliable than you. Or the phone they're using is less reliable than you. Or they're doing other things in their day than using your system, right? And so, there's a magic line, actually there's a term, it's called an SLO, Service Level Objective. And the difference between perfection, 100%, and the line you need, which is very business specific, we say treat as a budget. If you overspend your budget your customers aren't happy, cuz you're less reliable than they need. But if you consistently underspend your budget, because they're indifferent to the change and because it is exponentially more expensive for incremental improvement, that's literally resources you're wasting. You're wasting the one resource you can never get back, which is time. Spend it on innovation. And just that mental shift, that we don't have to be perfect, lets people do open and honest, blameless postmortems. It lets them embrace their risk in innovation. We go out of our way at Google to find people who accidentally broke something, took responsibility for it, redesigned the system so that the next unlucky person couldn't break it the same way, and then we promote them and celebrate them.
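The error-budget idea described here reduces to a few lines of arithmetic. This is a minimal sketch: the SLO target, measurement window, latency threshold, and toy request data are all assumptions for illustration, and what counts as a "good" request would be business-specific in practice:

```python
# Hedged sketch of the error-budget arithmetic described above.
# SLO target, window, threshold, and request data are invented examples.

GOOD_LATENCY_MS = 300  # assumption: a request is "good" if fast and successful
SLO_TARGET = 0.999     # "three nines" of good requests over the window

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed badness in the window for a given availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

# Measure the SLI from the user's side: (latency_ms, succeeded) pairs,
# as the user would experience them, not CPU or memory internals.
requests = [(120, True), (80, True), (450, True), (95, False), (110, True)]
good = sum(1 for ms, ok in requests if ok and ms <= GOOD_LATENCY_MS)
sli = good / len(requests)

print(f"30-day budget: {error_budget_minutes(SLO_TARGET):.1f} minutes")  # 43.2
print(f"current SLI: {sli:.3f}")  # 0.600 on this toy sample
print("within SLO" if sli >= SLO_TARGET else "burning error budget")
```

The point of the printout is the mindset shift: 43.2 minutes of monthly "badness" at a 99.9% SLO is not a failure, it is a budget to spend on launches and experiments.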
>> So you push the error budget but then it's basically a way to do some experimentation, to do some innovation >> Safely. >> Safely. And what you're saying is, obviously the line of unhappy customers, it's like Gmail. When Gmail breaks people are like, the world freaks out, right? But, I'm happy with Gmail right now. It's working. >> But here's the thing, Gmail breaks very, very little, very, very often. >> I never noticed it breaking. >> Will you notice the difference between 10 milliseconds of delivery time? No, of course not. Now, would you notice an hour or whatever? There's a line, you would for sure notice. >> That's the SLO line. >> That's exactly right. >> You're also saying that if you try to push above that, it costs more and there's not >> And you don't care >> An incremental benefit >> That's right. >> It doesn't affect my satisfaction. >> Yeah, you don't care. >> I'm at nirvana, now I'm happy. >> Yeah. >> Okay, and so what does that mean now for putting things in practice? What's the ideal error budget, that's an SLO? Is that part of the objective? >> Well that's part of the work to do as a business. And that's part of what my team does, is help you figure out, what is the SLO, what is the error budget that makes sense for you for this application? And it's different. A medical device manufacturer is going to have a different SLO than a bank or a retailer, right? And the shapes are different. >> And it's interesting, we hear SLA, the Service Level Agreement, it's an old term >> Different things. >> Different things, here, objective, if I get this right, is not just about speeds and feeds. There's also qualitative user experience objectives, right? So, am I getting that right? >> Very much so. SLOs and SLAs get confused a lot because they share two letters. But they don't mean anywhere near the same thing. An SLA is a legal agreement. It's a contract with your user that describes a penalty if you don't meet a certain performance.
Lawyers, and sometimes sales or marketing people, drive SLAs. SLOs are different things, driven by engineers. They are quantitative measures of your users' happiness right now. And exactly to your point, it's always from the user's perspective. Like, your user does not care if the CPU in your fleet spiked. Or the memory usage went up x. They care, did my mail delivery slow down? Or is my load balancer not serving things? So, focus from your user backwards into your systems and then you get much saner things to track. >> Dave, great conversation. I love the innovation, I love the operating philosophy, cuz you're really nailing it in terms of you want to make people happy but you're also pushing the envelope. You want to get these error budgets so we can experiment and learn, and not repeat the same mistake. That sounds like automation to me. But I want you to take a minute to explain, what SRE, that's an inward facing thing for Google, you are called a CRE, Customer Reliability Engineer. Explain what that is because I heard Diane Greene saying, we're taking a vertical focus. She mentioned healthcare. Seems like Google is starting to get in, and applying a lot of resources, to the field, customers. What is a CRE? What does that mean? How is that a part of SRE? Explain that. >> So a couple of years ago, when I was first hired at Google I was hired to build and run Cloud support. And one of the things I noticed, which you notice when you talk to customers a lot, is you know the industry's done a really fabulous job of telling people how to get to Cloud. I used to work at Amazon. Amazon does a fantastic job! Telling people, how do you get to Cloud? How do you build a thing? But we're awful, as an industry, about telling them how to live there. How do you run it? Cuz it's different running a thing in the Cloud than it is running it on-prem. And you find that's the cause of a lot of friction for people.
Not that they built it wrong, but they're just operating it in a way that's not quite compatible. It's a few degrees off. And so we have this notion of, well, we know how to operate these things at scale, that's what SRE is. What if, what if, we did a crazy thing? We took some of our SREs and instead of pointing them in at our production systems, we pointed them out at customers? Like what if we genetically screened our SREs for, can talk to human, instead of can talk to machine? Which is what you optimize for when you hire an engineer. And so we started CRE, it's this part of our SRE org that we point outwards to customers. And our job is to walk that path with you and really do it to get like-- sometimes we go so far as even to share a pager with you. And really get you to that place where your operations look a lot like, we're talking that same language. >> It's custom too, you're looking at their environment. >> Oh yeah, it's bespoke. And then we also try to do scale things. We did the first SRE book. At the show just two days ago we launched the companion volume to the book, which is like-- cheap plug segment, where it's the implementation details. The first book's sort of a set of principles, these are the implementation details. Anything we can do to close that gap, I don't know if I ever told you the story, but when I was a little kid, when I was like six, like 1978, my dad, who's always loved technology, decided he was going to buy a personal computer. So he went to the largest retailer of personal computers in North America, Macy's in 1978, (laughs) and he came home with two things. He came home with a huge box and a human named Fred. And Fred the human unpacked the big box and set up the monitor, and the tape drive, and the keyboard, and told us about hardware and software and booting up, because who knew any of these things in 1978? And it's a funny story that you needed a human named Fred. My view is, I want to close the gap so that CREs are the Freds.
Like, in a few years it'll be funny that you would ever need humans, from Google or anyone else, to help you learn how-- >> It's really helping people operate their new environment as a whole. It's a new first generation problem. >> Yeah. >> Essentially. Well, Dave, great stuff. Final question, I want to get your thoughts. Great that we can have this conversation. You should come to the studio and go deeper on this, I think it's a super important and new role with SREs and CREs. But the show here, if you zoom out and look at Google Cloud, look down on the stage of what's going on this week, what's the most important story that should be told that's coming out of Google Cloud? Across all the announcements, what's the most important thing that people should be aware of? >> Wow, I have a definite set of biases, I won't lie. To me, the three most exciting announcements were GKE On-Prem, the idea that managed Kubernetes you can actually run in your own environment. People have been saying for years that hybrid wasn't really a thing. Hybrid's a thing and it's going to be a thing for a long time, especially in enterprises. That's one. I think the introduction of machine learning to BigQuery, like anything we can do to bring those machine learning tools into these petabytes-- I mean, you mentioned it earlier. We are now collecting so much data, not only can we not, as companies, manage it, we can't even hire enough humans to figure out the right questions. So that's a big thing. And then, selfishly, in my own view of it because of reliability, the idea that Stackdriver will let you set up SLO dashboards and SLO alerting, to me that's a big win too. Those are my top three. >> Dave, great to have you on. Our SLO at The Cube is to bring the best content we possibly can, the most interviews at an event, and get the data and share that with you live. It's The Cube here at Google Cloud Next 18. I'm John Furrier with Jeff Frick.
Stay with us, we've got more great content coming. We'll be right back after this short break.