
Ev Kontsevoy, Teleport | AWS re:Invent 2022


 

>>Hello everyone and welcome back to Las Vegas. I've got my jazz hands because I am very jazzed to be here at AWS re:Invent, live from the show floor all week. My name is Savannah Peterson, joined with the infamous John Furrier. John, how are you feeling? >>Feeling great. Love it. What's going on here? The vibe is cloud, cloud native. Lot of security conversation, data, stuff we love. Cloud native, >>I mean, big news. Security, security data lake. I mean, who would've thought Amazon would have a security data lake? You know, EKS, I mean >>You might have with that tweet you had out >>Inside, outside the containers. Reminds me, it feels like KubeCon here. >>It honestly does, and there's a lot of overlap, and it's interesting that you mention KubeCon because we talked to the next company when we were in Detroit just a couple weeks ago: Teleport. Ev is the CEO and founder. Ev, welcome to the show. How you doing? >>I'm doing well. Thank you for having me today. >>We feel very lucky to have you. We hosted Drew, who works on the product marketing side of Teleport. Yeah, we got to talk caddies and golf last time on the show. We'll talk about some of your hobbies a little bit later, but just in case someone's tuning in, unfamiliar with Teleport, you're all about identity. Give us a little bit of a pitch. >>Little bit of our pitch. Teleport is the first identity-native infrastructure access platform. It's used by engineers and it's used by machines. So notice that I used a very specific choice of words: first identity-native. What does it mean, identity-native? It consists of three things, and we're writing a book about those, but I'll let you know. Stay >>Tuned on that front. >>Exactly, yes, but I can talk about 'em today. So the first component of identity-native access is moving away from secrets towards true identity. By secrets, I mean things like passwords, private keys, browser cookies, session tokens, API keys. All of these things are secrets and they make you vulnerable. The point is, as you scale, it's absolutely impossible to protect all of the secrets because they keep growing and multiplying. So the probability of you getting hacked over time is high. So you need to get rid of secrets altogether. That's the first thing that we do. We use something called true identity. It's a combination of your biometrics as well as the identity of your machines. That's TPMs, HSMs, YubiKeys and so on, so forth. >>Go >>Ahead. The second component is zero trust. Teleport is built to not trust the network. So every resource inside of your data center automatically gets configured as if there is no perimeter; it's as safe as if it were on the public network. So that's the second thing: don't trust the network. And the third one is that we keep access policy in one place. So Kubernetes clusters, databases, SSH, RDP, all of these protocols, the access policy will be in one place. That's identity. Okay, >>So I'm, I'm a hacker. Pretend I'm a hacker. >>Easy. That sounds, >>That sounds really good to me. Yeah, I'm supposed to tell 'em you're a hacker. Okay. I can go to one place and hack that. >>I get this question a lot. The thing is, you want centralization when it comes to security. Think about your house being your AWS account. Okay? Everything inside, your furniture, your valuables, like your watch collection, that's your data, that's your servers, Kubernetes clusters, so on and so forth. Now I have a choice, and your house is in a really bad neighborhood. Okay, that's the bad internet.
Do you wanna have 20 different doors or do you want to have one? But like amazing one, extremely secure, very modern. So it's very easy for you to actually maintain it and enforce policy. So the answer is, oh, you probably need to have >>One. And so you're designing security identity from a perspective of what's best for the security posture. Exactly. Sounds like, okay, so now that's not against the conventional wisdom of the perimeter's dead, the cloud's everywhere. So in a way kind of brings perimeter concepts into the posture because you know, the old model of the firewall, the moat >>It Yeah. Just doesn't scale. >>It doesn't scale. You guys bring the different solution. How do you fit into the new perimeters dead cloud paradigm? >>So the, the way it works that if you are, if you are using Teleport to access your infrastructure, let's just use for example, like a server access perspective. Like that machine that you're accessing doesn't listen on a network if it runs in Teleport. So instead Teleport creates this trusted outbound tunnels to the proxy. So essentially you are managing devices using out going connection. It's kind of like how your phone runs. Yeah. Like your phone is actually ultimate, it's like a teleport like, like I It's >>Like teleporting into your environment. >>Yeah, well play >>Journal. But >>Think about actually like one example of an amazing company that's true Zero trust that we're all familiar with would be Apple. Because every time you get a new iOS on your phone, the how is it different from Apple running massive software deployment into enormous cloud with billions of servers sprinkle all over the world without perimeter. How is it possible That's exactly the kind of technology that Teleports >>Gives you. I'm glad you clarified. I really wanted to get that out on the table. Cuz Savannah, this is, this is the paradigm shift around what an environment is Exactly. Did the Apple example, so, okay, tell 'em about customer traction. Are people like getting it right away? Are their teams ready? Are they go, oh my god this is >>Great. Pretty much you see we kinda lucky like in a, in a, like in this business and I'm walking around looking at all these successful startups, like every single one of them has a story about launching the right thing at just the right like moment. Like in technology, like the window to launch something is extremely short. Like months. I'm literally talking months. So we built Teleport started to work on it in like 2015. It was internal project, I believe it or not, also a famous example. It's really popular like internal project, put it on GitHub and it sat there relatively unnoticed for a while and then it just like took off around 2000 >>Because people start to feel the pain. They needed it. Exactly, >>Exactly. >>Yeah. The timing. Well and And what a great way to figure out when the timing is right? When you do something like that, put it on GitHub. Yeah. >>People >>Tell you what's up >>Yeah's Like a basketball player who can just like be suspended in the air over the hoop for like half the game and then finally his score and wins >>The game. Or video gamer who's lagged, everyone else is lagging and they got the latency thing. Exactly. Thing air. Okay. Talk about the engineering side. Cause I, I like this at co con, you mentioned it at the opening of this segment that you guys are for engineers, not it >>Business people. That's right. >>Explain that. Interesting. This is super important. Explain why and why that's resonating. 
>>So there is this ongoing shift on more and more responsibilities going to engineers. Like remember back in the day before we even had clouds, we had people actually racking servers, sticking cables into them, cutting their fingers, like trying to get 'em in. So those were not engineers, they were different teams. Yeah. But then you had system administrators who would maintain these machines for you. Now all of these things are done with code. And when these things are done with code and with APIs, that shifts to engineers. That is what Teleport does with policy. So if you want to have a set of rules that govern who or what and when under what circumstances can access what data like on Kubernetes, on databases, on, on servers wouldn't be nice to use code for it. So then you could use like a version control and you can keep track of changes. That's what teleport enables. Traditionally it preferred more kind of clicky graphical things like clicking buttons. And so it's just a different world, different way of doing it. So essentially if you want security as code, that's what Teleport provides and naturally this language resonates with this persona. >>Love that. Security is coding. It's >>A great term. Yeah. Love it. I wanna, I wanna, >>Okay. We coined it, someone else uses it on the show. >>We borrow it >>To use credit. When did you, when did you coin that? Just now? >>No, >>I think I coined it before >>You wanted it to be a scoop. I love that. >>I wish I had this story when I, I was like a, like a poor little 14 year old kid was dreaming about security code but >>Well Dave Ante will testify that I coined data as code before anyone else but it got 10 years ago. You >>Didn't hear it this morning. Jimmy actually brought it back up. Aws, you're about startups and he's >>Whoever came up with lisp programming language that had this concept that data and code are exact same thing, >>Right? We could debate nerd lexicon all day on the cube. In fact, that could even be a segment first >>Of we do. First of all, the fact that Lisp came up on the cube is actually a milestone because Lisp is a very popular language for object-oriented >>Grandfather of everything. >>Yes, yes, grandfather. Good, good. Good catch there. Yeah, well done. >>All right. I'm gonna bring us back. I wanna ask you a question >>Talking about nerd this LIS is really >>No, I think it's great. You know how nerdy we can get here though. I mean we can just hang out in the weeds the whole time. All right. I wanna ask you a question that I asked Drew when we were in Detroit just because I think for some folks and especially the audience, they may not have as distinctive a definition as y'all do. How do you define identity? >>Oh, that's a great question. So identity as a term was, it was always used for security purposes. But most people probably use identity in the context of single signon sso. Meaning that if your company uses identity for access, which instead of having each application have an account for you, like a data entry with your first name, last name emails and your role. Yeah. You instead have a central database, let's say Okta or something like that. Yep. And then you, you use that to access everything that's kind of identity based access because there is a single source of identity. What we say is that we, that needs to be extended because it it no longer enough because that identity can be stolen. So if someone gets access to your Okta account using your credentials, then they can become you. 
So in order for identity to be attached to you and become your true identity, you have to rely on physical-world objects. That's biometrics, like your facial print and your fingerprints, as well as the biometrics of your machine. Your laptops have TPM modules on them. They're absolutely unique. They cannot be cloned or stolen. So that is your identity as well. So if you combine whatever is in Okta with the TPM chip in this laptop and with your finger, that collectively is your true identity, which cannot be stolen. So it can't be hacked. >>And someone can take my finger like they did in the movies. >>So they would have to do that. And they would also have to >>Steal your Mac. Exactly, exactly. Yeah. And they'd have to have your eyes >>And they have to, and you have >>Whoever would go that far, they'd get what >>They want. So that is what true identity is from Teleport, and >>Biometrics. I mean, we're so there right now, it's really not an issue. It's only getting faster and better to >>Market. There is one important thing I said earlier that I want to go back to: that Teleport is not just for engineers, it's also for machines. Cuz machines, they also need identity. So when we talk about access silos, and that there are many different doors into your apartment, there are many different ways to access your data. So on the infrastructure side, machines are doing more and more. We are offloading more and more tasks to them. That's a really good question: what do machines use to access each other? They use API keys, they use private keys, they use basically passwords. Yeah. They're secrets, and we already know that that's bad, right? Yeah. So how do you extend biometrics to machines? This is why AWS offers the CloudHSM service. HSM is a hardware security module. That's a unique private key for the machine that is not accessible by anyone. And Teleport uses that to give identities to machines. >>Do customers have to enable that themselves, or do they have that as part of Amazon? >>So it's available on AWS. It's available actually in good old bare metal machines that have HSMs on the motherboard. And it's optional; by the way, Teleport can work even if you don't have that capability. But the point is that, if you >>Have a biometric equivalent for the machines, >>We take advantage of it. Yeah. It's a hardware thing that you have to have, and we all have it. Amazon sells it. AWS sells it to us. Yeah. And Teleport allows you to leverage that to enhance the security of the infrastructure. >>So that's the classic hardware-software play that we're always talking about here on the cube. It's all, it's all important. I think this is really fascinating though. So on the way to the show, I just enrolled in Clear and I had used a different email. I enrolled for the second time and my eyes wouldn't let me have two accounts. And this was the first time I had tried to sort of hack my own digital identity. And the girl, I think she was humoring me, the Clear employee who was kindly helping me. But I think she could tell I was trying to mess with it, and I wanted to see what would happen. I wanted to see if I could have two different accounts linked to my biometric data, and I couldn't; it picked it up right away. >>That's your true >>Identity. Yeah, my true identity. So, and forgive me cuz this is kind of just a personal question.
It might be a little bit finger-to-the-wind, but how, just how much more secure, if you could give us a rating or a percentage or a number: how much more secure is leveraging biometric data for identity than the secrets we've been using historically? >>Look, I could play this game with you and answer, like, infinitely more secure, right? But you know how security works: it all depends on implementation. So let's say you can deploy Teleport, you can put us on your infrastructure, but if you're running, let's say, a compromised old copy of WordPress that has a vulnerability, you're gonna get hacked through that angle. But >>Happens to my personal website all the time. You just touched on it. Yeah, >>But the fact is that I don't see how your credentials would be stolen in this system, simply because your TPM on your laptop and your fingerprint, they cannot be downloaded. A lot of people actually ask us a slightly different question. It's almost the opposite of it. Like, how can I trust you with my biometrics? When I use my fingerprint, that's my information. I don't want the company I work at to get my fingerprint. People, I think it's a legit question to ask. >>Yeah. And it's >>The answer to that question is your fingerprint doesn't really leave your laptop. Teleport doesn't see your fingerprint. What happens is, when your fingerprint gets validated, it's your laptop matching what's on the TPM. Basically Apple does it, and then Apple simply tells Teleport, yep, that's Ev or whoever. And that's what we are really using. So when you are using this form of authentication, you're not sharing your biometrics with the company you work at. >>It's a machine-to-human confirmation first and >>Then it's, it's basically you and the laptop agreeing that my fingerprint matches your TPM, and if your laptop agrees, basically the hardware does the validation. And Teleport simply gets that signal. >>So Ev, my final question for you: at the other show, KubeCon, great conversations there for your company. What are your conversations here like at re:Invent? Are you meeting with Amazon people, customers? What are some of the conversations? Because this is a much broader, I mean it's still technical. Yep. But you know, a lot of business kind of discussions, architectural refactoring of organizations. What are some of the things that you're talking about here with Teleport? >>So I will mention maybe two trends I observed. The first one is not even security related. It's basically how, as the cloud becomes more mature, people at different organizations now actually develop their own internal ways of doing cloud properly. And they're not the same. Because when cloud was earlier on, there were these best practices that everyone was trying to follow, and there was maybe just a lack of expertise in the world, and now we're finding that different organizations just do things completely differently. Like, for example, some companies love having a handful, ideally just one enormous Kubernetes cluster with a bunch of applications on it. And other companies, they create Kubernetes clusters for different workloads, and it's just all over the map, and both of them believe that they're doing it properly. >>Great example of bringing in, that's Kubernetes with the complexity. And >>That's kind of one trend I'm noticing. And the second one is security related.
It's that everyone is struggling with the access silos. Ideally, every organization is dreaming about a day when they have one place, with a great user experience, that simply spells out: this is the policy to access this particular data. And it gets automatically enforced by every single cloud provider, by every single application, by every single protocol, by every single resource. We don't have that yet. Unfortunately, well, Teleport is slowly becoming that, of course. Excuse me for plugging >>Teleport. No, no worries. >>But it is this ongoing theme, that everyone can't wait to have that single source of truth for accessing their data. >>The second person to say single source of truth on this stage in the last 24 >>Hours. Our nerds will love that. I >>Know. I feel, well, it all comes back to that. I keep using this tab analogy, but we all want everything in one place. We don't wanna, we don't wanna have to be going all over the place to look for it. >>Both. Because if it's in all these other places, it means that different teams are responsible for it. Yeah. So it becomes this kind of internal information silo as well. So you're not even, >>And the risks and liabilities there, depending on who's overseeing everything. That's awesome. Right? So we have a new challenge on the cube specific to this show. Think of this as your 30-second, or 30-minute, that would be bold, 30-second sizzle reel, Instagram highlight. What is your hot take? Most important thing, biggest theme of the show this year. >>This year. Okay, so here's my thing. I want cloud to become something I want it to be. And every time I come here, I'm like, are we closer? Are we closer? So here's what I want. I want all cloud providers collectively to kind of merge, so that when we use them, it feels like we are programming one giant machine. Kind of like in The Matrix, right? The movie. So I want cloud to feel like a computer, to have this almost intimate experience you have with your laptop. Like, you can do this and the laptop performs the instructions. And it feels to me that we are getting closer. So walking around here and seeing how everything works now, like the single sign-on, from a security perspective, that consolidation is finally happening. So it's >>The software mainframe, we used to call it back in 2010. >>Yeah, yeah. Just kind of a planetary-scale thing. Yes. It's not Zuckerberg who's building the metaverse, it's people here at re:Invent. >>Unlimited resources for developers. Just call it in. Yeah, yeah. Give me some resources, spin me up some, some compute. >>I would alter that slightly. I would just basically go and do this, and you shouldn't even worry about how it gets done. Just put instructions into this planetary mainframe and the mainframe will go and figure it out. Okay. >>We gotta take the blue or red pill. I >>Know. I was just gonna say, y'all, we are, this, this segment is lit. >>We got The Matrix. We got brilliant. We didn't get super cloud in here, but we, we can weave that in. We got >>Lisp. We just said it. So >>We got Lisp. Oh, great conversation. Cloud native. >>Outstanding conversation. And thank you so much for being here. We love having Teleport on the show. Obviously we hope to see you back again soon, and Drew as well. And thank all of you for tuning in this afternoon. Live from Las Vegas, Nevada, where we are hanging out at AWS re:Invent with John Furrier. I'm Savannah Peterson.
This is the Cube. We are the source for high tech coverage.
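Ev's point about keeping access policy in one place, expressed as code so it can be version controlled and reviewed, is easier to picture with a small sketch. The example below is illustrative only; it is not Teleport's actual configuration format or API, and the role names, protocols, and labels are made up for this example.

```python
# Illustrative sketch of "access policy as code" (not Teleport's actual format).
# The policy is plain data that can live in version control and be reviewed like code.

POLICIES = [
    {"role": "backend-dev", "protocol": "kubernetes", "labels": {"env": "staging"}},
    {"role": "backend-dev", "protocol": "db",         "labels": {"env": "staging"}},
    {"role": "sre",         "protocol": "ssh",        "labels": {"env": "prod"}},
]

def is_allowed(role: str, protocol: str, resource_labels: dict) -> bool:
    """Return True if some policy grants `role` access to a resource over `protocol`."""
    for p in POLICIES:
        if p["role"] != role or p["protocol"] != protocol:
            continue
        # Every label required by the policy must match the resource.
        if all(resource_labels.get(k) == v for k, v in p["labels"].items()):
            return True
    return False

# Example: a backend developer asking for a staging database, then a prod server.
print(is_allowed("backend-dev", "db",  {"env": "staging"}))  # True
print(is_allowed("backend-dev", "ssh", {"env": "prod"}))     # False
```

Because the policy is ordinary data in a file, changing who can reach production becomes a pull request with history and review, which is the security-as-code idea discussed in the segment.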

Published Date : Nov 30 2022



Martin Mao & Jeff Cobb, Chronosphere | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon everyone, and welcome back to KubeCon, where my cohost John Furrier and I are broadcasting live, along with Lisa Martin, from KubeCon in Detroit, Michigan. We are joined this afternoon by two very interesting gentlemen who also happen to be legends on the cube. John, how long have you known the next few guests? >>They've made their mark on the cube. Jerry Chen from Greylock was one of our most attended cube guests. He's a VC partner at Greylock and an investor in this company that just launched their new cloud observability platform. Should be a great segment. >>Well, I'm excited. Are you excited? Should I string this out just a little bit longer? No, I won't. I won't do that to you. Please welcome Martin and Jeff from Chronosphere. Martin, Jeff, thank you so much for being >>Here. Thank you for having us. Thank you. >>I noticed right away that you have raised a mammoth Series C. Yeah. 200 million, if I'm not mistaken. >>That is correct. >>Where's the company at? >>Yeah, so we raised that Series C a year ago. In fact, we were just talking about it a year ago at KubeCon. At the time we were about 80 employees or so. Since then, we've tripled the headcount, so we're over 200 people. Casual, just a casual tripling of the headcount. Yeah. Luckily it was supported by the business, which has also tripled in the last year. So we're very lucky from that perspective as well. And a couple of other things we're pretty proud of from last year: we've had a hundred percent customer retention, which is always a great thing to have as a SaaS platform. >>A real metric, if you've had a hundred percent. I'm >>Kidding. It's a good metric to put out there if you had a hundred percent, I would say, for sure. It's an A for sure, and exactly, welcome to meet >>Anyone else who's had a hundred percent >>Customer retention here at KubeCon this week. And 90% of our customers are using more of the service and, you know, therefore paying more for the service as well. So those are great signs for us, and I think it shows that we're clearly doing something right on the product side, I would say. And >>Last time you were on the cube, we were talking about the right data, not so much a lot of data, if I remember correctly. Yeah, a hundred percent. And that was a unique approach. Yeah, it's a data world when it comes to observability. And you guys just launched a new release of your platform, your cloud native platform. What's new in the platform? Can you share an update on what you guys released? >>Yeah, well, we did, and you, you bring up a great point. You know, it's not just in observability, but overall, data is exploding. Alright, so three things there. It's like, hey, can your platform even handle the explosion of data? Can it control it over time and make sure that as your business grows, the data doesn't continue to explode at the same rate? And then for the end users, can they make sense of all this data? Cuz what's the point of having it if the end users can't make sense of it? So actually our product announcement this time is a pretty big refresh of a lot of features in our platform. And it actually tackles all three of these particular components. And I'll let Jeff, our head of product, >>You, you run product, you get the keys to the kingdom, you do product roadmap. People saying, hey, take this out. You're under a lot of pressure. What makes the platform a great observability product?
>>So the keystone of what we do that's different is helping you control the data, right? As we're talking about there's an infinite amount of data. These systems are getting more and more and more complicated. A lot of what we do is help you understand the utility of the telemetry so that you can optimize for keeping and storing and paying for the data that's actually helpful as opposed to the stuff that isn't. >>What's the benefit now with observability, with all the noise out in the marketplace, there's been a shift over the past couple years. Cloud native at scale, you're seeing a lot more automation, almost a set to support the growth for more application development. We had a Docker CEO on earlier today, he said there are more applications being deployed in the past year than in the history of open source. So more and more apps are being deployed, more data's being generated. What's the key to observability right now that's gonna separate the winners from the losers? >>Yeah, I think, you know, not only are there more applications being deployed, but there are smaller and small applications being deployed mostly on containers these days more than if they, hence this conference gets larger and larger every year. Right? So, you know, I think the key is a can your system handle this data explosion is, is the first thing. Not only can it handle the data explosion, but you know, APM solutions have been around for a very long time and those were really introspecting into an application. Whereas these days what's more important is, well how is your application interfacing with every other application in your distributed architecture there, right? So the use case is slightly different there. And then to what Jeff was saying is like once the data is there, not only making use of what is actually useful to you, but then having the end user make sense of it. >>Because we, we, we always think about the technology changes. We forget that the end users are different now we used to have IT operations team operating everything and the developers would write the application, just throw it over the wall. These days the developers have to actually operate this thing in production. So the end users of these systems are very different as well. And you can imagine these are folks, your average developer as maybe not operated things for many years in production before. So they need to, that they need to pick up a new skill set, they need to use new tooling in order to, to do that. So yeah, it's, it's, >>And you got the developer persona, you got a developer that's building products for builders and developers that are building products to be consumed. So they're not, they're not really infrastructure builders, they're just app developers. >>Exactly. Exactly. That's right. And that's what a lot of the new functionality that we're introducing here at the show is all about is helping developers who build software by day and are on call by night, actually get in context. There's so much data chances of when that, when one of those pages goes off and your number comes up, that the problem happens to be in the part of the system that you know a lot about are pretty low, chances are you're gonna get bothered about something else. So we've built a feature, we call it collections that's about putting you in the right context and connecting you into the piece of the system where the problem is to orient you and to get you started. 
So instead of waiting through, through hundreds of millions of things, you're waiting through the stuff that's in the immediate neighborhood of where the >>Problem is. Yeah. To your point about data, you can't let it go unchecked. That's right. You gotta gotta understand that. And we were talking about containers again with, again with docker, you know, nuance point, but oh, scan your container. But not everyone's scanning the containers security nightmare, right? I mean, >>Well I think one of the things that I, I loved in reading the notes in preparation for you coming up is you've actually created cloud native observability with the goal of eliminating engineering burnout. And what you're talking about there is actually the cognitive burden of when things happen. Yeah, for sure. We we're, you know, we're not just designing for when everything goes right, You need to be prepared for when everything goes wrong and that poor lonely individual in the middle of the night has, it's >>A tough job. >>Has to navigate that >>And, and observability is just one thing you gotta mean like security is another thing. So, so many more things have been piled on top of the developer in addition to actually creating the application. Right? It is. There is a lot. And you know, observably is one of those key things you need to do your job. So as much as, as much as we can make that easier, that's a better bit. Like there are so many things being piled on right now. >>That's the holy grail right there. Because they don't want to be doing exactly >>The work. Exactly. They're not observability experts. >>Exactly. And automating that in. So where do you guys weigh in on the automation wave? Everything's automation. Yeah. Is that kind of a hand waving or what's going on? What's the reality? What's actually happening? >>Yeah, I think automation I think is key. You hear a lot of ai ml ops there. I, I don't know if I really believe in that or having a machine self heal itself or anything like that. But I think automation is key because there are a lot of repeatable tasks in a lot of what you're doing. So once you detect that something goes wrong, generally if you've seen it before, you know what the fix is. So I think automation plays a key on the sense that once it's detected again the second time, the third time, okay, I know what I did the previous time, let, let's make sure we can do that again. So automation I think is key. I think it helps a lot with the burnout. I dunno if I'd go as far as the >>Same burnout's a big deal. >>Well there's an example again in the, in the stuff we're releasing this week, a new feature we call query accelerator. That's a form of automation. Problem is you got all this data, mountain of data, put you in the right context so you're at least in the right neighborhood, but now you need to query it. You gotta get the data to actually inform the specific problem you're trying to solve. And the burden on the developer in that situation is really high. You have to know what you're looking for and you have to know how to efficiently ask for it. So you're not waiting for a long time and >>We >>Built a feature, you tell us what you want, we will figure out how to get it for you efficiently. That's the kind of automation that we're focused on. That's actually a good service. How can we, it >>Sounds >>Blissful. How can we accelerate and optimize what you were gonna do anyway, rather than trying to read your mind or predict the future. >>Yes, >>Savannah, some community forward. 
Yeah, I, I'm, so I'm curious, you clearly lead with a lot of empathy, both of you, and putting yourself in the mind of the developer, what's that like for you from a product development standpoint? Are you doing a lot of community engagement? Are you talking to developers to try and anticipate what they're gonna be needing next in terms of your offering? Or how does that work >>For you? Oh, for sure. So, so I run product, I have a lot of product managers who work for me. Somebody that I used to work with, she was accusing me of being, what she called, an anthropologist of a product manager. I >>Get these very good design school vibes from you, both of you, which >>Is, and the reason why she said that is the way you do this: you go and you live with them in order to figure out what a day in their life is really like, what the job is really like, what's easy, what's hard. And that's what we try to aim at and try to optimize for. So that's very much the way that we do all of >>Our work. And that really also highlights the fact that we're in a market that requires acute, realtime data from the customer. Cause it's, and it's all new data. Well >>Yeah, it's all changing. The tools change every day. I mean, if we're not watching how, and >>So to your point, you need it in real time as well. The whole point of moving to cloud native is you have a reliable product or service there. And if you need to wait a few minutes to even know that something's wrong, you've already lost at that point. You've already lost a ton of customers, potentially. You've already lost a ton of business. You know, to your point about the community earlier, one other thing we're trying to do is also give back to the community a little bit. So actually, two days ago we just announced the open sourcing of a tool that we've been using in our product for a very long time. Of course our product is a paid product, right? But we actually open sourced a part of that tool so that the broader community can benefit as well. And that tool, which, which tool is that? It's called PromLens. Prometheus is the open source metrics project that everybody uses, and this is a query builder that helps developers understand how to create queries in a much more efficient way. We've had it in our product for a long time, but we're like, let's give that back to the community so that the broader community of developers out there can have a much easier time creating these queries as well. What's >>Been the feedback? >>Well, it's only been two days, so I'm not, I'm not exactly sure. I imagine >>It's great. They're probably playing with it right now. >>Exactly. Exactly. Exactly. For sure. I imagine. Great. >>Yeah, you guys mentioned burnout before, and we heard this a lot; now you mentioned, in terms of data, we've been hearing and reporting about it in the security world, which is also data specific. Observability ties right into security. Yep. How does a company figure out, first of all, burnout's a big problem. There's more and more data coming. It's like it doesn't stop, and the breaches are coming too. How does a company know that their observability strategy is broken? Are there signs of, you know, burnout? Are there signs of breaches? I mean, what are some of the telltale signs that if I'm a CSO I go, you know what, maybe I should check out Chronosphere.
When do, when do you guys march in and go, we're a perfect fit to solve that problem? >>Yeah, I would say, you know, because we're focused on the observability side, less so on the security side, some of those signals are: how many incidents do you have? How many outages do you have? What's the occurrence of these things, and how long does it take to recover from these particular incidents? How >>Upset are we finding customers? >>Upset are >>Customers. Exactly. >>And, and one trend we're seeing >>Not churn happening. Exactly. >>And one trend we're seeing in the industry is that 68% of companies are saying that they're having more incidents over time. Right. And if you have more incidents, you can imagine more engineers are being paged, are being woken up, and they're being put under more stress. And one thing you said that's very interesting is, you know, I think generally in the observability world, you ideally actually don't want to figure out the problem when it goes wrong. Ideally what you want to do these days is figure out, how do I remediate this and get the business back to a running state as quickly as I can? And then, when the business isn't burning, let me go and figure out what the underlying root cause is. So the strategy there has changed as well from the APM days. Like, I don't want to figure out the problem in real time. I wanna make sure my business and my service is running as it should be. And then separately from that, once it is, then I wanna go >>Understand that. Assume it's gonna happen, be ready to close that, isolate >>The >>Fire. Exactly. Exactly. And, and you know, you can imagine the whole movement towards CI/CD. Generally, when you don't touch a system, nothing goes wrong. You deploy a change; the first thing you do is not figure out why your change broke things. Roll that change back, get your business back to a good state, and then take the time, when you're not under pressure, when you're not gonna be burnt out, to figure out what it was about my change that broke everything. So, yeah. Got >>It. >>Well, it's not surprising that you've added some new exciting customers to the roster. We have. We have. You want to tell the audience who they might >>Be? Yes. There have been a few big names in the last year we're pretty excited about. One is Snapchat, I think everybody knows that application. And one is Robinhood. So you know, you can imagine very large, I'll say tech-forward companies that have completed their migrations to cloud native, or are well on their way to cloud native, and we like helping those customers for sure. We also like helping a lot of startups out there, cause they start off in the cloud native world. Like, if you're gonna build a business today, you're gonna use Kubernetes from day one. Right? But what we're actually, interestingly, seeing more and more of is traditional enterprises who are just pretty early on in their cloud native migration now starting to adopt cloud native at scale, and now they're running into the same problems. >>Well said. The Gartner data last year was something like 85% of companies had not made that transformation. Right. So, and that, I mean, that's looking at larger scale companies, obviously. >>A hundred, you're >>Right on the pulse. They >>Haven't finished it, but a lot of them are starting it now. So we're seeing pilot >>Projects, testing and cadence.
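Stepping back to Martin's remediate-first point from a moment ago, roll the change back and restore service before digging into root cause, that workflow can be sketched as a small alert handler. This is an illustrative outline only, not Chronosphere functionality; the `deployer` client and its methods are hypothetical stand-ins for whatever deployment tooling a team actually runs.

```python
# Illustrative sketch of a remediate-first alert handler (hypothetical client API).
import datetime
import json

def handle_alert(alert: dict, deployer) -> dict:
    """Restore service first; capture context for root-cause analysis later."""
    service = alert["service"]
    incident = {
        "service": service,
        "alert": alert["name"],
        "started": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

    last = deployer.last_deployment(service)          # hypothetical call
    if last is not None:
        deployer.rollback(service, to=last.previous)  # hypothetical call
        incident["action"] = f"rolled back {last.version} -> {last.previous}"
    else:
        incident["action"] = "no recent deployment found; escalate to on-call"

    # Persist what was known at the time so root cause can be studied off the clock.
    with open(f"incident-{service}.json", "w") as fh:
        json.dump(incident, fh, indent=2)
    return incident
```

The design choice is simply ordering: the cheap, reversible action (rolling back) happens while the engineer is under pressure, and the expensive analysis happens later from the saved context.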
And I imagine it's a bit of a different pace when you're working with some of those transforming companies versus those startups that are just getting rolling. I >>Love it. And you know, you have a lot of legacy use cases you have to handle. Like, if you're a startup, you can imagine there's no baggage, there's no legacy. You're just starting brand new, right? If you're a large enterprise, you have to really think about, okay, well, how do I get my active business moved over? But yeah. >>Yeah. And how do you guys see the whole cloud native scale moving with the hyperscalers? Like AWS? You've got a lot of multi-cloud conversation. We call it supercloud in our narrative, but there's now this new, we're gonna get some common services being identified. We're seeing a lot more people recognize, with Kubernetes, that hey, you know what, you could get some common services maybe across clouds, with SOS doing storage. We got MinIO doing some storage. Yeah. Cloudflare. I mean, starting to see a lot more non-hyperscaler systems. >>Yeah, I mean, I think that's the pattern there, and I think, especially for enterprises at the top end, right? You see a lot of companies are trying to de-risk by saying, hey, I don't want to bet maybe on one cloud provider, I sort of need to hedge my bets a little bit. And Kubernetes is a great tool to go do that. You can imagine some of these other tools you mentioned are a great way to do that. Observability is another great way to do that. The cloud providers have their observability or monitoring tooling, but it's really optimized just for that cloud provider, just for those services there. So if you're really trying to run either your custom applications or a multi-cloud approach, you really can't use one cloud provider's solution to go solve that problem. >>Do you guys see yourselves as that unifying >>Layer? We, we, we are a little bit, as that layer, because it's agnostic to each of the cloud providers. And the other thing is, we actually like to understand where our customers run and then try to run their observability stack on a different cloud provider. Cuz we use the cloud ourselves. We're not running our own data centers, of course, but it's an interesting thing where everybody has a common dependency on the cloud provider. So when us-east-1, I hate to call them out, but when us-east-1 goes down, imagine, half the internet goes down, right? And that's the time that you actually need observability, right? Seriously. And every other tool there. So we try to find out where you run, and then we try to actually run you elsewhere. But yeah, >>I like that. And nobody wants to see the ugly bits anyway. Exactly. And we all know who we're all using when everything >>Exactly. Exactly, exactly. >>Kicks people off the internet. So it's very, I, I really love that. Martin, Jeff, thank you so much for being here with us. Thank you. What's next? How do people find out, how do they get one of the jobs, since you're 3x-ing your >>Employee growth? We're hiring a lot. I think the best thing is to go check out our website, chronosphere.io. You'll find out a lot about our careers, our job openings, the culture we're trying to build here. Find out a lot about the product as well. If you do have an observability problem, that's the best place to go to find out about that as well. Right. >>Fantastic.
Well, if you want to join a quarter billion, a quarter-of-a-billion-dollar rocket ship over here, and certainly a unicorn, get in touch with Martin and Jeff. John, thank you so much for joining me for this very special edition, and thank all of you for tuning in to the Cube, live here from Motor City. My name's Savannah Peterson and we'll see you in a little bit. >>Robert Herjavec. People obviously know you from Shark Tank, but the Herjavec Group has been really laser focused on cyber security. So I actually helped to bring my.

Published Date : Oct 26 2022



AMD & Oracle Partner to Power Exadata X9M


 

(upbeat jingle) >> The history of Exadata and the platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club for all you sailing buffs; rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI?

>> So as you know, Exadata is designed top down to be the best possible platform for database. It has a lot of unique capabilities: we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We're always wanting to get all the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M is: it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. We don't want to be the limit; the database software should not be the limit, it should be the actual physical limits of the hardware. That's what X9M is all about.

>> Why AMD chips in X9M, Juan?

>> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple, we just think the performance is outstanding in the product.

>> Mark, your career is quite amazing. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud?

>> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to X86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that: it uses a seven nanometer CPU core that was designed to really bring throughput, bring really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor and it's implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company's focus years ago. And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor.

>> Yeah. It's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration?

>> Well, here's where the collaboration really comes to play. You think about a processor and, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They showed the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could in fact really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency through RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking, then you can adjust; we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. We have the expertise on the CPU engineering, the Oracle Exadata team knows innately what the customers need to get the most out of their platform, and when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see.

>> Mark, last question for you is how do you see this relationship evolving in the future? Can you share a little roadmap for the audience?

>> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. Our current third generation EPYC, which is what we call our EPYC server offerings, is the 7003 series, and that third gen is in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities because there's CXL, Compute Express Link, that'll expand even more memory opportunities. And I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward.

>> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments?

>> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out. And then we were on that for a while and then things started stagnating, but in the last two or three years AMD has been leading this; there's been a dramatic acceleration in innovation, so it's very exciting to be part of this, and customers are getting a big benefit from this.

>> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time.

>> Thanks. Glad to be here.

>> All right, and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)
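The interview describes the tuning loop only in words: take workload traces, adjust parameters, and compare against the generic baseline. As a rough illustration of that baseline-versus-tuned comparison, here is a minimal Python sketch; the metric names and figures are invented placeholders, not measured Exadata or EPYC numbers.

```python
# Illustrative only: compare a tuned configuration against a generic baseline,
# the way the Oracle and AMD teams describe validating parameter changes.
# All figures below are made-up placeholders.
baseline = {"oltp_txn_per_sec": 100_000, "analytic_scan_gb_per_sec": 40.0, "rdma_latency_us": 19.0}
tuned    = {"oltp_txn_per_sec": 135_000, "analytic_scan_gb_per_sec": 52.0, "rdma_latency_us": 14.0}

def gain_pct(metric: str) -> float:
    """Percent improvement over baseline; for latency metrics, lower is better."""
    b, t = baseline[metric], tuned[metric]
    return (b - t) / b * 100.0 if metric.endswith("_us") else (t - b) / b * 100.0

for metric in baseline:
    print(f"{metric}: {gain_pct(metric):+.1f}% vs. baseline")
```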

Published Date : Sep 22 2022


Nayaki Nayyar and Nick Warner | Ivanti & SentinelOne Partner to Revolutionize Patch Management


 

>> Hybrid work is the new reality. According to the most recent survey data from Enterprise Technology Research, CIOs expect that 65% of their employees will work either fully remote or in a hybrid model, splitting time between remote and in office. Remote, of course, can be anywhere: it could be home, it could be at the beach, overseas, literally anywhere there's internet. So it's no surprise that these same technology executives cite security as their number one priority, well ahead of other critical technology initiatives including collaboration software, cloud computing and analytics, which round out the top four in the ETR survey. Now, as we've reported, securing endpoints was important prior to the pandemic, but the explosion in the past two-plus years of remote work and corollary device usage has made the problem even more acute. And let's face it, managing sprawling IT assets has always been a pain. Patch management, for example, has been a nagging concern for practitioners, and with ransomware attacks on the rise, it's critical that security teams harden IT assets throughout their life cycle, staying current and constantly staying on top of vulnerabilities within the threat surface. Welcome to this special program on The Cube, Enable and Secure the Everywhere Workplace, brought to you by Ivanti. In this program we highlight key partnerships between Ivanti and its ecosystem to address critical problems faced by technology and security teams. In our first segment we explore a collaboration between Ivanti and SentinelOne, where the two companies are teaming to simplify patch management. My name is Dave Vellante and I'll be your host today, and with me are Nayaki Nayyar, who's the president and chief product officer at Ivanti, and Nick Warner, president of the security group at SentinelOne. Welcome, Nayaki and Nick. Nayaki, good to have you back in The Cube. Great to see you guys.

>> Thank you, thank you, Dave. Really good to be back on The Cube. I'm a veteran of The Cube, so thank you for having us, and I look forward to a great discussion today.

>> You bet, thanks. Okay, and Nick, good to have you on as well. What do we need to know about this partnership, please?

>> We are super excited about this partnership. Nick, thank you for joining us on this session today. When you look at Ivanti, Ivanti has been a leader in two big segments. We are a leader in unified endpoint management; with the acquisition of MobileIron, we now have holistic end-to-end management of all devices, be it Windows, Linux, Mac, iOS, you name it. So we have that seamless single pane of glass to manage all devices. But in addition to that, we are also a leader in risk-based patch management. Dave, that's why we are very excited about this partnership with SentinelOne, where now we can combine the strength we have in risk-based patch management with SentinelOne's XDR platform and truly help address what I call the need of the hour with our customers: for them to be able to detect vulnerabilities and be able to remediate them, proactively remediate them. So that's why we are super excited about this partnership, and Nick, I would love to hand it over to you to talk about the partnership and the journey ahead of us.

>> Thanks. From SentinelOne's perspective, we see autonomous vulnerability assessment and remediation as really necessary given the evolution in the sophistication, the volume and the ferocity of threats out there. What's really key is being able to remediate risks at machine speed and also identify vulnerability exposure in real time. If you look traditionally at vulnerability scanning and patch management, they've really always been two separate things, and when things are separate they take time between the two: coordination, communication. What we're looking to do with our Singularity XDR platform is holistically deliver one unified solution that can identify threats, identify vulnerabilities, and automatically and autonomously leverage patch management to much better protect our customers.

>> So maybe that's why patch management is such a challenge for many organizations, because, as you described, Nick, it's sort of siloed from security, and those worlds are coming together. But maybe you guys could address the specific problems that you're trying to solve with this collaboration.

>> Yeah. If you look at it at a holistic level, Dave, today cybercrime is at catastrophic heights, and this is not just a CIO or a CISO issue, this is a board issue. Every organization, every enterprise is addressing this at the board level. And when you double-click on it, one of the challenges that we have heard from our customers over and over again is the complexity and the manual processes that are in place for remediation, for patching all their operating systems, their applications, their third-party apps. That is where it's very time consuming, very complex, very cumbersome, and the question is how do we help them automate it? How do we help them remove those manual processes and autonomously remediate? Which is where this partnership between Ivanti and SentinelOne helps organizations bring this autonomous nature, bring those proactive, predictive capabilities to detect an issue, prioritize that issue based on what we call risk-based prioritization, and autonomously remediate that issue. That's where this partnership really helps our customers address the top concerns they have in cybercrime, in cybersecurity.

>> Got it. So prioritization, automation. Nick, maybe you could address what the keys are. I mean, you've got to map vulnerabilities to software updates; how do you make sure there's not a big lag between your patch and the known vulnerabilities, and you've got this diverse set of IT portfolio assets. How do you manage all that?

>> It's a great question, and I think really the number one issue around this topic is that security teams and IT teams are facing the really daunting task of identifying, all the time, every day, all the vulnerabilities in their ecosystem. And the biggest problem with this is how they get context and priority. I think what people have come to realize through the years of dealing with patch management and vulnerability scanning is that patching without the context of the possible impact or priority of that risk really comes down to busy work. What's so important in a totally interconnected world with attacks happening at machine speed is being able to take that precious asset that we call time and make sure you properly prioritize it. How we're doing it from the SentinelOne Singularity XDR perspective is by leveraging autonomous threat information and layering that against vulnerability information to properly view, through that lens, the highest priority threats and vulnerabilities that you need to patch, and then using our single-agent technology to autonomously remediate and patch those vulnerabilities, whether it's on a Mac, a PC, a server, a cloud workload. And the beauty of our solution is it gives you proper clarity, so you can see the impact of vulnerabilities each and every day in your environment and know that you're doing the right thing in the right order.

>> Got it. Okay, so the context gives you the risk profile, allows you to prioritize, and then of course you can remediate. What else should we know about this joint solution, in terms of what it is, how I engage, any other detail on how it addresses the problem specifically?

>> Yeah, it's all about the race against time, Dave. It's how we help our customers detect the vulnerability, prioritize it and remediate it before the attackers are able to weaponize those vulnerabilities and launch an attack. It's really how we help our customers be a lot more proactive and predictive and address those vulnerabilities before the attackers get access to them. That's where our joint solution comes in. In fact, I always say, whether it's EDR or MDR or XDR, the R portion of that is where Ivanti comes in. Our Neurons for Patch Management, what we call Neurons, that risk-based patch management combined with SentinelOne's XDR is where we truly bring the combined solution to life. So the R is where Ivanti really plays a big part in the joint solution.

>> Yeah, absolutely, the response. I mean, people I think all agree you're going to get infiltrated; it's how you respond to it. You know, the thing about this topic is that when you make a business case, a lot of times you'll go to the CFO and say, hey, if we don't do this we're going to be in big trouble, and so it's this fear factor, and I get that, it's super important. But are there other measurements of success that you can share? In other words, how are customers going to determine the value of this joint solution?

>> It's the mean time to repair. Let me go first, Nick, and then I'm sure you have your own metrics and how you're measuring success. It's about how we can detect an issue and repair that issue, reducing that mean time to repair as much as possible and making it as real time as possible for our customers. That's the true outcome, the success, and the metric that customers can track, measure and continuously improve on. Nick, you want to add to that?

>> For sure. You make some great points, Nayaki, and what I would add is that the SentinelOne Singularity platform is known for automated and autonomous detection, prevention, response and remediation across threats. If you look traditionally at patch management or vulnerability assessment, they're typically deployed and run as point-in-time solutions. What I mean by that is that they're scans and re-scans. The way that advanced EDR and XDR solutions such as the SentinelOne Singularity platform work is that we're constantly recording everything that's happening on all of your systems in real time. So what we do is literally eliminate the window of opportunity between a patch being needed, a vulnerability being discovered, and you knowing that that vulnerability needs to be patched in your environment. You don't have to wait for that 12 or 24-hour window to scan for vulnerabilities; you will immediately know it in your network. You'll also know the security implications of that vulnerability, so you know when and how to prioritize, and furthermore you can take autonomous patching measures against it. So at the end of the day, the name of the game in security is time, and it's about reducing that window of opportunity for the adversaries, for the threat actors, and this is an epic leap forward in doing that for our customers.

>> And that capability, Nick, is it a function of your powerful agent, or is it architecture? Where does that come from?

>> That's a great question. It's a combination of a couple of things. The first is our agent technology, which performs constant monitoring of every system, every behavior, every process running on all your systems, live and in real time. This is not a batch process that kicks up once a day; this is always running in the background. So the moment a new application is installed, the moment a new application version is deployed, we know about it, we record it instantaneously. If you think about that and layer it against getting best-in-class vulnerability information from a partner like Ivanti, and then also being able to deploy patch management against that, you can start to see how you're applying that in real time in your environment. And the last thing I'd like to add is that, because we're watching everything and then layering it against threat intel and context using our proprietary machine learning technology, that idea of being able to prioritize and escalate is critical. Because if you talk to security providers, there are a couple of different challenges they're facing, and I would say the top two are alert fatigue and human-power limitations. No security team has enough people on their team, and no security team has an absence of alerts. So the fact that we can prioritize alerts, surface the ones that are most important, give context to them, and also save precious hours of their personnel's time by doing this autonomously and automatically, we're really killing two birds with one stone.

>> That's great, there's the business case right there. You just laid out some other things that we can measure, right? It all comes back to the data, doesn't it? We've got to go, but I'll give you the last word.

>> Yeah, we are super excited about this partnership. Like Nick said, we believe in how we can help our customers discover all the assets they have, manage those assets, but a big chunk of it is how we help them secure it: secure their devices, the applications, the data that's on those devices, the endpoints, and being able to provide an experience, a service experience, at the end of the day, so that end users don't have to worry about security. You don't have to think about security; it should be embedded, it should be autonomous, and it should be contextually personalized. That's the journey we are on, and thank you, Nick, for this great partnership. We look forward to a great journey ahead of us. Thank you.

>> Yeah, thanks to both of you. Nick, appreciate it. Okay, keep it right there. After this quick break we're going to be back to look at how Ivanti is working with other partners to simplify and harden the anywhere workplace. You're watching The Cube, your leader in enterprise and emerging tech coverage. [Music]
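The conversation describes risk-based prioritization and mean time to repair only in general terms. As a rough, hypothetical sketch of the idea, not Ivanti's or SentinelOne's actual scoring, prioritization typically weighs severity, evidence of active exploitation, and asset criticality, and the outcome is tracked as MTTR. All field names, weights and data below are invented for illustration.

```python
# Hypothetical illustration of risk-based patch prioritization and MTTR.
# Nothing here is drawn from Ivanti Neurons or SentinelOne Singularity.
from datetime import datetime, timezone

vulns = [
    {"cve": "CVE-2022-0001", "cvss": 9.8, "actively_exploited": True,  "asset_criticality": 3,
     "detected": datetime(2022, 9, 1, tzinfo=timezone.utc), "patched": datetime(2022, 9, 2, tzinfo=timezone.utc)},
    {"cve": "CVE-2022-0002", "cvss": 6.5, "actively_exploited": False, "asset_criticality": 1,
     "detected": datetime(2022, 9, 1, tzinfo=timezone.utc), "patched": None},
]

def risk_score(v):
    # Severity, weighted up when exploitation is observed and the asset matters more.
    return v["cvss"] * (2.0 if v["actively_exploited"] else 1.0) * v["asset_criticality"]

# Patch queue: highest risk first, rather than scan order.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["cve"], round(risk_score(v), 1))

# Mean time to repair over remediated findings, the outcome metric cited above.
repaired = [v for v in vulns if v["patched"]]
mttr_hours = sum((v["patched"] - v["detected"]).total_seconds() for v in repaired) / 3600 / len(repaired)
print(f"MTTR: {mttr_hours:.1f} hours")
```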

Published Date : Sep 16 2022


Oracle & AMD Partner to Power Exadata X9M


 

[Music] the history of exadata in the platform is really unique and from my vantage point it started earlier this century as a skunk works inside of oracle called project sage back when grid computing was the next big thing oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve and i remember the oracle hp database machine which was announced at oracle open world almost 15 years ago and then exadata kept evolving after the sun acquisition it became a platform that had tightly integrated hardware and software and today exadata it keeps evolving almost like a chameleon to address more workloads and reach new performance levels last april for example oracle announced the availability of exadata x9m in oci oracle cloud infrastructure and introduced the ability to run the autonomous database service or the exa data database service you know oracle often talks about they call it stock exchange performance level kind of no description needed and sort of related capabilities the company as we know is fond of putting out benchmarks and comparisons with previous generations of product and sometimes competitive products that underscore the progress that's being made with exadata such as 87 percent more iops with metrics for latency measured in microseconds mics instead of milliseconds and many other numbers that are industry-leading and compelling especially for mission-critical workloads one thing that hasn't been as well publicized is that exadata on oci is using amd's epyc processors in the database service epyc is not eastern pacific yacht club for all your sailing buffs rather it stands for extreme performance yield computing the enterprise grade version of amd's zen architecture which has been a linchpin of amd's success in terms of penetrating enterprise markets and to focus on the innovations that amd and oracle are bringing to market we have with us today juan loyza who's executive vice president of mission critical technologies at oracle and mark papermaster who's the cto and evp of technology and engineering at amd juan welcome back to the show mark great to have you on thecube and your first appearance thanks for coming on yep happy to be here thank you all right juan let's start with you you've been on thecube a number of times as i said and you've talked about how exadata is a top platform for oracle database we've covered that extensively what's different and unique from your point of view about exadata cloud infrastructure x9m on oci yeah so as you know exadata it's designed top down to be the best possible platform for database uh it has a lot of unique capabilities like we make extensive use of rdma smart storage we take advantage of you know everything we can in the leading uh hardware platforms and x9m is our next generation platform and it does exactly that we're always wanting to be to get all the best that we can from the available hardware that our partners like amd produce and so that's what x9 in it is it's faster more capacity lower latency more ios pushing the limits of the hardware technology so we don't want to be the limit the software the database software should not be the limit it should be uh the actual physical limits of the hardware and that that's what x9m is all about why won amd chips in x9m uh yeah so we're we're uh introducing uh amd chips we think they provide outstanding performance uh both for oltp and for analytic workloads and it's really that simple we just think that performance is outstanding in the 
product yeah mark your career is quite amazing i've been around long enough to remember the transition to cmos from emitter coupled logic in the mainframe era back when you were at ibm that was an epic technology call at the time i was of course steeped as an analyst at idc in the pc era and like like many witnessed the tectonic shift that apple's ipod and iphone caused and the timing of you joining amd is quite important in my view because it coincided with the year that pc volumes peaked and marked the beginning of what i call a stagflation period for x86 i could riff on history for hours but let's focus on the oracle relationship mark what are the relevant capabilities and key specs of the amd chips that are used in exadata x9m on oracle's cloud well thanks and and uh it's really uh the basis of i think the great partnership that we have with oracle on exadata x9m and that is that the amd technology uses our third generation of zen processors zen was you know architected to really bring high performance you know back to x86 a very very strong road map that we've executed you know on schedule to our commitments and this third generation does all of that it uses a seven nanometer cpu that is a you know core that was designed to really bring uh throughput uh bring you know really high uh efficiency uh to computing uh and just deliver raw capabilities and so uh for uh exadata x9m uh it's really leveraging all of that it's it's a uh implemented in up to 64 cores per socket it's got uh you know really anywhere from 128 to 168 pcie gen 4 io connectivity so you can you can really attach uh you know all of the uh the necessary uh infrastructure and and uh storage uh that's needed uh for exadata performance and also memory you have to feed the beast for those analytics and for the oltp that juan was talking about and so it does have eight lanes of memory for high performance ddr4 so it's really as a balanced processor and it's implemented in a way to really optimize uh high performance that that is our whole focus of uh amd it's where we've you know reset the company focus on years ago and uh again uh you know great to see uh you know the the super smart uh you know database team at oracle really a partner with us understand those capabilities and it's been just great to partner with them to uh you know to you know enable oracle to really leverage the capabilities of the zen processor yeah it's been a pretty amazing 10 or 11 years for both companies but mark how specifically are you working with oracle at the engineering and product level you know and what does that mean for your joint customers in terms of what they can expect from the collaboration well here's where the collaboration really comes to play you think about a processor and you know i'll say you know when one's team first looked at it there's general benchmarks and the benchmarks are impressive but they're general benchmarks and you know and they showed you know the i'll say the you know the base processing capability but the partnership comes to bear uh when it when it means optimizing for the workloads that exadata x9m is really delivering to the end customers and that's where we dive down and and as we uh learn from the oracle team we learned to understand where bottlenecks could be uh where is there tuning that we could in fact in fact really boost the performance above i'll say that baseline that you get in the generic benchmarks and that's what the teams have done so for instance you look at you know optimizing latency to rdma 
you look at just throughput optimizing throughput on otp and database processing when you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking and then you can adjust we have you know thousands of parameters that can be adjusted for a given workload and that's again that's the beauty of the partnership so we have the expertise on the cpu engineering uh you know oracle exudated team knows innately what the customers need to get the most out of their platform and when the teams came together we actually achieved anywhere from 20 percent to 50 gains on specific workloads it's really exciting to see so okay so so i want to follow up on that is that different from the competition how are you driving customer value you mentioned some you know some some percentage improvements are you measuring primarily with with latency how do you look at that well uh you know we are differentiated with the uh in the number of factors we bring a higher core density we bring the highest core density certainly in x86 and and moreover what we've led the industry is how to scale those cores we have a very high performance fabric that connects those together so as as a customer needs more cores again we scale anywhere from 8 to 64 cores but what the trick is uh that is you add more cores you want the scale the scale to be as close to linear as possible and so that's a differentiation we have and we enable that again with that balanced computer of cpu io and memory that we design but the key is you know we pride ourselves at amd of being able to partner in a very deep fashion with our customers we listen very well i think that's uh what we've had the opportunity uh to do with uh juan and his team we appreciate that and and that is how we got the kind of performance benefits that i described earlier it's working together almost like one team and in bringing that best possible capability to the end customers great thank you for that one i want to come back to you can both the exadata database service and the autonomous database service can they take advantage of exadata cloud x9m capabilities that are in that platform yeah absolutely um you know autonomous is basically our self-driving version of the oracle database but fundamentally it is the same uh database course so both of them will take advantage of the tremendous performance that we're getting now you know when when mark takes about 64 cores that's for chip we have two chips you know it's a two socket server so it's 128 128-way processor and then from our point of view there's two threads so from the database point there's 200 it's a 256-way processor and so there's a lot of raw performance there and we've done a lot of work with the amd team to make sure that we deliver that to our customers for all the different kinds of workload including otp analytics but also including for our autonomous database so yes absolutely allah takes advantage of it now juan you know i can't let you go without asking about the competition i've written extensively about the big four hyperscale clouds specifically aws azure google and alibaba and i know that don't hate me sometimes it angers some of my friends at oracle ibm too that i don't include you in that list but but i see oracle specifically is different and really the cloud for the most demanding applications and and top performance databases and not the commodity cloud which of course that angers all my friends at those four companies so i'm ticking everybody 
off so how does exadata cloud infrastructure x9m compare to the likes of aws azure google and other database cloud services in terms of oltp and analytics value performance cost however you want to frame it yeah so our architecture is fundamentally different uh we've architected our database for the scale out environment so for example we've moved intelligence in the storage uh we've put uh remote direct memory access we put persistent memory into our product so we've done a lot of architectural changes that they haven't and you're starting to see a little bit of that like if you look at some of the things that amazon and google are doing they're starting to realize that hey if you're gonna achieve good results you really need to push some database uh processing into the storage so so they're taking baby steps toward that you know you know roughly 15 years after we we've had a product and again at some point they're gonna realize you really need rdma you really need you know more uh direct access to those capabilities so so they're slowly getting there but you know we're well ahead and what you know the way this is delivered is you know better availability better performance lower latency higher iops so and this is why our customers love our product and you know if you if you look at the global fortune 100 over 90 percent of them are running exit data today and even in the in our cloud uh you know over 60 of the global 100 are running exadata in the oracle cloud because of all the differentiated uh benefits that they get uh from the product uh so yeah we're we're well ahead in the in the database space mark last question for you is how do you see this relationship evolving in the future can you share a little road map for the audience you bet well first off you know given the deep partnership that we've had on exudate x9m uh it it's really allowed us to inform our future design so uh in our current uh third generation epic epyc is uh that is really uh what we call our epic server offerings and it's a 7003 third gen in and exudate x9m so what about fourth gen well fourth gen is well underway uh you know it and uh and uh you know ready to you know for the for the future but it incorporates learning uh that we've done in partnership with with oracle uh it's gonna have even more through capabilities it's gonna have expanded memory capabilities because there's a cxl connect express link that'll expand even more memory opportunities and i could go on so you know that's the beauty of a deep partnership as it enables us to really take that learning going forward it pays forward and we're very excited to to fold all of that into our future generations and provide even a better capabilities to one and his team moving forward yeah you guys have been obviously very forthcoming you have to be with with with zen and epic juan anything you'd like to add as closing comments yeah i would say that in the processor market there's been a real acceleration in innovation in the last few years um there was you know a big move 10 15 years ago when multi-core processors came out and then you know we were on that for a while and then things started staggering but in the last two or three years and amd has been leading this um there's been a dramatic uh acceleration in innovation in this space so it's very exciting to be part of this and and customers are getting a big benefit from this all right chance hey thanks for coming back in the cube today really appreciate your time thanks glad to be here all right thank 
you for watching this exclusive cube conversation this is dave vellante from thecube and we'll see you next time [Music]
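Juan's description of the per-node compute walks through the arithmetic verbally: two sockets, up to 64 cores per socket, two hardware threads per core. A quick sketch of that math, using only the figures stated in the interview; check Oracle's published specifications for the shipping configuration.

```python
# Back-of-the-envelope arithmetic from the interview's description of an
# Exadata X9M database server node. Purely illustrative.
sockets_per_node = 2
cores_per_socket = 64     # EPYC Milan, as described above
threads_per_core = 2      # SMT

physical_cores  = sockets_per_node * cores_per_socket      # 128
db_visible_cpus = physical_cores * threads_per_core        # 256, "256-way" as the database sees it
print(physical_cores, db_visible_cpus)
```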

Published Date : Sep 13 2022


Kumaran Siva, AMD | VMware Explore 2022


 

>> Good morning, everyone. Welcome to The Cube's day two coverage of VMware Explore 2022, live from San Francisco. Lisa Martin here with Dave Nicholson. We're excited to kick off day two of great conversations with VMware partners, customers, its ecosystem. We've got an alumni back with us: Kumaran Siva, corporate VP of business development from AMD, joins us. Great to have you on the program in person.

>> Great to be here. Yes, in person. Indeed. Welcome.

>> So the great thing yesterday, a lot of announcements, and AMD had an announcement with VMware, which we will unpack, but there's about 7,000 to 10,000 people here. People are excited, ready to be back, ready to be hearing from this community, which is so nice. Yesterday AMD announced it is optimizing the AMD Pensando distributed services card to run on VMware vSphere 8, which was announced yesterday. Tell us a little bit about that.

>> Yeah, no, absolutely. The Pensando SmartNIC DPU, what it allows you to do is it provides a whole bunch of capabilities, including offloads, including encryption, decryption. We can even do functions like compression. But with the combination of VMware Project Monterey and Pensando, what we're able to do is even do some of the vSphere offloads, actual integration of the hypervisor into the DPU card. It's pretty interesting and pretty powerful technology. We're pretty excited about it. I think this could potentially bring some of the cloud value, in terms of manageability, in terms of being able to take care of bare metal servers, and also better secure infrastructure, cloud-like techniques, into the mainstream on-premises enterprise.

>> Okay. Talk a little bit about the DPU, the data processing unit. They talked about it on stage yesterday, but help me understand that versus the CPU, GPU.

>> Yeah. So it's a different point, right? Normally you'd have the CPU and you'd have what we call a dumb networking card. And I say dumb, but it's just designed to go process packets, put them onto PCI and have the CPU do all of the packet processing, the virtual switching, all of those functions inside the CPU. What the DPU allows you to do is actually offload a bunch of those functions directly onto the DPU card. So it has a combination of these special purpose processors that are programmable with the language called P4, which is one of the key things that Pensando brings. It's a real easy to program, easy to use kind of set, so that some of our larger enterprise customers can actually go in and do some custom coding depending on what their network infrastructure looks like. But you can do things like the vSwitch in the DPU, not having to have all of that done on the CPU. So you free up some of the CPU cores, make your infrastructure run more efficiently, but probably even more importantly, it provides you with greater security, greater separation between the networking side and the CPU side.

>> So that's a key point, because a lot of us remember the era of the TOE, the TCP/IP offload engine NIC. This isn't simply offloading CPU cycles. This is actually providing a sort of isolation.
So that the network that's right, is the network has intelligence that is separate from the server. Is that, is that absolutely key? Is that absolutely >>Key? Yeah. That's, that's a good way of looking at it. Yeah. And that's, that's, I mean, if you look at some of the, the, the techniques used in the cloud, the, you know, this, this, this in fact brings some of those technologies into, into the enterprise, right. So where you are wanting to have that level of separation and management, you're able to now utilize the DPU card. So that's, that's a really big, big, big part of the value proposition, the manageability manageability, not just offload, but you know, kind of a better network for enterprise. Right. >>Right. >>Can you expand on that value proposition? If I'm a customer what's in this for me, how does this help power my multi-cloud organization? >>Yeah. >>So I think we have some, we actually have a number of these in real customer use cases today. And so, you know, folks will use, for example, the compression and the, sorry, the compression and decompression, that's, that's definitely an application in the storage side, but also on the, just on the, just, just as a, as a DPU card in the mainstream general purpose, general purpose server server infrastructure fleet, they're able to use the encryption and decryption to make sure that their, their, their infrastructure is, is kind of safe, you know, from point to point within the network. So every, every connected, every connection there is actually encrypted and that, that, you know, managing those policies and orchestrating all of that, that's done to the DPU card. >>So, so what you're saying is if you have DPU involved, then the server itself and the CPUs become completely irrelevant. And basically it's just a box of sheet metal at that point. That's, that's a good way of looking at that. That's my segue talking about the value proposition of the actual AMD. >>No, absolutely. No, no. I think, I think, I think the, the, the CPUs are always going to be central in this and look. And so, so I think, I think having, having the, the DPU is extremely powerful and, and it does allow you to have better infrastructure, but the key to having better infrastructure is to have the best CPU. Well, tell >>Us, tell >>Us that's what, tell us us about that. So, so I, you know, this is, this is where a lot of the, the great value proposition between VMware and AMD come together. So VMware really allows enterprises to take advantage of these high core count, really modern, you know, CPU, our, our, our, our epic, especially our Milan, our 7,003 product line. So to be able to take advantage of 64 course, you know, VMware is critical for that. And, and so what they, what they've been able to do is, you know, know, for example, if you have workloads running on legacy, you know, like five year old servers, you're able to take a whole bunch of those servers and consolidate down, down into a single node, right. And the power that VMware gives you is the manageability, the reliability brings all of that factors and allows you to take advantage of, of the, the, the latest, latest generation CPUs. >>You know, we've actually done some TCO modeling where we can show, even if you have fully depreciated hardware, like, so it's like five years old plus, right. 
And so, you know, the actual cost, you know, it's already been written off, but the cost just the cost of running it in terms of the power and the administration, you know, the OPEX costs that, that are associated with it are greater than the cost of acquiring a new set of, you know, a smaller set of AMD servers. Yeah. And, and being able to consolidate those workloads, run VMware, to provide you with that great, great user experience, especially with vSphere 8.0 and the, and the hooks that VMware have built in for AMD AMD processors, you actually see really, really good. It's just a great user experience. It's also a more efficient, you know, it's just better for the planet. And it's also better on the pocketbook, which is, which is, which is a really cool thing these days, cuz our value in TCO translates directly into a value in terms of sustainability. Right. And so, you know, from, from energy consumption, from, you know, just, just the cost of having that there, it's just a whole lot better >>Talk about on the sustainability front, how AMD is helping its customers achieve their sustainability goals. And are you seeing more and more customers coming to you saying, we wanna understand what AMD is doing for sustainability because it's important for us to work with vendors who have a core focus on it. >>Yeah, absolutely. You know, I think, look, I'll be perfectly honest when we first designed our CPU, we're just trying to build the biggest baddest thing that, you know, that, that comes out in terms of having the, the, the best, the, the number, the, the largest number of cores and the best TCO for our customers, but what it's actually turned out that TCO involves energy consumption. Right. And, and it involves, you know, the whole process of bringing down a whole bunch of nodes, whole bunch of servers. For example, we have one calculation where we showed 27, you know, like I think like five year old servers can be consolidated down into five AMD servers that, that ratio you can see already, you know, huge gains in terms of sustainability. Now you asked about the sustainability conversation. This I'd say not a week goes by where I'm not having a conversation with, with a, a, a CTO or CIO who is, you know, who's got that as part of their corporate, you know, is part of their corporate brand. And they want to find out how to make their, their infrastructure, their data center, more green. Right. And so that's, that's where we come in. Yeah. And it's interesting because at least in the us money is also green. So when you talk about the cost of power, especially in places like California, that's right. There's, there's a, there's a natural incentive to >>Drive in that direction. >>Let's talk about security. You know, the, the, the threat landscape has changed so dramatically in the last couple of years, ransomware is a household word. Yes. Ransomware attacks happened like one every 11 seconds, older technology, a little bit more vulnerable to internal threats, external threats. How is AMD helping customers address the security fund, which is the board level conversation >>That that's, that's, that's a, that's a great, great question. Look, I look at security as being, you know, it's a layered thing, right? I mean, if you talk to any security experts, security, doesn't, you know, there's not one component and we are an ingredient within the, the greater, you know, the greater scheme of things. A few things. One is we have partnered very closely with the VMware. 
They have enabled our SUV technology, secure encrypted virtualization technology into, into the vSphere. So such that all of the memory transactions. So you have, you have security, you know, at, you know, security, when you store store on disks, you have security over the network and you also have security in the compute. And when you go out to memory, that's what this SUV technology gives you. It gives you that, that security going, going in your, in your actual virtual machine as it's running. And so the, the, we take security extremely seriously. I mean, one of the things, every generation that you see from, from AMD and, and, you know, you have seen us hit our cadence. We do upgrade all of the security features and we address all of the sort of known threats that are out there. And obviously this threats, you know, kind of coming at us all the time, but our CPUs just get better and better from, from a, a security stance. >>So shifting gears for a minute, obviously we know the pending impossible acquisition, the announced acquisition of VMware by Broadcom, AMD's got a relationship with Broadcom independently, right? No, of course. What is, how's that relationship? >>Oh, it's a great relationship. I mean, we, we, you know, they, they have certified their, their, their niche products, their HPA products, which are utilized in, you know, for, for storage systems, sand systems, those, those type of architectures, the hardcore storage architectures. We, we work with them very closely. So they, they, they've been a great partner with us for years. >>And you've got, I know, you know, we are, we're talking about current generation available on the shelf, Milan based architecture, is that right? That's right. Yeah. But if I understand correctly, maybe sometime this year, you're, you're gonna be that's right. Rolling out the, rolling out the new stuff. >>Yeah, absolutely. So later this year, we've already, you know, we already talked about this publicly. We have a 96 core gen platform up to 96 cores gen platform. So we're just, we're just taking that TCO value just to the next level, increasing performance DDR, five CXL with, with memory expansion capability. Very, very neat leading edge technology. So that that's gonna be available. >>Is that NextGen P C I E, or has that shift already been made? It's >>Been it's NextGen. P C I E P C E gen five. Okay. So we'll have, we'll have that capability. That'll be, that'll be out by the end of this year. >>Okay. So those components you talk about. Yeah. You know, you talk about the, the Broadcom VMware universe, those components that are going into those new slots are also factors in performance and >>Yeah, absolutely. You need the balance, right? You, you need to have networking storage and the CPU. We're very cognizant of how to make sure that these cores are fed appropriately. Okay. Cuz if you've just put out a lot of cores, you don't have enough memory, you don't have enough iOS. That's, that's the key to, to, to, you know, our approach to, to enabling performance in the enterprise, make sure that the systems are balanced. So you get the experience that you've had with, let's say your, you know, your 12 core, your 16 core, you can have that same experience in the 96 core in a node or 96 core socket. So maybe a 192 cores total, right? So you can have that same experience in, in a tune node in a much denser, you know, package server today or, or using Melan technology, you know, 128 cores, super, super good performance. 
You know, its super good experience it's, it's designed to scale. Right. And especially with VMware as, as our infrastructure, it works >>Great. I'm gonna, Lisa, Lisa's got a question to ask. I know, but bear with me one bear >>With me. Yes, sir. >>We've actually initiated coverage of this question of, you know, just hardware matter right anymore. Does it matter anymore? Yeah. So I put to you the question, do you think hardware still matters? >>Oh, I think, I think it's gonna matter even more and more going forward. I mean just, but it's all cloud who cares just in this conversation today. Right? >>Who cares? It's all cloud. Yeah. >>So, so, so definitely their workloads moving to the cloud and we love our cloud partners don't get me wrong. Right. But there are, you know, just, I've had so many conversations at this show this week about customers who cannot move to the cloud because of regulatory reasons. Yeah. You know, the other thing that you don't realize too, that's new to me is that people have depreciated their data centers. So the cost for them to just go put in new AMD servers is actually very low compared to the cost of having to go buy, buy public cloud service. They still want to go buy public cloud services and that, by the way, we have great, great, great AMD instances on, on AWS, on Google, on Azure, Oracle, like all of our major, all of the major cloud providers, support AMD and have, have great, you know, TCO instances that they've, they've put out there with good performance. Yeah. >>What >>Are some of the key use cases that customers are coming to AMD for? And, and what have you seen change in the last couple of years with respect to every customer needing to become a data company needing to really be data driven? >>No, that's, that's also great question. So, you know, I used to get this question a lot. >>She only asks great questions. Yeah. Yeah. I go down and like all around in the weeds and get excited about the bits and the bites she asks. >>But no, I think, look, I think the, you know, a few years ago and I, I think I, I used to get this question all the time. What workloads run best on AMD? My answer today is unequivocally all the workloads. Okay. Cuz we have processors that run, you know, run at the highest performance per thread per per core that you can get. And then we have processors that have the highest throughput and, and sometimes they're one in the same, right. And Ilan 64 configured the right way using using VMware vSphere, you can actually get extremely good per core performance and extremely good throughput performance. It works well across, just as you said, like a database to data management, all of those kinds of capabilities, DevOps, you know, E R P like there's just been a whole slew slew of applications use cases. We have design wins in, in major customers, in every single industry in every, and these, these are big, you know, the big guys, right? >>And they're, they're, they're using AMD they're successfully moving over their workloads without, without issue. For the most part. In some cases, customers tell us they just, they just move the workload on, turn it on. It runs great. Right. And, and they're, they're fully happy with it. You know, there are other cases where, where we've actually gotten involved and we figured out, you know, there's this configuration of that configuration, but it's typically not a, not a huge lift to move to AMD. And that's that I think is a, is a key, it's a key point. 
And we're working together with almost all of the major ISV partners. Right. And so just to make sure that, that, that they have run tested certified, I think we have over 250 world record benchmarks, you know, running in all sorts of, you know, like Oracle database, SAP business suite, all of those, those types of applications run, run extremely well on AMD. >>Is there a particular customer story that you think really articulates the value of running on AMD in terms of enabling bus, big business outcome, safer a financial services organization or healthcare organization? Yeah. >>I mean we, yeah, there's certainly been, I mean, across the board. So in, in healthcare we've seen customers actually do the, the server consolidation very effectively and then, you know, take advantage of the, the lower cost of operation because in some cases they're, they're trying to run servers on each floor of a hospital. For example, we've had use cases where customers have been able to do that because of the density that we provide and to be able to, to actually, you know, take, take their compute more even to the edge than, than actually have it in the, in those use cases in, in a centralized matter. The another, another interesting case FSI in financial services, we have customers that use us for general purpose. It, we have customers that use this for kind of the, the high performance we call it grid computing. So, you know, you have guys that, you know, do all this trading during the day, they collect tons and tons of data, and then they use our computers to, or our CPUs to just crunch to that data overnight. >>And it's just like this big, super computer that just crunches it's, it's pretty incredible. They're the, the, the density of the CPUs, the value that we bring really shines, but in, in their general purpose fleet as well. Right? So they're able to use VMware, a lot of VMware customers in that space. We love our, we love our VMware customers and they're able to, to, to utilize this, they use use us with HCI. So hyperconverge infrastructure with V VSAN and that's that that's, that's worked works extremely well. And, and, and our, our enterprise customers are extremely happy with that. >>Talk about, as we wrap things up here, what's next for AMD, especially AMD with VMwares VMware undergoes its potential change. >>Yeah. So there there's a lot that we have going on. I mean, I gotta say VMware is one of the, let's say premier companies in terms of, you know, being innovative and being, being able to drive new, new, interesting pieces of technology and, and they're very experimentive right. So they, we have, we have a ton of things going with them, but certainly, you know, driving pin Sando is, is very, it is very, very important to us. Yeah. I think that the whole, we're just in the, the cusp, I believe of, you know, server consolidation becoming a big thing for us. So driving that together with VMware and, you know, into some of these enterprises where we can show, you know, save the earth while we, you know, in terms of reducing power, reducing and, and saving money in terms of TCO, but also being able to enable new capabilities. >>You know, the other part of it too, is this new infrastructure enables new workloads. So things like machine learning, you know, more data analytics, more sophisticated processing, you know, that, that is all enabled by this new infrastructure. So we, we were excited. 
We think that we're on the precipice of, you know, going a lot of industries moving forward to, to having, you know, the next level of it. It's no longer about just payroll or, or, or enterprise business management. It's about, you know, how do you make your, you know, your, your knowledge workers more productive, right. And how do you give them more capabilities? And that, that is really, what's exciting for us. >>Awesome Cooper. And thank you so much for joining Dave and me on the program today, talking about what AMD, what you're doing to supercharge customers, your partnership with VMware and what is exciting. What's on the, the forefront, the frontier, we appreciate your time and your insights. >>Great. Thank you very much for having me. >>Thank you for our guest and Dave Nicholson. I'm Lisa Martin. You're watching the cube live from VMware Explorer, 22 from San Francisco, but don't go anywhere, Dave and I will be right back with our next guest.

Published Date : Aug 31 2022


Super Data Cloud | Supercloud22


 

(electronic music) >> Welcome back to our studios in Palo Alto, California. My name is Dave Vellante, I'm here with John Furrier, who is taking a quick break. You know, in one of the early examples that we used of so called super cloud was Snowflake. We called it a super data cloud. We had, really, a lot of fun with that. And we've started to evolve our thinking. Years ago, we said that data was going to form in the cloud around industries and ecosystems. And Benoit Dogeville is a many time guest of theCube. He's the co-founder and president of products at Snowflake. Benoit, thanks for spending some time with us, at Supercloud 22, good to see you. >> Thank you, thank you, Dave. >> So, you know, like I said, we've had some fun with this meme. But it really is, we heard on the previous panel, everybody's using Snowflake as an example. Somebody how builds on top of hyper scale infrastructure. You're not building your own data centers. And, so, are you building a super data cloud? >> We don't call it exactly that way. We don't like the super word, it's a bit dismissive. >> That's our term. >> About our friends, cloud provider friends. But we call it a data cloud. And the vision, really, for the data cloud is, indeed, it's a cloud which overlays the hyper scaler cloud. But there is a big difference, right? There are several ways to do this super cloud, as you name them. The way we picked is to create one single system, and that's very important, right? There are several ways, right. You can instantiate your solution in every region of the cloud and, you know, potentially that region could be AWS, that region could be GCP. So, you are, indeed, a multi-cloud solution. But Snowflake, we did it differently. We are really creating cloud regions, which are superimposed on top of the cloud provider region, infrastructure region. So, we are building our regions. But where it's very different is that each region of Snowflake is not one instantiation of our service. Our service is global, by nature. We can move data from one region to the other. When you land in Snowflake, you land into one region. But you can grow from there and you can, you know, exist in multiple cloud at the same time. And that's very important, right? It's not different instantiation of a system, it's one single instantiation which covers many cloud regions and many cloud provider. >> So, we used Snowflake as an example. And we're trying to understand what the salient aspects are of your data cloud, what we call super cloud. In fact, you've used the word instantiate. Kit Colbert, just earlier today, laid out, he said, there's sort of three levels. You can run it on one cloud and communicate with the other cloud, you can instantiate on the clouds, or you can have the same service running 24/7 across clouds, that's the hardest example. >> Yeah. >> The most mature. You just described, essentially, doing that. How do you enable that? What are the technical enablers? >> Yeah, so, as I said, first we start by building, you know, Snowflake regions, we have today 30 regions that span the world, so it's a world wide system, with many regions. But all these regions are connected together. They are meshed together with our technology, we name it Snow Grid, and that makes it hard because, you know, Azure region can talk to a WS region, or GCP regions, and as a user for our cloud, you don't see, really, these regional differences, that regions are in different potentially cloud. 
When you use Snowflake, your presence as an organization can be in several regions and several clouds if you want, both geographic and cloud provider. >> So, I can share data irrespective of the cloud, and I'm in the Snowflake data cloud, is that correct? I can do that today? >> Exactly, and that's very critical, right? What we wanted is to remove data silos. When you instantiate a system in one single region, and that system is locked in that region, you cannot communicate with other parts of the world; you are locking data in one region. And we didn't want to do that. We wanted data to be distributed the way the customer wants it to be distributed across the world, and potentially shared at world scale. >> Does that mean if I'm in AWS in one region and I want to run a query on data that happens to be in an Azure cloud, I can actually execute that? >> So, yes and no. That is very expensive to do, because, generally, if you want to join data that lives in a different region and a different cloud, you need to move the data every time you join it. So, the way we do it is that you replicate the subset of data that you want to access from one region into the other region. You can create this data mesh, but data is replicated to make it very cheap and very performant too. >> And does Snow Grid have the metadata intelligence to actually do that? >> Yes, yes. >> Can you describe that a little? >> Yeah, Snow Grid is both a way to exchange metadata and a way to exchange data. Each region of Snowflake knows about all the other regions of Snowflake. Every time we create a new region, the metadata is distributed over our data cloud, so not only does every region know all the other regions, it knows every organization that exists in our cloud, where that organization is, and where its data can be replicated. And then, of course, it's also used as a way to exchange data at scale. I was just receiving an email from one of our customers who moved more than four petabytes of data, cross-region, cross-cloud provider, in a few days. It's a lot of data, so it takes some time to move, but they were able to do that completely online and switch over to the other region, which is very important also. >> So, one of the hardest parts about super cloud that I'm still struggling through is the security model. Because you've got the cloud as your sort of first line of defense, and now we've got multiple clouds with multiple first lines of defense, I've got a shared responsibility model across those clouds, and I've got different tools in each of those clouds. Do you take care of that? Where do you pick up from the cloud providers? Do you abstract that security layer? Do you bring in partners? It's very complicated. >> No, this is a great question. Security has always been the most important aspect of Snowflake since day one. This is the question that every customer of ours has: how can you guarantee the security of my data? So, we secure data really tightly in region. We have several layers of security. It starts by encrypting all data at rest, and that's very important. A lot of customers are not doing that, right? You hear of these attacks, for example, on cloud, where someone left their buckets exposed, and then you can access the data because it's not encrypted.
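To make the replication and refresh workflow Benoit described a moment ago more concrete, here is a rough sketch of promoting a database for replication in one Snowflake account and refreshing a secondary in another region or cloud. It follows Snowflake's documented database-replication commands, but the organization, account, and database names are hypothetical and exact syntax can vary by edition and version, so treat it as an illustration rather than a verified runbook.

```python
# Illustrative only: cross-region / cross-cloud database replication in Snowflake,
# driven from Python with the snowflake-connector-python package.
import snowflake.connector

def run(account, statements, user="ADMIN_USER", password="***", role="ACCOUNTADMIN"):
    conn = snowflake.connector.connect(account=account, user=user,
                                       password=password, role=role)
    try:
        cur = conn.cursor()
        for stmt in statements:
            cur.execute(stmt)
    finally:
        conn.close()

# 1) On the primary account (e.g., AWS us-east): allow MYDB to be replicated
#    to a second account in the same organization (all names are placeholders).
run("myorg-aws_primary", [
    "ALTER DATABASE MYDB ENABLE REPLICATION TO ACCOUNTS MYORG.AZURE_SECONDARY",
])

# 2) On the secondary account (e.g., Azure west-europe): create the replica
#    and pull the latest snapshot of data and metadata.
run("myorg-azure_secondary", [
    "CREATE DATABASE MYDB AS REPLICA OF MYORG.AWS_PRIMARY.MYDB",
    "ALTER DATABASE MYDB REFRESH",
])
```

The refresh step is what keeps the replicated subset cheap and fast to query locally, which is the trade-off Benoit describes instead of moving data on every cross-region join.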
So, we are encrypting everything at rest. We are encrypting everything in transit. So, a region is very secure. Now, you know, from one region, you never access data from another region in Snowflake. That's why, also, we replicate data. Now the replication of that data across region, or the metadata, for that matter, is really our least secure, so Snow Grid ensures that everything is encrypted, everything is, we have multiple encryption keys, and it's stored in hardware secure modules, so, we bit Snow Grid such that it's secure and it allows very secure movement of data. >> Okay, so, I know we kind of, getting into the technology here a lot today, but because super cloud is the future, we actually have to have an architectural foundation on which to build. So, you mentioned a bucket, like an S3 bucket. Okay, that's storage, but you also, for instance, taking advantage of new semi-conductor technology. Like Graviton, as an example, that drives efficiency. You guys talk about how you pass that on to your customers. Even if it means less revenue for you, so, awesome, we love that, you'll make it up in volume. And, so. >> Exactly. >> How do you deal with the lowest common denominator problem? I was talking to somebody the other day and this individual brought up what I thought was a really good point. What if we, let's say, AWS, have the best, silicon. And we can run the fastest and the least expensive, and the lowest power. But another cloud provider hasn't caught up yet. How do you deal with that delta? Do you just take the best of and try to respect that? >> No, it's a great question. I mean, of course, our software is extracting all the cloud providers infrastructure so that when you run in one region, let's say AWS, or Azure, it doesn't make any difference, as far as the applications are concerned. And this abstraction, of course, is a lot of work. I mean, really, a lot of work. Because it needs to be secure, it needs to be performance, and every cloud, and it has to expose APIs which are uniform. And, you know, cloud providers, even though they have potentially the same concept, let's say block storage, APIs are completely different. The way these systems are secure, it's completely different. There errors that you can get. And the retry mechanism is very different from one cloud to the other. The performance is also different. We discovered that when we starting to port our software. And we had to completely rethink how to leverage block storage in that cloud versus that cloud, because just off performance too. And, so, we had, for example, to stripe data. So, all this work is work that you don't need as an application because our vision, really, is that application, which are running in our data cloud, can be abstracted for this difference. And we provide all the services, all the workload that this application need. Whether it's transactional access to data, analytical access to data, managing logs, managing metrics, all of this is abstracted too, so that they are not tied to one particular service of one cloud. And distributing this application across many region, many cloud, is very seamless. >> So, Snowflake has built, your team has built a true abstraction layer across those clouds that's available today? It's actually shipping? >> Yes, and we are still developing it. You know, transactional, Unistore, as we call it, was announced last summit. So, they are still, you know, work in progress. >> You're not done yet. >> But that's the vision, right? 
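The abstraction work described above, hiding each provider's storage APIs, error types, and retry behavior behind one uniform interface, can be pictured with a small sketch. This is not Snowflake's actual code; the class and method names are invented, and only the AWS side is filled in, but it shows the design idea of keeping provider differences out of the application layer.

```python
# Hypothetical sketch of a provider-neutral object-store interface.
import abc
import time

class ObjectStore(abc.ABC):
    """Uniform API the rest of the system codes against."""
    @abc.abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abc.abstractmethod
    def get(self, key: str) -> bytes: ...

def with_retries(call, attempts=5, base_delay=0.5):
    # Each provider throttles and fails differently; concrete classes decide what
    # to wrap, and this helper just applies exponential backoff.
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # AWS SDK for Python
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key, data):
        with_retries(lambda: self._s3.put_object(Bucket=self._bucket, Key=key, Body=data))

    def get(self, key):
        resp = with_retries(lambda: self._s3.get_object(Bucket=self._bucket, Key=key))
        return resp["Body"].read()

# An AzureBlobStore or GCSStore would wrap azure-storage-blob or google-cloud-storage
# the same way, translating their own error types and quirks behind ObjectStore.
```

A real implementation would also handle striping, encryption, and provider-specific consistency behavior, which is the "lot of work" referred to in the conversation.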
And that's important, because we talk about the infrastructure, right. You mention a lot about storage and compute. But it's not only that, right. When you think about application, they need to use the transactional database. They need to use an analytical system. They need to use machine learning. So, you need to provide, also, all these services which are consistent across all the cloud providers. >> So, let's talk developers. Because, you know, you think Snowpark, you guys announced a big application development push at the Snowflake summit recently. And we have said that a criterion of super cloud is a super paz layer, people wince when I say that, but okay, we're just going to go with it. But the point is, it's a purpose built application development layer, specific to your particular agenda, that supports your vision. >> Yes. >> Have you essentially built a purpose built paz layer? Or do you just take them off the shelf, standard paz, and cobble it together? >> No, we build it a custom build. Because, as you said, what exist in one cloud might not exist in another cloud provider, right. So, we have to build in this, all these components that a multi-application need. And that goes to machine learning, as I said, transactional analytical system, and the entire thing. So that it can run in isolation physically. >> And the objective is the developer experience will be identical across those clouds? >> Yes, the developers doesn't need to worry about cloud provider. And, actually, our system will have, we didn't talk about it, but a marketplace that we have, which allows, actually, to deliver. >> We're getting there. >> Yeah, okay. (both laughing) I won't divert. >> No, no, let's go there, because the other aspect of super cloud that we've talked about is the ecosystem. You have to enable an ecosystem to add incremental value, it's not the power of many versus the capabilities of one. So, talk about the challenges of doing that. Not just the business challenges but, again, I'm interested in the technical and architectural challenges. >> Yeah, yeah, so, it's really about, I mean, the way we enable our ecosystem and our partners to create value on top of our data cloud, is via the marketplace. Where you can put shared data on the marketplace. Provide listing on this marketplace, which are data sets. But it goes way beyond data. It's all the way to application. So, you can think of it as the iPhone. A little bit more, all right. Your iPhone is great. Not so much because the hardware is great, or because of the iOS, but because of all the applications that you have. And all these applications are not necessarily developed by Apple, basically. So, we are, it's the same model with our marketplace. We foresee an environment where providers and partners are going to build these applications. We call it native application. And we are going to help them distribute these applications across cloud, everywhere in the world, potentially. And they don't need to worry about that. They don't need to worry about how these applications are going to be instantiated. We are going to help them to monetize these applications. So, that unlocks, you know, really, all the partner ecosystem that you have seen, you know, with something like the iPhone, right? It has created so many new companies that have developed these applications. >> Your detractors have criticized you for being a walled garden. I've actually used that term. 
I used terms like defacto standard, which are maybe less sensitive to you, but, nonetheless, we've seen defacto standards actually deliver value. I've talked to Frank Slootman about this, and he said, Dave, we deliver value, that's what we're all about. At the same time, he even said to me, and I want your thoughts on this, is, look, we have to embrace open source where it makes sense. You guys announced Apache Iceberg. So, what are your thoughts on that? Is that to enable a developer ecosystem? Why did you do Iceberg? >> Yeah, Iceberg is very important. So, just to give some context, Iceberg is an open table format. >> Right. >> Which was first developed by Netflix. And Netflix put it open source in the Apache community. So, we embraced that open source standard because it's widely used by many companies. And, also, many companies have really invested a lot of effort in building big data, Hadoop Solutions, or DataX Solution, and they want to use Snowflake. And they couldn't really use Snowflake, because all their data were in open format. So, we are embracing Iceberg to help these companies move through the cloud. But why we have been reluctant with direct access to data, direct access to data is a little bit of a problem for us. And the reason is when you direct access to data, now you have direct access to storage. Now you have to understand, for example, the specificity of one cloud versus the other. So, as soon as you start to have direct access to data, you lose your cloud data sync layer. You don't access data with API. When you have direct access to data, it's very hard to sync your data. Because you need to grant access, direct access to tools which are not protected. And you see a lot of hacking of data because of that. So, direct access to data is not serving well our customers, and that's why we have been reluctant to do that. Because it is not cloud diagnostic. You have to code that, you need a lot of intelligence, why APIs access, so we want open APIs. That's, I guess, the way we embrace openness, is by open API versus you access, directly, data. >> iPhone. >> Yeah, yeah, iPhone, APIs, you know. We define a set of APIs because APIs, you know, the implementation of the APIs can change, can improve. You can improve compression of data, for example. If you open direct access to data now, you cannot evolve. >> My point is, you made a promise, from governed, security, data sharing ecosystem. It works the same way, so that's the path that you've chosen. Benoit Dogeville, thank you so much for coming on theCube and participating in Supercloud 22, really appreciate that. >> Thank you, Dave. It was a great pleasure. >> All right, keep it right there, we'll be right back with our next segment, right after this short break. (electronic music)

Published Date : Aug 9 2022


Breaking Analysis: How the cloud is changing security defenses in the 2020s


 

>> Announcer: From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> The rapid pace of cloud adoption has changed the way organizations approach cybersecurity. Specifically, the cloud is increasingly becoming the first line of cyber defense. As such, along with communicating to the board and creating a security aware culture, the chief information security officer must ensure that the shared responsibility model is being applied properly. Meanwhile, the DevSecOps team has emerged as the critical link between strategy and execution, while audit becomes the free safety, if you will, in the equation, i.e., the last line of defense. Hello, and welcome to this week's, we keep on CUBE Insights, powered by ETR. In this "Breaking Analysis", we'll share the latest data on hyperscale, IaaS, and PaaS market performance, along with some fresh ETR survey data. And we'll share some highlights and the puts and takes from the recent AWS re:Inforce event in Boston. But first, the macro. It's earning season, and that's what many people want to talk about, including us. As we reported last week, the macro spending picture is very mixed and weird. Think back to a week ago when SNAP reported. A player like SNAP misses and the Nasdaq drops 300 points. Meanwhile, Intel, the great semiconductor hope for America misses by a mile, cuts its revenue outlook by 15% for the year, and the Nasdaq was up nearly 250 points just ahead of the close, go figure. Earnings reports from Meta, Google, Microsoft, ServiceNow, and some others underscored cautious outlooks, especially those exposed to the advertising revenue sector. But at the same time, Apple, Microsoft, and Google, were, let's say less bad than expected. And that brought a sigh of relief. And then there's Amazon, which beat on revenue, it beat on cloud revenue, and it gave positive guidance. The Nasdaq has seen this month best month since the isolation economy, which "Breaking Analysis" contributor, Chip Symington, attributes to what he calls an oversold rally. But there are many unknowns that remain. How bad will inflation be? Will the fed really stop tightening after September? The Senate just approved a big spending bill along with corporate tax hikes, which generally don't favor the economy. And on Monday, August 1st, the market will likely realize that we are in the summer quarter, and there's some work to be done. Which is why it's not surprising that investors sold the Nasdaq at the close today on Friday. Are people ready to call the bottom? Hmm, some maybe, but there's still lots of uncertainty. However, the cloud continues its march, despite some very slight deceleration in growth rates from the two leaders. Here's an update of our big four IaaS quarterly revenue data. The big four hyperscalers will account for $165 billion in revenue this year, slightly lower than what we had last quarter. We expect AWS to surpass 83 billion this year in revenue. Azure will be more than 2/3rds the size of AWS, a milestone from Microsoft. Both AWS and Azure came in slightly below our expectations, but still very solid growth at 33% and 46% respectively. GCP, Google Cloud Platform is the big concern. By our estimates GCP's growth rate decelerated from 47% in Q1, and was 38% this past quarter. The company is struggling to keep up with the two giants. 
Remember, both GCP and Azure, they play a shell game and hide the ball on their IaaS numbers, so we have to use a survey data and other means of estimating. But this is how we see the market shaping up in 2022. Now, before we leave the overall cloud discussion, here's some ETR data that shows the net score or spending momentum granularity for each of the hyperscalers. These bars show the breakdown for each company, with net score on the right and in parenthesis, net score from last quarter. lime green is new adoptions, forest green is spending up 6% or more, the gray is flat, pink is spending at 6% down or worse, and the bright red is replacement or churn. Subtract the reds from the greens and you get net score. One note is this is for each company's overall portfolio. So it's not just cloud. So it's a bit of a mixed bag, but there are a couple points worth noting. First, anything above 40% or 40, here as shown in the chart, is considered elevated. AWS, as you can see, is well above that 40% mark, as is Microsoft. And if you isolate Microsoft's Azure, only Azure, it jumps above AWS's momentum. Google is just barely hanging on to that 40 line, and Alibaba is well below, with both Google and Alibaba showing much higher replacements, that bright red. But here's the key point. AWS and Azure have virtually no churn, no replacements in that bright red. And all four companies are experiencing single-digit numbers in terms of decreased spending within customer accounts. People may be moving some workloads back on-prem selectively, but repatriation is definitely not a trend to bet the house on, in our view. Okay, let's get to the main subject of this "Breaking Analysis". TheCube was at AWS re:Inforce in Boston this week, and we have some observations to share. First, we had keynotes from Steven Schmidt who used to be the chief information security officer at Amazon on Web Services, now he's the CSO, the chief security officer of Amazon. Overall, he dropped the I in his title. CJ Moses is the CISO for AWS. Kurt Kufeld of AWS also spoke, as did Lena Smart, who's the MongoDB CISO, and she keynoted and also came on theCUBE. We'll go back to her in a moment. The key point Schmidt made, one of them anyway, was that Amazon sees more data points in a day than most organizations see in a lifetime. Actually, it adds up to quadrillions over a fairly short period of time, I think, it was within a month. That's quadrillion, it's 15 zeros, by the way. Now, there was drill down focus on data protection and privacy, governance, risk, and compliance, GRC, identity, big, big topic, both within AWS and the ecosystem, network security, and threat detection. Those are the five really highlighted areas. Re:Inforce is really about bringing a lot of best practice guidance to security practitioners, like how to get the most out of AWS tooling. Schmidt had a very strong statement saying, he said, "I can assure you with a 100% certainty that single controls and binary states will absolutely positively fail." Hence, the importance of course, of layered security. We heard a little bit of chat about getting ready for the future and skating to the security puck where quantum computing threatens to hack all of the existing cryptographic algorithms, and how AWS is trying to get in front of all that, and a new set of algorithms came out, AWS is testing. And, you know, we'll talk about that maybe in the future, but that's a ways off. 
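The net score arithmetic described above is simple enough to state directly. The percentages in the example are placeholders, not actual ETR survey results; the function just mirrors the stated definition of greens minus reds.

```python
def net_score(new_adoption, spending_up, flat, spending_down, churn):
    """ETR-style net score: (new adoptions + spending up) - (spending down + replacement).
    Inputs are percentages of survey respondents; 'flat' is accepted for completeness
    but does not move the score."""
    return (new_adoption + spending_up) - (spending_down + churn)

# Hypothetical breakdown: 10% new, 45% up, 38% flat, 5% down, 2% churn.
print(net_score(10, 45, 38, 5, 2))  # 48 -- above the elevated 40% line discussed here
```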
And by its prominent presence, the ecosystem was there enforced, to talk about their role and filling the gaps and picking up where AWS leaves off. We heard a little bit about ransomware defense, but surprisingly, at least in the keynotes, no discussion about air gaps, which we've talked about in previous "Breaking Analysis", is a key factor. We heard a lot about services to help with threat detection and container security and DevOps, et cetera, but there really wasn't a lot of specific talk about how AWS is simplifying the life of the CISO. Now, maybe it's inherently assumed as AWS did a good job stressing that security is job number one, very credible and believable in that front. But you have to wonder if the world is getting simpler or more complex with cloud. And, you know, you might say, "Well, Dave, come on, of course it's better with cloud." But look, attacks are up, the threat surface is expanding, and new exfiltration records are being set every day. I think the hard truth is, the cloud is driving businesses forward and accelerating digital, and those businesses are now exposed more than ever. And that's why security has become such an important topic to boards and throughout the entire organization. Now, the other epiphany that we had at re:Inforce is that there are new layers and a new trust framework emerging in cyber. Roles are shifting, and as a direct result of the cloud, things are changing within organizations. And this first hit me in a conversation with long-time cyber practitioner and Wikibon colleague from our early Wikibon days, and friend, Mike Versace. And I spent two days testing the premise that Michael and I talked about. And here's an attempt to put that conversation into a graphic. The cloud is now the first line of defense. AWS specifically, but hyperscalers generally provide the services, the talent, the best practices, and automation tools to secure infrastructure and their physical data centers. And they're really good at it. The security inside of hyperscaler clouds is best of breed, it's world class. And that first line of defense does take some of the responsibility off of CISOs, but they have to understand and apply the shared responsibility model, where the cloud provider leaves it to the customer, of course, to make sure that the infrastructure they're deploying is properly configured. So in addition to creating a cyber aware culture and communicating up to the board, the CISO has to ensure compliance with and adherence to the model. That includes attracting and retaining the talent necessary to succeed. Now, on the subject of building a security culture, listen to this clip on one of the techniques that Lena Smart, remember, she's the CISO of MongoDB, one of the techniques she uses to foster awareness and build security cultures in her organization. Play the clip >> Having the Security Champion program, so that's just, it's like one of my babies. That and helping underrepresented groups in MongoDB kind of get on in the tech world are both really important to me. And so the Security Champion program is purely purely voluntary. We have over 100 members. And these are people, there's no bar to join, you don't have to be technical. If you're an executive assistant who wants to learn more about security, like my assistant does, you're more than welcome. Up to, we actually, people grade themselves when they join us. We give them a little tick box, like five is, I walk on security water, one is I can spell security, but I'd like to learn more. 
Mixing those groups together has been game-changing for us. >> Now, the next layer is really where it gets interesting. DevSecOps, you know, we hear about it all the time, shifting left. It implies designing security into the code at the dev level. Shift left and shield right is the kind of buzz phrase. But it's getting more and more complicated. So there are layers within the development cycle, i.e., securing the container. So the app code can't be threatened by backdoors or weaknesses in the containers. Then, securing the runtime to make sure the code is maintained and compliant. Then, the DevOps platform so that change management doesn't create gaps and exposures, and screw things up. And this is just for the application security side of the equation. What about the network and implementing zero trust principles, and securing endpoints, and machine to machine, and human to app communication? So there's a lot of burden being placed on the DevOps team, and they have to partner with the SecOps team to succeed. Those guys are not security experts. And finally, there's audit, which is the last line of defense or what I called at the open, the free safety, for you football fans. They have to do more than just tick the box for the board. That doesn't cut it anymore. They really have to know their stuff and make sure that what they sign off on is real. And then you throw ESG into the mix is becoming more important, making sure the supply chain is green and also secure. So you can see, while much of this stuff has been around for a long, long time, the cloud is accelerating innovation in the pace of delivery. And so much is changing as a result. Now, next, I want to share a graphic that we shared last week, but a little different twist. It's an XY graphic with net score or spending velocity in the vertical axis and overlap or presence in the dataset on the horizontal. With that magic 40% red line as shown. Okay, I won't dig into the data and draw conclusions 'cause we did that last week, but two points I want to make. First, look at Microsoft in the upper-right hand corner. They are big in security and they're attracting a lot of dollars in the space. We've reported on this for a while. They're a five-star security company. And every time, from a spending standpoint in ETR data, that little methodology we use, every time I've run this chart, I've wondered, where the heck is AWS? Why aren't they showing up there? If security is so important to AWS, which it is, and its customers, why aren't they spending money with Amazon on security? And I asked this very question to Merrit Baer, who resides in the office of the CISO at AWS. Listen to her answer. >> It doesn't mean don't spend on security. There is a lot of goodness that we have to offer in ESS, external security services. But I think one of the unique parts of AWS is that we don't believe that security is something you should buy, it's something that you get from us. It's something that we do for you a lot of the time. I mean, this is the definition of the shared responsibility model, right? >> Now, maybe that's good messaging to the market. Merritt, you know, didn't say it outright, but essentially, Microsoft they charge for security. At AWS, it comes with the package. But it does answer my question. And, of course, the fact is that AWS can subsidize all this with egress charges. Now, on the flip side of that, (chuckles) you got Microsoft, you know, they're both, they're competing now. We can take CrowdStrike for instance. 
Microsoft and CrowdStrike, they compete with each other head to head. So it's an interesting dynamic within the ecosystem. Okay, but I want to turn to a powerful example of how AWS designs in security. And that is the idea of confidential computing. Of course, AWS is not the only one, but we're coming off of re:Inforce, and I really want to dig into something that David Floyer and I have talked about in previous episodes. And we had an opportunity to sit down with Arvind Raghu and J.D. Bean, two security experts from AWS, to talk about this subject. And let's share what we learned and why we think it matters. First, what is confidential computing? That's what this slide is designed to convey. To AWS, they would describe it this way. It's the use of special hardware and the associated firmware that protects customer code and data from any unauthorized access while the data is in use, i.e., while it's being processed. That's oftentimes a security gap. And there are two dimensions here. One is protecting the data and the code from operators on the cloud provider, i.e, in this case, AWS, and protecting the data and code from the customers themselves. In other words, from admin level users are possible malicious actors on the customer side where the code and data is being processed. And there are three capabilities that enable this. First, the AWS Nitro System, which is the foundation for virtualization. The second is Nitro Enclaves, which isolate environments, and then third, the Nitro Trusted Platform Module, TPM, which enables cryptographic assurances of the integrity of the Nitro instances. Now, we've talked about Nitro in the past, and we think it's a revolutionary innovation, so let's dig into that a bit. This is an AWS slide that was shared about how they protect and isolate data and code. On the left-hand side is a classical view of a virtualized architecture. You have a single host or a single server, and those white boxes represent processes on the main board, X86, or could be Intel, or AMD, or alternative architectures. And you have the hypervisor at the bottom which translates instructions to the CPU, allowing direct execution from a virtual machine into the CPU. But notice, you also have blocks for networking, and storage, and security. And the hypervisor emulates or translates IOS between the physical resources and the virtual machines. And it creates some overhead. Now, companies like VMware have done a great job, and others, of stripping out some of that overhead, but there's still an overhead there. That's why people still like to run on bare metal. Now, and while it's not shown in the graphic, there's an operating system in there somewhere, which is privileged, so it's got access to these resources, and it provides the services to the VMs. Now, on the right-hand side, you have the Nitro system. And you can see immediately the differences between the left and right, because the networking, the storage, and the security, the management, et cetera, they've been separated from the hypervisor and that main board, which has the Intel, AMD, throw in Graviton and Trainium, you know, whatever XPUs are in use in the cloud. And you can see that orange Nitro hypervisor. That is a purpose-built lightweight component for this system. And all the other functions are separated in isolated domains. So very strong isolation between the cloud software and the physical hardware running workloads, i.e., those white boxes on the main board. 
Now, this will run at practically bare metal speeds, and there are other benefits as well. One of the biggest is security. As we've previously reported, this came out of AWS's acquisition of Annapurna Labs, which we've estimated was picked up for a measly $350 million, which is a drop in the bucket for AWS to get such a strategic asset. And there are three enablers on this side. One is the Nitro cards, which are accelerators to offload that wasted work that's done in traditional architectures by typically the X86. We've estimated 25% to 30% of core capacity and cycles is wasted on those offloads. The second is the Nitro security chip, which is embedded and extends the root of trust to the main board hardware. And finally, the Nitro hypervisor, which allocates memory and CPU resources. So the Nitro cards communicate directly with the VMs without the hypervisors getting in the way, and they're not in the path. And all that data is encrypted while it's in motion, and of course, encryption at rest has been around for a while. We asked AWS, is this an, we presumed it was an Arm-based architecture. We wanted to confirm that. Or is it some other type of maybe hybrid using X86 and Arm? They told us the following, and quote, "The SoC, system on chips, for these hardware components are purpose-built and custom designed in-house by Amazon and Annapurna Labs. The same group responsible for other silicon innovations such as Graviton, Inferentia, Trainium, and AQUA. Now, the Nitro cards are Arm-based and do not use any X86 or X86/64 bit CPUs. Okay, so it confirms what we thought. So you may say, "Why should we even care about all this technical mumbo jumbo, Dave?" Well, a year ago, David Floyer and I published this piece explaining why Nitro and Graviton are secret weapons of Amazon that have been a decade in the making, and why everybody needs some type of Nitro to compete in the future. This is enabled, this Nitro innovations and the custom silicon enabled by the Annapurna acquisition. And AWS has the volume economics to make custom silicon. Not everybody can do it. And it's leveraging the Arm ecosystem, the standard software, and the fabrication volume, the manufacturing volume to revolutionize enterprise computing. Nitro, with the alternative processor, architectures like Graviton and others, enables AWS to be on a performance, cost, and power consumption curve that blows away anything we've ever seen from Intel. And Intel's disastrous earnings results that we saw this past week are a symptom of this mega trend that we've been talking about for years. In the same way that Intel and X86 destroyed the market for RISC chips, thanks to PC volumes, Arm is blowing away X86 with volume economics that cannot be matched by Intel. Thanks to, of course, to mobile and edge. Our prediction is that these innovations and the Arm ecosystem are migrating and will migrate further into enterprise computing, which is Intel's stronghold. Now, that stronghold is getting eaten away by the likes of AMD, Nvidia, and of course, Arm in the form of Graviton and other Arm-based alternatives. Apple, Tesla, Amazon, Google, Microsoft, Alibaba, and others are all designing custom silicon, and doing so much faster than Intel can go from design to tape out, roughly cutting that time in half. And the premise of this piece is that every company needs a Nitro to enable alternatives to the X86 in order to support emergent workloads that are data rich and AI-based, and to compete from an economic standpoint. 
So while at re:Inforce, we heard that the impetus for Nitro was security. Of course, the Arm ecosystem, and its ascendancy has enabled, in our view, AWS to create a platform that will set the enterprise computing market this decade and beyond. Okay, that's it for today. Thanks to Alex Morrison, who is on production. And he does the podcast. And Ken Schiffman, our newest member of our Boston Studio team is also on production. Kristen Martin and Cheryl Knight help spread the word on social media and in the community. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great, great work for us. Remember, all these episodes are available as podcast. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at David.Vellante@siliconangle.com or DM me @dvellante, comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis." (upbeat theme music)

Published Date : Jul 30 2022


Christian Wiklund, unitQ | AWS Startup Showcase S2 E3


 

(upbeat music) >> Hello, everyone. Welcome to the theCUBE's presentation of the AWS Startup Showcase. The theme, this showcase is MarTech, the emerging cloud scale customer experiences. Season two of episode three, the ongoing series covering the startups, the hot startups, talking about analytics, data, all things MarTech. I'm your host, John Furrier, here joined by Christian Wiklund, founder and CEO of unitQ here, talk about harnessing the power of user feedback to empower marketing. Thanks for joining us today. >> Thank you so much, John. Happy to be here. >> In these new shifts in the market, when you got cloud scale, open source software is completely changing the software business. We know that. There's no longer a software category. It's cloud, integration, data. That's the new normal. That's the new category, right? So as companies are building their products, and want to do a good job, it used to be, you send out surveys, you try to get the product market fit. And if you were smart, you got it right the third, fourth, 10th time. If you were lucky, like some companies, you get it right the first time. But the holy grail is to get it right the first time. And now, this new data acquisition opportunities that you guys in the middle of that can tap customers or prospects or end users to get data before things are shipped, or built, or to iterate on products. This is the customer feedback loop or data, voice of the customer journey. It's a gold mine. And it's you guys, it's your secret weapon. Take us through what this is about now. I mean, it's not just surveys. What's different? >> So yeah, if we go back to why are we building unitQ? Which is we want to build a quality company. Which is basically, how do we enable other companies to build higher quality experiences by tapping into all of the existing data assets? And the one we are in particularly excited about is user feedback. So me and my co-founder, Nik, and we're doing now the second company together. We spent 14 years. So we're like an old married couple. We accept each other, and we don't fight anymore, which is great. We did a consumer company called Skout, which was sold five years ago. And Skout was kind of early in the whole mobile first. I guess, we were actually mobile first company. And when we launched this one, we immediately had the entire world as our marketplace, right? Like any modern company. We launch a product, we have support for many languages. It's multiple platforms. We have Android, iOS, web, big screens, small screens, and that brings some complexities as it relates to staying on top of the quality of the experience because how do I test everything? >> John: Yeah. >> Pre-production. How do I make sure that our Polish Android users are having a good day? And we found at Skout, personally, like I could discover million dollar bugs by just drinking coffee and reading feedback. And we're like, "Well, there's got to be a better way to actually harness the end user feedback. That they are leaving in so many different places." So, you know what, what unitQ does is that we basically aggregate all different sources of user feedback, which can be app store reviews, Reddit posts, Tweets, comments on your Facebook ads. It can be better Business Bureau Reports. We don't like to get to many of those, of course. But really, anything on the public domain that mentions or refers to your product, we want to ingest that data in this machine, and then all the private sources. 
So you probably have a support system deployed, a Zendesk, or an Intercom. You might have a chatbot like an Ada, or and so forth. And your end user is going to leave a lot of feedback there as well. So we take all of these channels, plug it into the machine, and then we're able to take this qualitative data. Which and I actually think like, when an end user leaves a piece of feedback, it's an act of love. They took time out of the day, and they're going to tell you, "Hey, this is not working for me," or, "Hey, this is working for me," and they're giving you feedback. But how do we package these very messy, multi-channel, multiple languages, all over the place data? How can we distill it into something that's quantifiable? Because I want to be able to monitor these different signals. So I want to turn user feedback into time series. 'Cause with time series, I can now treat this the same way as Datadog treats machine logs. I want to be able to see anomalies, and I want to know when something breaks. So what we do here is that we break down your data in something called quality monitors, which is basically machine learning models that can aggregate the same type of feedback data in this very fine grained and discrete buckets. And we deploy up to a thousand of these quality monitors per product. And so we can get down to the root cause. Let's say, passive reset link is not working. And it's in that root cause, the granularity that we see that companies take action on the data. And I think historically, there has been like the workflow between marketing and support, and engineering and product has been a bit broken. They've been siloed from a data perspective. They've been siloed from a workflow perspective, where support will get a bunch of tickets around some issue in production. And they're trained to copy and paste some examples, and throw it over the wall, file a Jira ticket, and then they don't know what happens. So what we see with the platform we built is that these teams are able to rally around the single source of troop or like, yes, passive recent link seems to have broken. This is not a user error. It's not a fix later, or I can't reproduce. We're looking at the data, and yes, something broke. We need to fix it. >> I mean, the data silos a huge issue. Different channels, omnichannel. Now, there's more and more channels that people are talking in. So that's huge. I want to get to that. But also, you said that it's a labor of love to leave a comment or a feedback. But also, I remember from my early days, breaking into the business at IBM and Hewlett-Packard, where I worked. People who complain are the most loyal customers, if you service them. So it's complaints. >> Christian: Yeah. >> It's leaving feedback. And then, there's also reading between the lines with app errors or potentially what's going on under the covers that people may not be complaining about, but they're leaving maybe gesture data or some sort of digital trail. >> Yeah. >> So this is the confluence of the multitude of data sources. And then you got the siloed locations. >> Siloed locations. >> It's complicated problem. >> It's very complicated. And when you think about, so I started, I came to Bay Area in 2005. My dream was to be a quant analyst on Wall Street, and I ended up in QA at VMware. So I started at VMware in Palo Alto, and didn't have a driver's license. I had to bike around, which was super exciting. And we were shipping box software, right? 
This was literally a box with a DVD that's been burned, and if that DVD had bugs in it, guess what it'll be very costly to then have to ship out, and everything. So I love the VMware example because the test cycles were long and brutal. It was like a six month deal to get through all these different cases, and they couldn't be any bugs. But then as the industry moved into the cloud, CI/CD, ship at will. And if you look at the modern company, you'll have at least 20 plus integrations into your product. Analytics, add that's the case, authentication, that's the case, and so forth. And these integrations, they morph, and they break. And you have connectivity issues. Is your product working as well on Caltrain, when you're driving up and down, versus wifi? You have language specific bugs that happen. Android is also quite a fragmented market. The binary may not perform as well on that device, or is that device. So how do we make sure that we test everything before we ship? The answer is, we can't. There's no company today that can test everything before the ship. In particular, in consumer. And the epiphany we had at our last company, Skout, was that, "Hey, wait a minute. The end user, they're testing every configuration." They're sitting on the latest device, the oldest device. They're sitting on Japanese language, on Swedish language. >> John: Yeah. >> They are in different code paths because our product executed differently, depending on if you were a paid user, or a freemium user, or if you were certain demographical data. There's so many ways that you would have to test. And PagerDuty actually had a study they came out with recently, where they said 51% of all end user impacting issues are discovered first by the end user, when they serve with a bunch of customers. And again, like the cool part is, they will tell you what's not working. So now, how do we tap into that? >> Yeah. >> So what I'd like to say is, "Hey, your end user is like your ultimate test group, and unitQ is the layer that converts them into your extended test team." Now, the signals they're producing, it's making it through to the different teams in the organization. >> I think that's the script that you guys are flipping. If I could just interject. Because to me, when I hear you talking, I hear, "Okay, you're letting the customers be an input into the product development process." And there's many different pipelines of that development. And that could be whether you're iterating, or geography, releases, all kinds of different pipelines to get to the market. But in the old days, it was like just customer satisfaction. Complain in a call center. >> Christian: Yeah. >> Or I'm complaining, how do I get support? Nothing made itself into the product improvement, except for slow moving, waterfall-based processes. And then, maybe six months later, a small tweak could be improved. >> Yes. >> Here, you're taking direct input from collective intelligence. Okay. >> Is that have input and on timing is very important here, right? So how do you know if the product is working as it should in all these different flavors and configurations right now? How do you know if it's working well? And how do you know if you're improving or not improving over time? And I think the industry, what can we look at, as far as when it relates to quality? So I can look at star ratings, right? So what's the star rating in the app store? Well, star ratings, that's an average over time. 
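Mechanically, tapping into that signal looks something like the quality-monitor idea Christian outlined earlier: bucket raw feedback into fine-grained categories, turn each bucket into a daily time series, and flag spikes. The sketch below uses invented keyword rules and thresholds purely for illustration; unitQ's actual monitors are machine-learning classifiers, not keyword matches.

```python
from collections import Counter, defaultdict
from statistics import mean, pstdev

# Invented keyword rules standing in for trained classifiers (illustrative only).
MONITORS = {
    "password_reset_broken": ["reset link", "can't reset password", "captcha"],
    "double_billing": ["double billed", "charged twice"],
}

def classify(text):
    text = text.lower()
    return [name for name, phrases in MONITORS.items() if any(p in text for p in phrases)]

def daily_counts(feedback):
    """feedback: iterable of (iso_date, text) pulled from reviews, tickets, tweets, etc."""
    series = defaultdict(Counter)
    for day, text in feedback:
        for monitor in classify(text):
            series[monitor][day] += 1
    return series

def is_spiking(counts, sigmas=3.0):
    """Flag the most recent day if it sits well above the historical mean."""
    days = sorted(counts)
    if len(days) < 3:
        return False
    history = [counts[d] for d in days[:-1]]
    return counts[days[-1]] > mean(history) + sigmas * pstdev(history)
```

Treating each bucket as its own time series is what lets feedback be monitored the way machine logs are, with alerts when something like the password-reset flow suddenly breaks.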
So you may have a lot of issues in production today, and you're going to get dinged on star ratings over the next few months, and then it brings down the score. NPS is another one, where we're not going to run NPS surveys every day. We're going to run it once a quarter, maybe once a month if we're really, really aggressive. That's also a snapshot in time. And we need to have a finger on the pulse of product quality today. I need to know if this release is good or not good. I need to know if anything broke. And with that real-time aspect, as stuff bubbles up the stack and out into production, we see up to a 50% reduction in time to fix these end user impacting issues. And I think we also need to appreciate that when someone takes time out of the day to write an app review, or email support, or write that Reddit post, it's pretty serious. It's not going to be like, "Oh, I don't like the shade of blue on this button." It's going to be something like, "I got double billed," or, "Hey, someone took over my account," or, "I can't reset my password anymore. The CAPTCHA, I'm solving it, but I can't get through to the next phase." And we see a lot of these trajectory-impacting bugs and quality issues in these workflows in the product that you're not testing every day. So if you work at Snapchat, your employees are probably going to use Snapchat every day. Are they going to sign up every day? No. Are they going to do password reset every day? No. And these things are very hard to instrument lower in the stack. >> Yeah, I think this is, and again, back to these big problems. It's smoke before fire, and you're essentially seeing it early with your process. Can you give an example of how this new focus or new mindset of user feedback data can help customers increase their experience? Can you give some examples, 'cause folks watching will be like, "Okay, I love this value. Sell me on this idea, I'm sold. Okay, I want to tap into my prospects, and my customers, my end users, to help me improve my product." 'Cause again, we can measure everything now with data. >> Yeah, we can measure everything. We can even measure quality these days. So when we started this company, I went out to talk to a bunch of friends who are entrepreneurs, and VCs, and board members, and I asked them this very simple question: in your board meetings, or in all hands, how do you talk about quality of the product? Do you have a metric? And everyone said no. Okay. So are you a data-driven company? Yes, we're very data driven. >> John: Yeah. Go data driven. >> But you're not really sure about quality. How do you compare against the competition? Are you doing as well as them, worse, better? Are you improving over time, and how do you measure it? And they're like, "Well, it's kind of like a blind spot of the company." And then you ask, "Well, do you think quality of experience is important?" And they say, "Yeah." "Well, why?" "Well, top-of-funnel growth. Higher quality products are going to spread faster organically, we're going to get better store ratings, the storefronts are going to look better." And of course, more importantly, they said the different conversion cycles in the product itself. That if you have bugs and friction, or an interface that's hard to use, then the inputs, the signups, are not going to convert as well. So you're going to get dinged on retention, engagement, conversion to paid, and so forth. And that's what we've seen with the companies we work with. 
It is that poor quality acts as a filter function for the entire business, if you're a product-led company. So if you think about a product-led company, the product is really the centerpiece. And if it performs really, really well, then it allows you to hire more engineers, you can spend more on marketing. Everything is fed by this product in the middle, and then quality can make that thing perform worse or better. And we developed a metric actually called the unitQ Score. So if you go to our website, unitq.com, we have indexed the 5,000 largest apps in the world. And we're able then, on a daily basis, to update the score. Because the score is not something you do once a month or once a quarter. It's something that changes continuously. So now you can get a score between zero and 100. If you get the score 100, that means that our AI doesn't find any quality issues reported in that data set. And if your score is 90, that means that 10% will be quality issues. So now you can do a lot of fun stuff. You can start benchmarking against competition. So you can see, "Well, I'm Spotify. How do I rank against Deezer, or SoundCloud, or others in my space?" And what we've seen is that as the score goes up, we see this real big impact on KPIs such as conversion, organic growth, retention, ultimately revenue, right? And so that was very satisfying for us when we launched it. Quality actually still really, really matters. >> Yeah. >> And I think we would all agree on that, but how do we make a science out of it? And that's what we've done. And we were very lucky early on to get some incredible brands that we work with. So Pinterest is a big customer of ours. We have Spotify. We just signed the neobank Chime. We even signed BetterHelp recently, and the world's largest Bible app. So when you look at the types of businesses that we work with, it's truly a universal, very broad field, where if you have digital exhaust or feedback, I can guarantee you there are insights in there that are being neglected. >> John: So Chris, I got to. >> So these manual workflows. Yeah, please go ahead. >> I got to ask you, because this is a really great example of this new shift, right? The new shift of leveraging data, flipping the script. Everything's flipping the script here, right? >> Yeah. >> So you're talking about what the value proposition is. Hey, the board example's a good one: "How do you measure quality? There's no KPI for that." So it's almost category-creating in its own way, in that it's a net new thing. It's okay to be new, it's just new. So the question is, if I'm a customer, I buy it. I can see my product teams engaging with this. I can see how it changes my marketing and customer experience teams. How do I operationalize this? Okay, so what do I do? Do I reorganize my marketing team? So take me through the impact to the customer that you're seeing. What are they resonating towards? Obviously, getting that data is key, and that's the holy grail, we all know that. But what do I got to do to change my environment? What's my operationalization piece of it? >> Yeah, and that's one of the coolest parts, I think, and that is, let's start with your user base. We're not going to ask your users to do something differently. They're already producing this data every day. They are tweeting about it. They're putting in app reviews. They're emailing support. They're engaging with your support chatbot. They're already doing it. 
And every day that you're not leveraging that data, the data that was produced today is less valuable tomorrow. And in 30 days, I would argue, it's probably useless. >> John: Unless it's the same guy commenting. >> Yeah. (Christian and John laughing) First, we need to make everyone understand: the data is there, and we don't need to ask the end user to do anything differently. And then what we do is we ask the customer to tell us, "Where should we listen in the public domain? Do you want the Reddit posts, the Trustpilot reviews? What channels should we listen to?" And then our machine basically starts ingesting that data. So we have integrations with all these different sites. And then, to get access to private data, if you're on Zendesk, for example, you have to issue a Zendesk token, right? So you don't need any engineering hours, except your IT person will have to grant us access to the data source. And then, when we go live, we basically build up this taxonomy with the customer. We don't want to try and impose our view of the world of how you describe the product with these buckets, these quality monitors. So we work with the company to then build out this taxonomy, so it's almost like a bespoke solution that we can bootstrap with previous work we've done, so that you end up with these very, very fine buckets of where stuff could go wrong. And then what we do is, there are different ways to hook this into the workflow. So one is just to use our product. It's a SaaS product like anything else. So you log in, and you can then get this overview of how quality is trending in different markets, on different platforms, different languages, and what is impacting them. What is driving this unitQ Score that's not good enough? And all of these different signals we can then hook into Jira, for instance. We have a Jira integration. We have a PagerDuty integration. We can wake up engineers if certain things break. We also tag tickets in your support system, which is actually quite cool. Where, let's say, you have 200 people who wrote into support saying, "I got double billed on Android." It turns out there are some bugs that double billed them. Well, now we can tag all of these users in Zendesk, and then the support team can reach out to that segment of users and say, "Hey, we heard that you had this bug with double billing. We're so sorry. We're working on it." And then when we push a fix, we can email the same group again, and maybe give them a little gift card or something as a thank you. So even big companies can have that small company experience. So it's whole groups that use us; at Pinterest, we have 800 accounts. And marketing really has a vested interest, because they want to know what is impacting the end user. Because brand and product, the lines are basically gone, right? >> John: Yeah. >> So if the product is not working, then my spend into this machine is going to be less efficient. The reputation of our company is going to be worse. And the challenge for marketers before unitQ was, how do I engage with engineering and product? I'm dealing with anecdotal data and my own experience of, "Hey, I've never seen these types of complaints before. I think something is going on." >> John: Yeah. >> And then engineering will be like, "Ah, you know, well, I have 5,000 bugs in Jira. Why does this one matter? When did it start? Is this a growing issue?" >> John: You have to replicate the problem, right? >> Replicate it, then. 
>> And then it goes on and on and on. >> And a lot of times, reproducing bugs, it's really hard because it works on my device. Because you don't sit on that device that it happened on. >> Yup. >> So now, when marketing can come with indisputable data, and say, "Hey, something broke here." And we see the same with support. Product engineering, of course, for them, we talk about, "Hey, listen, you you've invested a lot in observability of your stack, haven't you?" "Yeah, yeah, yeah." "So you have a Datadog in the bottom?" "Absolutely." "And you have an APP D on the client?" "Absolutely." "Well, what about the last mile? How the product manifests itself? Shouldn't you monitor that as well using machines?" They're like, "Yeah, that'd be really cool." (John laughs) And we see this. There's no way to instrument everything, lowering the stack to capture these bugs that leak out. So it resonates really well there. And even for the engineers who's going to fix it. >> Yeah. >> I call it like empathy data. >> Yup. >> Where I get assigned a bug to fix. Well, now, I can read all the feedback. I can actually see, and I can see the feedback coming in. >> Yeah. >> Oh, there's users out there, suffering from this bug. And then when I fix it and I deploy the fix, and I see the trend go down to zero, and then I can celebrate it. So that whole feedback loop is (indistinct). >> And that's real time. It's usually missed too. This is the power of user feedback. You guys got a great product, unitQ. Great to have you on. Founder and CEO, Christian Wiklund. Thanks for coming on and sharing, and showcase. >> Thank you, John. For the last 30 seconds, the minute we have left, put a plug in for the company. What are you guys looking for? Give a quick pitch for the company, real quick, for the folks out there. Looking for more people, funding status, number of employees. Give a quick plug. >> Yes. So we raised our A Round from Google, and then we raised our B from Excel that we closed late last year. So we're not raising money. We are hiring across go-to-markets, engineering. And we love to work with people, who are passionate about quality and data. We're always, of course, looking for customers, who are interested in upping their game. And hey, listen, competing with features is really hard because you can copy features very quickly. Competing with content. Content is commodity. You're going to get the same movies more or less on all these different providers. And competing on price, we're not willing to do. You're going to pay 10 bucks a month for music. So how do you compete today? And if your competitor has a better fine tuned piano than your competitor will have better efficiencies, and they're going to retain customers and users better. And you don't want to lose on quality because it is actually a deterministic and fixable problem. So yeah, come talk to us if you want to up the game there. >> Great stuff. The iteration lean startup model, some say took craft out of building the product. But this is now bringing the craftsmanship into the product cycle, when you can get that data from customers and users. >> Yeah. >> Who are going to be happy that you fixed it, that you're listening. >> Yeah. >> And that the product got better. So it's a flywheel of loyalty, quality, brand, all off you can figure it out. It's the holy grail. >> I think it is. It's a gold mine. And every day you're not leveraging this assets, your use of feedback that's there, is a missed opportunity. >> Christian, thanks so much for coming on. 
Congratulations to you and your startup. You guys are back together. The band is back together, up and to the right, doing well. >> Yeah. >> We'll check in with you later. Thanks for coming on this showcase. Appreciate it. >> Thank you, John. Appreciate it very much. >> Okay. AWS Startup Showcase. This is season two, episode three, the ongoing series. This one's about MarTech, how cloud experiences are scaling. I'm John Furrier, your host. Thanks for watching. (upbeat music)
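
(Editor's note: as an illustrative sketch of the approach Christian describes above, rolling messy, multi-channel feedback up into per-monitor time series and a 0-100 quality score, here is a minimal Python example. The monitor names, the keyword-based classifier, and the scoring formula are assumptions made purely for illustration; they are not unitQ's actual models, data shapes, or API.)

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical, keyword-based stand-ins for ML-driven quality monitors.
QUALITY_MONITORS = {
    "password_reset_broken": ["password reset", "reset link"],
    "double_billed": ["double billed", "charged twice"],
    "captcha_failure": ["captcha"],
}

def classify(feedback_text):
    """Return the quality monitors a piece of feedback falls into (very naive)."""
    text = feedback_text.lower()
    return [m for m, keys in QUALITY_MONITORS.items() if any(k in text for k in keys)]

def build_time_series(feedback_items):
    """Aggregate raw feedback into per-monitor daily counts (a time series)."""
    series = defaultdict(Counter)  # monitor -> {date: count}
    for day, text in feedback_items:
        for monitor in classify(text):
            series[monitor][day] += 1
    return series

def quality_style_score(total_feedback, issue_feedback):
    """0-100 score: 100 means no quality issues found, 90 means ~10% were issues."""
    if total_feedback == 0:
        return 100.0
    return 100.0 * (1 - issue_feedback / total_feedback)

feedback = [
    (date(2022, 6, 1), "The password reset link does nothing on Android"),
    (date(2022, 6, 1), "Love the new update!"),
    (date(2022, 6, 2), "I got charged twice this month"),
]
series = build_time_series(feedback)
issues = sum(sum(counts.values()) for counts in series.values())
print(series["password_reset_broken"])            # daily counts for one monitor
print(quality_style_score(len(feedback), issues))  # ~33.3 for this tiny sample
```

In a real system the keyword matcher would be replaced by trained classifiers and the resulting series fed into anomaly detection, which is the part Christian compares to how Datadog treats machine logs.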

Published Date : Jun 29 2022


Sunil Senan, Infosys & Chris Degnan, Snowflake | Snowflake Summit 2022


 

>> Mhm. >> Good morning. Live from Las Vegas, it's Snowflake Summit 22. Lisa Martin with Dave Vellante. We have three wall to wall days of coverage at Snowflake Summit 22 this year. >> Yeah, it's all about data and bringing data to applications. And we've got some big announcements coming this week. Super exciting. >> Collaboration around data. We are excited to welcome our first two guests before the keynote. We have Sunil Senan, SVP of Data and Analytics Service Offering Head at Infosys. And Chris Degnan, a Cube alumni, is back with us, Chief Revenue Officer at Snowflake. Guys, great to have you on the programme. >> Thanks for having us. >> Thank you very much. >> So, Sunil, tell us what's going on with Infosys and Snowflake and the partnership. Give us all that good stuff. >> Yeah. No, I think with the convergence of, uh, data, digital and the computing economy, um, you know, that convergence is creating so many possibilities for customers. Uh, Snowflake and Infosys are working together to help our customers realise the vision and these possibilities that are getting driven. We share a very strategic partnership where we are thinking ahead for our customers in terms of what, uh, we can do together in order to build solutions, in order to bring out the expertise that is needed for such transformations, and also influencing the thinking, um, and the point of view in the market together, so that, you know, there is a cohesive approach to doing this transformation and getting to those business outcomes. So it's a partnership that's very successful and it's strategic for our customers, and we continue to invest for the market. >> Got some great customers. Some of my favourites: CVS, Nike, Williams-Sonoma. Gotta love that one. Chris, talk to us about the Snowflake Data Cloud. What makes it so unique and compelling in the market? >> Well, I think our customers, really, they are going through digital transformation today, and they're moving from on premise to the cloud, and historically speaking, there just hasn't been the right tool set to help them do that. I think Snowflake brings to the table an opportunity for them to take all of their data and allow it to go from one cloud to the other, so it can sit on AWS, it can sit on Azure, it can sit on GCP, and they can move around from cloud to cloud, and they can do analytics on top of that. >> So data has been traditionally really hard. And we saw that in the big data movement. But we learned a lot. Uh, and AI has been, you know, challenging. So what are you seeing with customers? What are they struggling with? And how are you guys helping them? >> Yeah. So if you look at the customer journey, they have invested in a number of technologies in the past and are now at a juncture where they need to transform that landscape. They have the challenges of legacy debt that they need to, you know, get rid of or transform. They have the challenges of really bringing, you know, a cohesive understanding within the enterprise as to what these possibilities are for their business, given the strategy that they are pursuing. Um, business and IT cycles are not necessarily aligned. Um, you have the challenge of a very fragmented data landscape that they have created over a period of time. How do you, you know, put all these together and work with a specific outcome in mind, so that you're not doing transformation for the purpose of transformation, but to be able to actually drive new business models, new data-driven products and services, the ability for you to collaborate with your partners and create unique competitive advantage in the market. And how do you bring those purposes together with the transformation that's really happening? And that's where, you know, our customers, um, you know, grapple with the challenges of bringing it together. >> So, Chris, how do you see it? Because Sunil was talking about, uh, legacy, that technical debt. Um, you kind of started out making the data warehouse easier. Then this data cloud thing comes out. You're like, oh, that's an interesting vision, and all of a sudden it's way more than vision. You get this huge ecosystem you're extending, and we're gonna hear the announcements this morning. We won't spill the beans, but really expanding the data cloud. So it's hard to keep up with where you're at. So I think modernisation, right? How do you think about modernisation? How are your customers thinking about it? And what's the scope of Snowflake? >> Well, you know, I think historically, you asked about AI and ML, and, you know, in the AI world historically, they've lacked data, and I think because we're the data cloud, we're bringing data, you know, and making it available and democratising it for everybody. And then, you know, partners like Infosys are actually helping us bring, you know, applications and new business models to the table, to our customers, and they're innovating on top of the data that we already have in the Snowflake Data Cloud. >> Chris, can you talk about some of the verticals where you guys are successful with Infosys? The three that I mentioned are retailers, but I know that finance, healthcare and life sciences are huge for Snowflake. Talk to me, give us a perspective of the verticals that are coming to you guys saying, help us out with transformation. >> You know, I'll give you just an example. So in the retail space, for example, Kraft Heinz is a joint customer of ours. And, you know, they've been all in on Snowflake's Data Cloud, and one of our big customers as well is Albertsons, and Albertsons realises, oh my gosh, I have all this information around the consumer in the grocery stores, and Kraft Heinz, they want access to that, and they actually can make supply chain decisions a lot faster if they have access to it. So with Snowflake's data sharing, we can actually allow them to share data. Albertsons shares data directly with Kraft Heinz, and Kraft Heinz can actually make supply chain decisions in real time. So these are some of the stuff that Infosys and Snowflake help our customers solve. >> So traditionally, the data pipeline goes through some very highly specialised individuals, whether the data engineer, the data scientists and the data analyst. So in that example that you just gave, around what you mentioned before, democratisation. So democratisation means that, as a businessperson, I actually can get access to the data. So in that example that you gave between Kraft Heinz and Albertsons, is it the highly, hyper specialised teams sharing that data? Or is it actually extending into the line of business focus? >> That's so that's the interesting part for us. I think, at Snowflake, we just recently reorganised my sales team this year into verticals, and the reason we did that is customers no longer want to talk to us about speeds and feeds of how fast my database goes. They want to actually talk about business outcomes. How do I solve for demand forecasting? How do I fix my supply chain issues? Those are the things. That's how we're aligning with Infosys so well, is they've been doing this for a long time, and we haven't. And so we need their help on getting us to the next level of the sales motion and talking to our customers on solving these business challenges. >> In terms of that next level, Sunil, question for you. Where are the customer conversations happening? At what level? I mean, we've seen such dramatic changes in the market in the last couple of years. Now we're dealing with inflation, rising interest rates, Ukraine. Are you seeing the conversations in terms of building data platforms rising up the C suite, as every company recognises, we're going to be a data company, or we're not going to be in business? >> Absolutely. And I think all the macroeconomic forces that you talked about that are working on the enterprises globally are actually leading them to think about how to future-proof their business models. Right? And there are tonnes of learnings that they've had in the last two or three years in digitising and embracing more digital models. The conversation with the customers has really pivoted towards business outcomes. It is a C suite conversation. It is no longer just an incremental change for the companies. They recognise that data has been touted as a strategic asset for a long time, but I think it's taking on a purpose and a meaning as to what it does for the customers. The conversations are around industry verticals. You know, what are the specific challenges and opportunities that the enterprises have, uh, and how you realise those. And these cut across multiple different layers. You know, we're talking about how you democratise data, which in our point of view is an absolute must in terms of putting in a foundation that doesn't take super specialised people to be able to run every operation and every bit of data that you process. We have invested in building an autonomous data estate that can process data as it comes in, without any manual intervention, and take it all the way to consumption, but also investing in those industry solutions. Along with Snowflake, we launched the healthcare and life sciences solution. We launched the omnichannel solution for retail and CPG. And these are great examples of how the Snowflake foundation enables democratisation on one side but also helps solve business problems. In fact, with Snowflake, we have a very, uh, special partnership, because our point of view on the data economy is about how you connect with network partners externally, and Snowflake brings native capabilities on this. We leverage that to drive exchanges for our customers. And for one of the services companies in the recycling business, uh, we're actually building an exchange, which will allow the data points from multiple different sources and partners to come together, so they have a better understanding of their customers, their operations, the field operations and things >> like building a data ecosystem. >> Yes. >> Alright. Is it a two sided marketplace, where you guys are observers and providing the technology and the process, you know, guidance? What's your role in that? >> Yeah. So, um, we are seeing this evolution coming in, uh, two stages, maybe even more. Um, customers are comfortable building an ecosystem that's kind of private for them, which means that they know who they are sharing data with. They know what the data is getting used for. And how do you really put governance on this, so that on one side you can trust it, and on the other side there is a good use of that data, uh, and not, uh, you know, compromise on the quality or privacy and some of the other regulations. But we do see this opening up to the two sided marketplaces as well. Uh, some of the industries lend themselves extremely well to that kind of play. We have seen that happening in the trading area. We've seen that happen in, uh, you know, the credit checks and things like that, which are usually open for, you know, those kinds of ecosystems. But the conversations and the programmes are really leading towards that in the market. >> You know, Lisa, one of the things I wrote about this weekend as I decided to come to Snowflake Summit, one of the, you know, theses I have is that we're going to move not just beyond analytics, including analytics, but also to building data products that can be monetised, and I'm hoping we're going to see some of that here. Are you seeing that, Chris, in the customers? >> It's a great question, David. So we have, you know, I just thought of it as he was talking. We have a customer, a very large customer of ours, who's in the financial services space, and they handle roughly 40% of the credit card transactions that happen in the US, and they're coming to us and saying they want to go from zero in the data business today to a $2 billion business over the next five years, and they're leaning on us to help them do that. And one of the things that's exciting for me is they're coming to us not saying, hey, how do you do it? They're saying, hey, we want to build a consumption model on top of Snowflake, and we want to use you as the delivery mechanism and the billing mechanism to help us actually monetise that data. So yes, the answer is, you know, I used to sell to, you know, Chief Data Officers and CIOs. Now I'm talking to VPs of sales, and I'm talking to chief operating officers, and I'm talking to CEOs about how do we actually create a new revenue stream? And that's just, I mean, it's exhilarating to have those conversations. >> That's data products. They don't have to worry about the infrastructure, that comes from the cloud. They don't have to worry about the governance, as Sunil was saying. Just put it in Snowflake. >> Just put it in Snowflake, that's right. >> So I call it the supercloud, which is kind of a, you know, a funny little tongue in cheek. But it's happening. It's this layer. It's not just multiple clouds. You see a lot of your critical competitors, adjacent competitors, saying, hey, we're now running in Google, or we're running in Azure, we've been running on AWS. This is different. This is different, isn't it? It's a cloud that floats above the infrastructure of the hyperscalers, and that's a new era, I think. >> It's a new era. I think, you know, the hyperscalers want to, you know, keep us as a data warehouse, and we're not. The customers are not letting them. So I think that's, you know, where Infosys kind of saw the light early on. And they were our innovation partner of the year, uh, this past year, and they're helping us and our customers innovate. >> But you're uniquely qualified to do that, where I don't think it's the hyperscalers' agenda. At least, I never say never with the hyperscalers, but yeah, they have focused on providing infrastructure. And, yeah, they have databases and other tools. But that cross cloud, that continuum, to your point, talking to VPs of sales and how do you generate revenue? That maybe is a conversation that they have, but not explicitly as to how to actually do it in a data cloud. >> That's right. I mean, those are the fun conversations, because you're saying, hey, we can actually create a new revenue stream. And how can we actually help you solve our joint customers' problems? So, yes, it is. >> Well, that's competitive differentiation for businesses. I mean, this is, as I mentioned, every company has to be a data company. If they're not, they're probably not going to be around much longer. They've got to be able to leverage a data platform like Snowflake to find insights, be able to act on them and create value, new services, new products, to stay competitive, to stay ahead of the competition. That's no longer a nice-to-have. >> 100%. I mean, I think they're all scared. I mean, you know, if you look in the financial services space, the giants, the 800-pound gorillas, look at some of the small fintechs as huge threats to the business, and they're coming to us and saying, how can we innovate our business now? And they're looking at us as the innovator, and they're looking at Infosys to help them do that. So I think these are incredible times. >> So the narrative on Wall Street, of course, this past earnings season, was consumption and who has the best visibility, and, you know, Snowflake had a couple of large customers dial down consumption, some consumer facing. Here's the thing. If you're selling a data product for more than it costs you to make, and you dial down consumption in the future, you're gonna dial down revenue. So that's going to become less and less discretionary over time. And that, to me, is the next era. That's really exciting. >> The key there is understanding the unit of measure. I think that's the number one question that we get from customers: what is the unit of measure that we care about, that we want to monetise? Because to your point, if it costs you more to make the product, you're not going to sell it, right? And so I think that those are the things, the energy that we're spending with customers today is advising them, jointly advising them, on how to actually monetise the specific, you know, unit of measure that they care about. Because when they get the Amazon bill or the Snowflake bill, the CFO starts knocking on the door. The answer has to be, well, look at all the revenue that we generated and all the operating profit and the free cash flow that we drove, and then it's like, oh, I get it, keep doing it. >> Well, if I'm going on sales calls with the VP of sales and their sales team, fantastic, right? Helping them generate revenue, that's a great conversation dynamic. >> And I think the adoption is really driven through the value, uh, that they can drive in their ecosystem. Their products are similar to products and services that these companies sell. And if you're embedding data inside of your products and services, that makes you that much more competitive in the market and drives value for your stakeholders. And that's essentially the future business model that we're talking about. On one side, the other one is the agility. Things aren't remaining constant, they are constantly changing, and we talked about some of those forces earlier. All of this is changing. The landscape is changing, the needs in the economy and things like that, and how you adapt to those kinds of models in the future, pivoted on data capabilities that let you identify new opportunities and create new value. >> Speaking of creating new value, last question, guys, before we wrap: what's the go-to-market approach here between the two companies? Where do customers go to get engaged? I imagine both sides. >> Yeah. I mean, the way that partnership looks to me is co-selling. So I think, you know, we look at developing joint solutions with Infosys. They've done a wonderful job of leaning into our partnership. So, you know, Sunil and I have a regular cadence where we talk every quarter, and our sales teams and our partner teams are all leaning in and co-selling. I don't know if you... >> Absolutely. Um, you know, we proactively identify, you know, the opportunities for our customers. And we work together at all levels between the two companies to be able to bring a cohesive solution and a proposition for the customers, to really help them understand, you know, what is it that they can, um, get to, and how you get that journey actually executed. And it's a partnership that works very seamlessly through that entire process, not just upstream when we're selling, but also downstream when we're executing. And we've had tremendous success together and look forward to more. >> Congratulations on that success, guys. Thank you so much for coming on, talking about new possibilities with data and AI and sharing some of the impact that the technologies are making. We appreciate your insights. >> Thank you. >> Thank you. >> Thank you so much. >> For our guests and Dave Vellante, I'm Lisa Martin. You're watching theCUBE live in Las Vegas from Snowflake Summit 22, back after the keynote with more breaking news. Mhm, mhm.

Published Date : Jun 14 2022


Chee Chew, mParticle | CUBE Conversation


 

(upbeat music) >> Hello and welcome to this Cube Conversation. I'm here in Palo Alto, California. I'm John Furrier host of theCUBE, and I'm here with mparticle. With Chee Chew, Chief Product Officer. Thanks for joining us today. Thanks for coming on. >> Thank you. It's great to be here. >> So mparticle's doing some pretty amazing things around managing customer data end to end as a data platform. A lot of integrations. You guys are state of the art cloud scale for this new kind of use case of using the data for customer value in real time. A lot of good stuff going on. So I really want to dig into this whole prospect. So what is the company about first? Take a minute to explain what is mparticle for the folks watching? >> Yeah, absolutely. Well, if you think about the world today where it's like cloud computing and businesses are getting a lot of data from customers as consumers go online. And they have these cloud services that are collecting all this data about the customer. How do you get it organized? How do you have all that data that's in different departments, reconcile them and like give it to your departments. So they can really personalize the experiences. We've all had these experiences where, you know, like we're this loyal customer of a brand, we shop there a lot. And then we go over to like the customer service and they act like they have no idea who we are. Our job is to help businesses really understand the customer and be able to treat them in a personal way. To do the very best for every experience. >> Well, Chee you're in a really big spot there with the company, Chief Product... You got the keys to the kingdom over there. You're overseeing all the action. You got a platform, a bunch of solutions you're enabling. Customer data has been around for a long time. We hear big systems in the past, oh got to leverage the customer data. But why is the customer data more important now than ever as developers and cloud scale are emerging in. Why is customer data becoming more and more valuable to organizations? >> No. Well, customer data has been around for like decades and decades. The amount of customer data being generated online has just accelerated. It's been exponential. There's been more data collected in the past four years than the past 40 years. And like businesses are just starting to realize, how much of a goldmine that could be for them. If they could really harness it. And especially in today's world where treating it properly, respecting people's privacy, really doing well by the customer, earning the right to use that data is ever so important. The combination that brings the need for solutions like mparticle. >> Talk about some of the enablement that you guys offer your customers. You got a platform, you got a lot of moving parts in there. A lot of key components, a lot of integrations. With all the best platforms to connect to. We're in an API economy. So trust is huge. You got to have the data governance. Everything's got to work together. It's a really hard problem. How do you guys enable value there? What is the key product value that you guys are enabling? >> Yeah, it is a hard problem. And with the data being so important to businesses and treating it well and collecting it from all the different aspects, there are many places where we... Our customers really value the services we bring. As you mentioned, we have a large set of integrations. We can get data in from pretty much any system that you have. 
Even if you built it yourself, we have ways of enabling you to collect that data from all around the company. Then we reconcile them. So we create one single view of the customer. We adhere to all the privacy regulations around the world to make sure that you're compliant with not only laws but with the trust with your consumers. We clean that data and then we distribute it to all the systems where you really want to create personalized experiences. So the collection, the reconciliation, the cleaning, the conformance, and then the distribution. Those are all key events that we do to bring value to customers. >> It's funny in all these major shifts, you're seeing all the same things. You got to be a media company. You got to be a data company. Got to be a video company. Got to be a cloud company. So in the digital transformation, you know with machine learning and AI really at the center of the application value now, you can measure everything in a company. So, smart leadership saying, hey, if we can measure everything, don't we want to know what's going on with respect to our customer. The journey they call it. So, you know, there's the industry taglines of customer best in class experiences, capturing the moments that matter. Describe how you do that. Because moments that matter to me feel like something that's real time or something that's super important, that's contextualized. You got to get that context with that journey. How do you guys do that? This is something I'm intrigued about. >> Yeah, absolutely. And you know, I... This hearken backs to my experience when I was at Amazon doing retail and we really focus on personalization and the notion of when you go to one page or one screen on your mobile device and then you go to the very next page. That very next page has to be personalized with the things that you did on... Just seconds ago on the previous one. That idea of being at the interaction speed, keeping up with the customers. That's what, we've... What we provide for our brands. It's not enough to just collect the data, churn on it, do a bunch of like calculations and then tomorrow figure out what to do. Tomorrow figure out how to personalize it. It has to be in interaction time with our customers. >> John: It's interesting too. You'll have experience in big companies, hyperscalers with large, you know, media business and data. Bringing that to normal companies, enterprises, and mid-market, they have to then stand up their own staff. They have to operationalize this in a large data strategy that maximizes the value. How do brands do this effectively? Can you share best practice of what's the best way to stand up and operationalize the team, the developers, the strategy. >> Chee: Yeah and this is a great question. And right now with the world... The way the world and the industry is developing, businesses don't all do it the same way. Like at Amazon, we built our own. Now we had several hundred engineers in my team who are collecting the data, analyzing it, and really cleaning it. Not every company can afford a couple hundred engineers just to do this... Solve this one problem. Which is why I'm super excited about what we're doing at mparticle, where we're trying to make that available to every company in the world. Whether you're a huge brand, like an NBC, or you're a smaller, medium size startup. Like you have a lot of data and we can help make it accessible for you. Now, many companies do start and build it from scratch and the problems early on, seem very tractable. 
But then as new laws come out, as the platform changes, as Apple and iOS change the rules on what you can collect and what data you can't collect. That puts you on this treadmill of always like reinvesting and reinvesting in the data collection. And not as much at innovating on your business. And then many companies turn around and decide, oh I understand why you want a company like an mparticle, providing that service. >> It's interesting. You guys do a lot of that... The key value proposition that we hear a lot for successful companies. You take care of that the heavy differentiate... Undifferentiated heavy lifting. So the customer can focus on the value. This seems to be the theme of of the data problem that companies want to solve. There's a lot of grunt work that has to get done. A lot of, you know, get down and dirty and work on stuff. If you can just automate it, make it go faster, then you can apply more creative processes and tools onto getting more growth or more value out of the use case. Can you... Is that something that's happening here? >> Oh yeah, absolutely. You know, the dirty secret that if you talk to any like machine learning scientist data engineer, what they'll tell you is it seems like the world is sexy when you talk to new like computer science students about like building models. But when they go to industry they spend like 80 or 90% of their time cleaning data, getting access to data, like getting the right permissions. And they spend like 10 to 20% of the time actually building models and doing the really interesting things that you want your data science to do. That's a really expensive way of getting to your models. And that's why you're right. Services like, mparticle, like our core business is to take that grunt work and that... Things that might be less exciting and bespoke to your business. Like that's the stuff that we get excited about. And we want to provide the best op... Best in breed experience for our customers. >> Yeah. There's no doubt, every company will have to have this really complex, hard to solve platform problem. You either buy or build it. I mean, you're not... Not everyone's Amazon, right? So not everyone can do that. So you got to have the integrations, you got to have the personalizations, you got to have the data quality and you got to have the data governance in there too. You can't forget the fact that you'd be dealing with potentially trusted parties that don't work for you. Right? So this is a huge connection point that I want to just quickly get into. Quickly, APIs connects companies but now also connects data. How do you view that? How should customers think about the connection points when they start to share customer data with other companies? >> Yeah, you're totally right in that. Not only is it important for you to do this in terms of saving your time in engineering and all the amount of work you have, but the risk is super high. If you treat customers data incorrectly, you can break trust with your consumers. It takes a long time to build that trust and just a moment to lose it. And so it is more than just engineering time savings but it is also a risk to the business. Now... Then you go to down to like, how do you do it? Why APIs? The reason for us, our push on really the API platform is to give power to developers. Within your company, you may have some innovation that you want, some way you want to really differentiate yourself from the rest of the field. If we provided only standard UI. 
Standard ways of doing it, then our customers would all behave and have the same capabilities as every other customer. But by us providing APIs it allows our customers to really innovate and make the platform bend to their will. To support the unique ideas that they have. So that's our approach of why we really focus on the customer data infrastructure. >> John: Yeah, it's a great opportunity Chee, I really appreciate your time. Real final question for you, as folks look at this opportunity to have a data platform and mparticle, one that you have. They're going to probably ask you the question of, hey I got developers too. I'm hiring more and more cloud native developers. We're API first, obviously we're cloud native. We love that direction. We're distributed computing. All that great stuff at the edge. I got machine learning. But I really want to integrate, I want to control the experience. I want to be agile and fast. Can you help us? What's your answer to that question? >> Absolutely. If you look at the things that your engines are doing, and you ask them how much of what they're doing is similar to what you expect from other similar companies and how much is really unique to your business. You'll probably find that a minority of the work is really unique to that business. And the majority are things that are common problems that other companies struggle with. Our job is to help take that away. So you can really focus on what's unique, bespoke, and innovative for you. >> John: Follow up to that real quick, as you're the Chief Product Officer. Talk to the folks out there who are watching, who may not know what goes on in a product organization. You're making all kinds of trade offs. You got a product roadmap, you've got the 20 mile stare. You have a North Star. What should they know about mparticle, about the product that they... That's important for them to either pay attention to or they may not know about. >> You know, my... When I think about mparticle, it's not just a product but it's the whole offering. And what you want to know about mparticle is we really work hard to empower our customers, whether it's through the API platforms. So that you have the full flexibility to do whatever you want or through our customer service and our support teams. We are... Have a great reputation with our customers about really focusing on and unblocking them, enabling whatever the heart desires. >> John: Yeah and building on top of it. Sounds great. Chee, thanks for coming on. Appreciate the update on mparticle. Thanks for your time. Great to see you. >> Absolutely. Thank you for your time. >> Okay. This is theCUBE conversation. I'm John Furrier, host of theCUBE. Thanks for watching. (upbeat music)
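
(Editor's note: Chee describes collecting customer data from many systems and reconciling it into one view of the customer. The following is a minimal, self-contained Python sketch of that reconciliation idea, merging records that share an identifier such as an email or device ID into a single profile. It illustrates the concept only; mParticle's actual identity resolution and APIs are far more involved and are not shown here, and all field names are made up.)

```python
from collections import defaultdict

# Event/feedback records from different silos; field names are illustrative.
records = [
    {"source": "web",     "email": "ana@example.com", "device_id": None,     "spend": 120},
    {"source": "ios_app", "email": None,              "device_id": "dev-42", "spend": 80},
    {"source": "support", "email": "ana@example.com", "device_id": "dev-42", "spend": 0},
]

def resolve_profiles(records):
    """Union-find style merge: records sharing any identifier become one profile."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        if parent[x] != x:
            parent[x] = find(parent[x])
        return parent[x]

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each record to every identifier it carries.
    for i, rec in enumerate(records):
        for key in ("email", "device_id"):
            if rec[key]:
                union(("rec", i), (key, rec[key]))

    # Group records by their root, then fold each group into one profile.
    groups = defaultdict(list)
    for i, rec in enumerate(records):
        groups[find(("rec", i))].append(rec)

    profiles = []
    for recs in groups.values():
        profiles.append({
            "emails": {r["email"] for r in recs if r["email"]},
            "device_ids": {r["device_id"] for r in recs if r["device_id"]},
            "sources": [r["source"] for r in recs],
            "total_spend": sum(r["spend"] for r in recs),
        })
    return profiles

print(resolve_profiles(records))  # the three records collapse into one profile
```

The point of the sketch is the "one single view of the customer" step; collection, privacy compliance, cleaning, and distribution to downstream systems are the surrounding pieces of the platform Chee outlines.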

Published Date : Jun 3 2022


Show Wrap | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: The cube presents, the Kubecon and Cloudnativecon Europe, 2022 brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain in Kubecon and Cloudnativecon Europe, 2022. I'm your host Keith Townsend. It's been a amazing day, three days of coverage 7,500 people, 170 sponsors, a good mix of end user organizations, vendors, just people with open source at large. I've loved the conversations. We're not going to stop that coverage just because this is the last session of the conference. Colin Murphy, senior software engineer, Adobe, >> Adobe. >> Oh, wow. This is going to be fun. And then Liam Randall, the chair of CNCF Cloud Native WebAssembly Day. >> That's correct. >> And CNCF & CEO of Cosmonic. >> That's right. >> All right. First off, let's talk about the show. How has this been different than other, if at all of other Kubecons? >> Well, first I think we all have to do a tremendous round of applause, not only for the vendors, but the CNC staff and all the attendees for coming out. And you have to say, Kubecon is back. The online experiences have been awesome but this was the first one, where Hallwaycon was in full effect. And you had the opportunity to sit down and meet with so many intelligent and inspiring peers and really have a chance to learn about all the exciting innovations that have happened over the last year. >> Colin. >> Yeah, it's been my most enjoyable Kubecon I've ever been to. And I've been to a bunch of them over the last few years. Just the quality of people. The problems that we're solving right now, everywhere from this newer stuff that we're talking about today with WebAssembly but then all these big enterprises trying to getting involved in Kubernetes >> Colin, to your point about the problems that we're solving, in many ways the pandemic has dramatically accelerated the pace of innovation, especially inside the CNCF, which is by far the most critical repository of open source projects that enterprises, governments and individuals rely on around the world, in order to deliver new experiences and to have coped and scaled out within the pandemic over the last few years. >> Yeah, I'm getting this feel, this vibe of the overall show that feels like we're on the cuff for something. There's other shows throughout the year, that's more vendor focused that talk about cloud native. But I think this is going to be the industry conference where we're just getting together and talking about it and it's going to probably be, in the next couple of years, the biggest conference of the year, that's just my personal opinion. >> I actually really strongly agree with you. And I think that the reason for that is the diversity that we get from the open source focus of Kubecon Kubecon has started where the industry really started which was in shared community projects. And I was the executive at Capital One that led the donation of cloud custodian into the CNCF. And I've started and put many projects here. And one of the reasons that you do that is so that you can build real scalable communities, Vendors that oftentimes even have competing interest but it gives us a place where we can truly collaborate where we can set aside our personal agendas and our company's agendas. And we can focus on the problems at hand. And how do we really raise the bar for technology for everybody. 
>> Now you two are representing a project that, you know as we look at kind of, how the web has evolved the past few decades, there's standards, there's things that we know that work, there's things that we know that don't work and we're beyond cloud native, we're kind of resistant to change. Funny enough. >> That's right. >> So WebAssembly, talk to me about what problem is WebAssembly solving that need solving? >> I think it's fitting that here on the last day of Kubecon, we're starting with the newest standard for the web and for background, there's only four languages that make up what we think of as the modern web. There's JavaScript, there's HTML, there's CSS, and now there's a new idea that's WebAssembly. And it's maybe not a new idea but it's certainly a new standard, that's got massive adoption and acceleration. WebAssembly is best thought of as almost like a portable little virtual machine. And like a lot of great ideas like JavaScript, it was originally designed to bring new experiences to browsers everywhere. And as organizations looked at the portability and security value props that come from this tiny little virtual machine, it's made a wonderful addition to backend servers and as a platform for portability to bring solutions all the way out to the edge. >> So what are some of the business cases for WebAssembly? Like what problem, what business problem are we solving? >> So it, you know, we would not have been able to bring Photoshop to the web without WASM. >> Wow. >> And just to be clear, I had nothing to do with that effort. So I want to make sure everybody understands, but if you have a lot of C++ or C code and you want to bring that experience to the web browser which is a great cost savings, cause it's running on the client's machines, really low latency, high performance experiences in the browser, WASM, really the only way to go. >> So I'm getting hints of fruit berry, Java. >> Liam: Yeah, absolutely. >> Colin: Definitely. >> You know, the look, WebAssembly sounds similar to promises you've heard before, right ones, run anywhere. The difference is, is that WebAssembly is not driven by any one particular vendor. So there's no one vendor that's trying to bring a plug in to every single device. WebAssembly was a recognition, much like Kubecon, the point that we started with around the diversity of thought ideas and representation of shared interest, of how do we have a platform that's polyglot? Many people can bring languages to it, and solutions that we can share and then build from there. And it is unlocking some of the most amazing and innovative experiences, both on the web backend servers and all the way to the edge. Because WebAssembly is a tiny little virtual machine that runs everywhere. Adobe's leadership is absolutely incredible with the things that they're doing with WebAssembly. They did this awesome blog post with the Google Chrome team that talked about other performance improvements that were brought into Chrome and other browsers, in order to enable that kind of experience. >> So I get the general concept of WebAssembly and it's one of those things that I have to ask the question, and I appreciate that Adobe uses it but without the community, I mean, I've dedicated some of my team's resources over the years to some really cool projects and products that just died on the buying cause there was no community around. >> Yeah. >> Who else uses WebAssembly? >> Yeah, I think so. 
We actually, inside the CNCF now, have an entire day devoted just to WebAssembly and as the co-chair of the CNCF Cloud Native WebAssembly Day, we really focus on bringing those case studies to the forefront. So some of the more interesting talks that we had here and at some of the precursor weekend conferences were from BMW, for example, they talked about how they were excited about not only WebAssembly, but a framework that they use on WebAssembly called WASM cloud, that lets them a flexibly scale machine learning models from their own edge, in their own vehicles through to their developer's workstations and even take that data onto their regular cloud Kubernetes and scale analysis and analytics. They invested and they just released a machine learning framework for one of the many great WebAssembly projects called WASM cloud, which is a CNCF project, a member project here in the CNCF. >> So how does that fit in overall landscape? >> So think of WebAssembly, like you think of HTML. It's a technology that gives you a lot of concept and to accelerate your journey on those technologies, people create frameworks. For example, if you were going to write a UI, you would not very likely start with an empty document you'd start with a react or view. And in a similar vein, if you were going to start a new microservice or backend application, project for WebAssembly, you might use WASM cloud or you might use ATMO or you might use a Spin. Those are three different types of projects. They all have their own different value props and their own different opinions that they bring to them. But the point is is that this is a quickly evolving space and it's going to dramatically change the type of experiences that we bring, not only to web browsers but to servers and edges everywhere. >> So Colin, you mentioned C+ >> Colin: Yeah. >> And other coding. Well , talk to me about the ramp up. >> Oh, well, so, yeah, so, C++ there was a lot of work done in scripting, at Adobe. Taking our C++ code and bringing it into the browser. A lot of new instructions, Cimdi, that were brought to make a really powerful experience, but what's new now is the server side aspect of things. So, just what kind of, what Liam was talking about. Now we can run this stuff in the data center. It's not just for people's browsers anymore. And then we can also bring it out to the edge too, which is a new space that we can take advantage of really almost only through WebAssembly and some JavaScript. >> So wait, let me get this kind of under hook. Before, if I wanted a rich experience, I have to run a heavy VDI instance on the back end so that I'm basically getting remote desktop calls from a light thin client back to my backend server, that's heavy. >> That is heavy. >> WebAssembly is alternative to that? >> Yes, absolutely. Think of WebAssembly as a tiny little CPU that is a shim, that we can take the places that don't even traditionally have a concept of a processor. So inside the browser, for example, traditionally cloud native development on the backend has been dominated by things like Docker and Docker is a wonderful technology and Container is a wonderful technology that really drove the last 10 years of cloud native with the great lift and shift, if you will. Take our existing applications, package them up in this virtual desktop and then deliver them. But to deliver the next 10 years of experiences, we need solutions that let us have portability first and a security model that's portable across the entire landscape. 
So this isn't just browsers and servers on the back end, WebAssembly creates an a layer of equality from truly edge to edge. It's can transcend different CPUs, different operating systems. So where containers have this lower bound off you need to be running Linux and you need to be in a place where you're going to bring Kubernetes. WebAssembly is so small and portable, it transcends that lower bound. It can go to places like iOS. It can go to places like web browsers. It can even go to teeny tiny CPUs that don't even traditionally have a full on operating systems inside them. >> Colin: Right, places where you can't run Docker. >> So as I think about that, and I'm a developer and I'm running my back end and I'm running whatever web stack that I want, how does this work? Like, how do I get started with it? >> Well, there's some great stuff Liam already mentioned with WASM cloud and Frmion Spin. Microsoft is heavily involved now on providing cloud products that can take advantage of WebAssembly. So we've got a lot of languages, new languages coming in.net and Ruby, Rust is a big one, TinyGo, really just a lot of places to get involved. A lot of places to get started. >> At the highest level Finton Ryan, when he was at Gartner, he's a really well known analyst. He wrote something profound a few years ago. He said, WebAssembly is the one technology, You don't need a strategy to adopt. >> Mm. >> Because frankly you're already using it because there's so many wonderful experiences and products that are out there, like what Adobe's doing. This virtual CPU is not just a platform to run on cloud native and to build applications towards the edge. You can embed this virtual CPU inside of applications. So cases where you would want to allow your users to customize an application or to extend functionality. Give you an example, Shopify is a big believer in WebAssembly because while their platform covers, two standard deviations or 80% of the use cases, they have a wonderful marketplace of extensions that folks can use in order to customize the checkout process or apply specialized discounts or integrate into a partner ecosystem. So when you think about the requirements for those scenarios, they line up to the same requirements that we have in browsers and servers. I want real security. I want portability. I want reuseability. And ultimately I want to save money and go faster. So organizations everywhere should take a few minutes and do a heads up and think about one, where WebAssembly is already in their environment, inside of places like Envoy and Istio, some of the most popular projects in the cloud native ecosystem, outside of Kubernetes. And they should perhaps consider studying, how WebAssembly can help them to transform the experiences that they're delivering for their customers. This may be the last day of Kubecon, but this is certainly not the last time we're going to be talking about WebAssembly, I'll tell you that. >> So, last question, we've talked a lot about how to get started. How about day two, when I'm thinking about performance troubleshooting and ensuring clients have a great experience what's day two operation like? >> That's a really good question. So there's, I know that each language kind of brings their own tool chain and their, and you know we saw some great stuff on, on WASM day. You can look it up around the .net experience for debugging, They really tried to make it as seamless and the same as it was for native code. So, yeah, I think that's a great question. 
I mean, right now it's still trying to figure out server side, It's still, as Liam said, a shifting landscape. But we've got some great stuff out here already >> You know, I'd make an even bigger call than that. When I think about the last 20 years as computing has evolved, we've continued to move through these epics of tech that were dominated by a key abstraction. Think about the rise of virtualization with VMware and the transition to the cloud. The rise of containerization, we virtualized to OS. The rise of Kubernetes and CNCF itself, where we virtualize cloud APIs. I firmly believe that WebAssembly represents the next epic of tech. So I think that day two WebAssembly continues to become one of the dominant themes, not only across cloud native but across the entire technical computing landscape. And it represents a fundamentally gigantic opportunity for organizations such as Adobe, that are always market leading and at the cutting edge of tech, to bring new experiences to their customers and for vendors to bring new platforms and tools to companies that want to execute on that opportunity. >> Colin Murphy, Liam Randall, I want to thank you for joining the Cube at Kubecon Cloudnativecon 2022. I'm now having a JavaScript based app that I want to re-look at, and maybe re-platforming that to WebAssembly. It's some lot of good stuff there. We want to thank you for tuning in to our coverage of Kubecon Cloudnativecon. And we want to thank the organization for hosting us, here from Valencia, Spain. I'm Keith Townsend, and you're watching the Cube, the leader in high tech coverage. (bright music)
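The conversation above treats WebAssembly as a small, portable, secure virtual machine that can be embedded well beyond the browser. As a rough illustration of that idea, here is a minimal sketch that defines a WebAssembly module in text format and calls it from a Python host using the wasmtime bindings. The module, the exported function name, and the exact wasmtime API shown are assumptions based on the published bindings, not anything stated in the interview.

```python
# Minimal sketch of embedding a WebAssembly module in a host program.
# Assumes the `wasmtime` Python package (pip install wasmtime); the API
# has shifted between releases, so treat this as illustrative only.
from wasmtime import Store, Module, Instance

# A tiny module in WebAssembly text format exporting an `add` function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

store = Store()
module = Module(store.engine, WAT)       # compile the text-format module
instance = Instance(store, module, [])   # instantiate with no imports
add = instance.exports(store)["add"]     # look up the exported function

print(add(store, 2, 3))  # -> 5
```

The same compiled module could, in principle, be run by a browser, a back-end runtime, or a framework such as wasmCloud or Spin, which is the portability argument Randall and Murphy make above.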

Published Date : May 20 2022

SUMMARY :

Closing out three days of KubeCon + CloudNativeCon Europe 2022 in Valencia, host Keith Townsend wraps the show with Colin Murphy, senior software engineer at Adobe, and Liam Randall, CEO of Cosmonic and co-chair of CNCF Cloud Native WebAssembly Day. They reflect on the return of the in-person hallway track, then dig into WebAssembly: a small, portable, secure virtual machine that now runs in browsers, on back-end servers, and on constrained edge devices. Examples discussed include Adobe bringing Photoshop to the web, BMW scaling machine-learning workloads with wasmCloud, and Shopify using WebAssembly for safe extensibility. Randall argues that WebAssembly is the next major abstraction in computing after virtualization, containers, and Kubernetes.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Liam Randall | PERSON | 0.99+
Colin | PERSON | 0.99+
Colin Murphy | PERSON | 0.99+
Liam | PERSON | 0.99+
Adobe | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
BMW | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
170 sponsors | QUANTITY | 0.99+
Cosmonic | ORGANIZATION | 0.99+
Gartner | ORGANIZATION | 0.99+
iOS | TITLE | 0.99+
Finton Ryan | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
C++ | TITLE | 0.99+
two | QUANTITY | 0.99+
Valencia, Spain | LOCATION | 0.99+
two standard deviations | QUANTITY | 0.99+
Photoshop | TITLE | 0.99+
7,500 people | QUANTITY | 0.99+
Linux | TITLE | 0.99+
CNCF | ORGANIZATION | 0.99+
Shopify | ORGANIZATION | 0.99+
WebAssembly | TITLE | 0.99+
Chrome | TITLE | 0.99+
JavaScript | TITLE | 0.99+
Ruby | TITLE | 0.99+
Rust | TITLE | 0.99+
Capital One | ORGANIZATION | 0.98+
First | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
three days | QUANTITY | 0.98+
Google | ORGANIZATION | 0.98+
WASM cloud | TITLE | 0.98+
today | DATE | 0.97+
each language | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
WASM | TITLE | 0.97+
first | QUANTITY | 0.97+
C+ | TITLE | 0.97+
Kubecon | ORGANIZATION | 0.97+
last year | DATE | 0.97+
Cimdi | PERSON | 0.96+
day two | QUANTITY | 0.96+
Kubecon Cloudnativecon | TITLE | 0.96+
four languages | QUANTITY | 0.96+
Kubernetes | TITLE | 0.95+
next couple of years | DATE | 0.95+
both | QUANTITY | 0.94+
2022 | DATE | 0.94+
HTML | TITLE | 0.93+
C | TITLE | 0.93+
Java | TITLE | 0.93+
ATMO | TITLE | 0.92+
years | DATE | 0.9+
Kubecon Kubecon | ORGANIZATION | 0.87+

Nick Van Wiggeren, PlanetScale | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, your host. And we're continuing the conversations around ecosystem cloud native, 7,500 people here, 170 plus show for sponsors. It is for open source conference, I think the destination. I might even premise that this may be, this may eventually roll to the biggest tech conference in the industry, maybe outside of AWS re:Invent. My next guest is Nick van Wiggeren. >> Wiggeren. >> VP engineering of PlanetScale. Nick, I'm going to start off the conversation right off the bat PlanetScale cloud native database, why do we need another database? >> Well, why don't you need another database? I mean, are you happy with yours? Is anyone happy with theirs? >> That's a good question. I don't think anyone is quite happy with, I don't know, I've never seen a excited database user, except for guys with really (murmurs) guys with great beards. >> Yeah. >> Keith: Or guys with gray hair maybe. >> Yeah. Outside of the dungeon I think... >> Keith: Right. >> No one is really is happy with their database, and that's what we're here to change. We're not just building the database, we're actually building the whole kind of start to finish experience, so that people can get more done. >> So what do you mean by getting more done? Because MySQL has been the underpinnings of like massive cloud database deployments. >> 100% >> It has been the de-facto standard. >> Nick: Yep. >> For cloud databases. >> Nick: Yep. >> What is PlanetScale doing in enabling us to do that I can't do with something like a MySQL or a SQL server? >> Great question. So we are MySQL compatible. So under the hood it's a lot of the MySQL you know and love. But on top of that we've layered workflows, we've layered scalability, we've layered serverless. So that you can get all of the the parts of the MySQL, that dependability, the thing that people have used for 20, 30 years, right? People don't even know a world before MySQL. But then you also get this ability to make schema changes faster. So you can kind of do your work quicker get to the business objectives faster. You can scale farther. So when you get to your MySQL and you say, well, can we handle adding this one feature on top? Can we handle the user growth we've got? You don't have to worry about that either. So it's kind of the best of both worlds. We've got one foot in history and we've got one foot in the new kind of cloud native database world. We want to give everyone the best of both. >> So when I think of serverless because that's the buzzy world. >> Yeah. >> But when I think of serverless I think about developers being able to write code. >> Yep. >> Deploy the code, not worry about VM sizes. >> Yep. >> Amount of disk space. >> Yep. >> CPU, et cetera. But we're talking about databases. >> Yep. >> I got to describe what type of disk I want to use. I got to describe the performance levels. >> Yep. >> I got all the descriptive stuff that I have to do about infrastructures. Databases are not... >> Yep. >> Keith: Serverless. >> Yep. >> They're the furthest thing from it. >> So despite what the name may say, I can guarantee you PlanetScale, your PlanetScale database does run on at least one server, usually more than one. But the idea is exactly what you said. 
So especially when you're starting off, when you're first beginning your, let's say database journey. That's a word I use a lot. The furthest thing from your mind is, how many CPUs do I need? How many disk iOS do I need? How much memory do I need? What we want you to be able to do is get started on focusing on shipping your code, right? The same way that Lambda, the same way that Kubernetes, and all of these other cloud native technologies just help people get done what they want to get done. PlanetScale is the same way, you want a database, you sign up, you click two buttons, you've got a database. We'll handle scaling the disk as you grow, we'll handle giving you more resources. And when you get to a spot where you're really starting to think about, my database has got hundreds of gigabytes or petabytes, terabytes, that's when we'll start to talk to you a little bit more about, hey, you know it really does run on a server, we ain't got to help you with the capacity planning, but there's no reason people should have to do that up front. I mean, that stinks. When you want to use a database you want to use a database. You don't want to use, 747 with 27 different knobs. You just want to get going. >> So, also when I think of serverless and cloud native, I think of stateless. >> Yep. >> Now there's stateless with databases, help me reconcile like, when you say it's cloud native. >> Nick: Yep. >> How is it cloud native when I think of cloud native as stateless? >> Yeah. So it's cloud native because it exists where you want it in the cloud, right? No matter where you've deployed your application on your own cloud, on a public cloud, or something like that, our job is to meet you and match the same level of velocity and the same level of change that you've got on your kind of cloud native setup. So there's a lot of state, right? We are your state and that's a big responsibility. And so what we want to do is, we want to let you experiment with the rest of the stateless workloads, and be right there next to you so that you can kind of get done what you need to get done. >> All right. So this concept of clicking two buttons... >> Nick: Yeah. >> And deploying, it's a database. >> Nick: Yep. >> It has to run somewhere. So let's say that I'm in AWS. >> Nick: Yep. >> And I have AWS VPC. What does it look like from a developer's perspective to consume the service? >> Yeah. So we've got a couple of different offerings, and AWS is a great example. So at the very kind of the most basic database unit you click, you get an endpoint, a host name, a password, and the username. You feed that right into your application and it's TLS secure and stuff like that, goes right into the database no problem. As you grow larger and larger, we can use things like AWS PrivateLink and stuff like that, to actually start to integrate more with your AWS environment, all the way over to what we call PlanetScale Managed. Which is where we actually deploy your data plan in your AWS account. So you give us some permissions and we kind of create a sub-account and stuff like that. And we can actually start sending pods, and hold clusters and stuff like that into your AWS account, give you a PrivateLink, so that everything looks like it's kind of wrapped up in your ownership but you still get the same kind of PlanetScale cloud experience, cloud native experience. >> So how do I make calls to the database? I mean, do I have to install a new... >> Nick: Great question. >> Like agent, or do some weird SQL configuration on my end? 
Or like what's the experience? >> Nope, we just need MySQL. Same way you'd go, install MySQL if you're on a Mac or app store to install MySQL on analytics PC, you just username, password, database name, and stuff like that, you feed that into your app and it just works. >> All right. So databases are typically security. >> Nick: Yep. >> When my security person. >> Nick: Yep. >> Sees a new database. >> Nick: Yep. >> Oh, they get excited. They're like, oh my job... >> Nick: I bet they do. >> My job just got real easy. I can find like eight or nine different findings. >> Right. >> How do you help me with compliance? >> Yeah. >> And answering these tough security questions from security? >> Great question. So security's at the core of what we do, right? We've got security people ourselves. We do the same thing for all the new vendors that we onboard. So we invest a lot. For example, the only way you can connect to a PlanetScale database even if you're using PrivateLink, even if you're not touching the public internet at all, is over TLS secured endpoint, right? From the very first day, the very first beta that we had we knew not a single byte goes over the internet that's not encrypted. It's encrypted at rest, we have audit logging, we do a ton internally as well to make sure that, what's happening to your database is something you can find out. The favorite thing that I think though is all your schema changes are tracked on PlanetScale, because we provide an entire workflow for your schema changes. We actually have like a GitHub Polar Request style thing, your security folks can actually look and say, what changes were made to the database day in and day out. They can go back and there's a full history of that log. So you actually have, I think better security than a lot of other databases where you've got to build all these tools and stuff like that, it's all built into PlanetScale. >> So, we started out the conversation with two clicks but I'm a developer. >> Nick: Yeah. >> And I'm developing a service at scale. >> Yep. >> I want to have a SaaS offering. How do I automate the deployment of the database and the management of the database across multiple customers? >> Yeah, so everything is API driven. We've got an API that you can use supervision databases to make schema changes, to make whatever changes you want to that database. We have an API that powers our website, the same API that customers can use to kind of automate any part of the workflow that they want. There's actually someone who did talk earlier using, I think, wwww.crossplane.io, or they can use Kubernetes custom resource definitions to provision PlanetScale databases completely automatically. So you can even do it as part of your standard deployment workflow. Just create a PlanetScale database, create a password, inject it in your app, all automatically. >> So Nick, as I'm thinking about scale. >> Yep. >> I'm thinking about multiple customers. >> Nick: Yep. >> I have a successful product. >> Nick: Yep. >> And now these customers are coming to me with different requirements. One customer wants to upgrade once every 1/4, another one, it's like, you know what? Just bring it on. Like bring the schema changes on. >> Yep. >> I want the latest features, et cetera. >> Nick: Right. >> How do I manage that with PlanetScale? When I'm thinking about MySQL it's a little, that can be a little difficult. >> Nick: Yeah. >> But how does PlanetScale help me solve that problem? >> Yeah. 
So, again I think it's that same workflow engine that we've built. So every database has its own kind of deploy queue, its own migration system. So you can automate all these processes and say, on this database, I want to change this schema this way, on this database I'm going to hold off. You can use our API to drive a view into like, well, what's the schema on this database? What's schema on this database? What version am I running on this database? And you can actually bring all that in. And if you were really successful you'd have this single plane of glass where you can see what's the status of all my databases and how are they doing, all powered by kind of the PlanetScale API. >> So we can't talk about databases without talking about backup. >> Nick: Yep. >> And recovery. >> Yep. >> How do I back this thing up and make sure that I can fall back? If someone deleted a table. >> Nick: Yep. >> It happens all the time in production. >> Nick: Yeah, 100%. >> How do I recover from it? >> So there's two pieces to this, and I'm going to talk about two different ways that we can help you solve this problem. One of them is, every PlanetScale database comes with backups built in and we test them fairly often, right? We use these backups. We actually give you a free daily backup on every database 'cause it's important to us as well. We want to be able to restore from backup, we want to be able to do failovers and stuff like that, all that is handled automatically. The other thing though is this feature that we launched in March called the PlanetScale Rewind. And what Rewind is, is actually a schema migration undo button. So let's say, you're a developer you're dropping a table or a column, you mean to drop this, but you drop the other one on accident, or you thought this column was unused but it wasn't. You know when you do something wrong, you cause an incident and you get that sick feeling in your stomach. >> Oh, I'm sorry. I've pulled a drive that was written not ready file and it was horrible. >> Exactly. And you kind of start to go, oh man, what am I going to do next? Everyone watching this right now is probably squirming in their seat a bit, you know the feeling. >> Yeah, I know the feeling >> Well, PlanetScale gives you an undo button. So you can click, undo migration, for 30 minutes after you do the migration and we'll revert your schema with all the data in it back to what your database looked like before you did that migration. Drop a column on accident, drop a table on accident, click the Rewind button, there's all the data there. And, the new rights that you've taken while that's happened are there as well. So it's not just a restore to a point in time backup. It's actually that we've replicated your rights sent them to both the old and the new schema, and we can get you right back to where you started, downtime solved. >> Both: So. >> Nick: Go ahead. >> DBAs are DBAs, whether they've become now reformed DBAs that are cloud architects, but they're DBAs. So there's a couple of things that they're going to want to know, one, how do I get my zero back up in my hands? >> Yeah. >> I want my, it's MySQL data. >> Nick: Yeah. >> I want my MySQL backup. >> Yeah. So you can just take backups off the database yourself the same way that you're doing today, right? MySQL dump, MySQL backup, and all those kinds of things. If you don't trust PlanetScale, and look, I'm all about backups, right? 
You want them in two different data centers on different mediums, you can just add on your own backup tools that you have right now and also use that. I'd like you to trust that PlanetScale has the backups as well. But if you want to keep doing that and run your own system, we're totally cool with that as well. In fact, I'd go as far as to say, I recommend it. You never have too many backups. >> So in a moment we're going to run Kube clock. So get your... >> Okay, all right. >> You know, stand tall. >> All right. >> I'll get ready. I'm going to... >> Nick: I'm tall, I'm tall. >> We're both tall. The last question before Kube clock. >> Nick: Yeah. >> It is, let's talk a little nerve knobs. >> Nick: Okay. >> The reform DBA. >> Nick: Yeah. >> They want, they're like, oh, this query ran a little bit slow. I know I can squeeze a little bit more out of that. >> Nick: Yeah. >> Who do they talk to? >> Yeah. So that's a great question. So we provide you some insights on the product itself, right? So you can take a look and see how are my queries performing and stuff like that. Our goal, our job is to surface to you all the metrics that you need to make that decision. 'Cause at the end of the day, a reform DBA or not it is still a skill to analyze the performance of a MySQL query, run and explain, kind of figure all that out. We can't do all of that for you. So we want to give you the information you need either knowledge or you know, stuff to learn whatever it is because some of it does have to come back to, what's my schema? What's my query? And how can I optimize it? I'm missing an index and stuff like that. >> All right. So, you're early adopter of the Kube clock. >> Okay. >> I have to, people say they're ready. >> Nick: Ooh, okay. >> All the time people say they're ready. >> Nick: Woo. >> But I'm not quite sure that they're ready. >> Nick: Well, now I'm nervous. >> So are you ready? >> Do I have any other choice? >> No, you don't. >> Nick: Then I am. >> But are you ready? >> Sure, let's go. >> All right. Start the Kube clock. (upbeat music) >> Nick: All right, what do you want me to do? >> Go. >> All right. >> You said you were ready. >> I'm ready, all right, I'm ready. All right. >> Okay, I'll reset. I'll give you, I'll give, see people say they're ready. >> All right. You're right. You're right. >> Start the Kube clock, go. >> Okay. Are you happy with how your database works? Are you happy with the velocity? Are you happy with what your engineers and what your teams can do with their database? >> Follow the dream not the... Well, follow the green... >> You got to be. >> Not the dream. >> You got to be able to deliver. At the end of the day you got to deliver what the business wants. It's not about performance. >> You got to crawl before you go. You got to crawl, you got to crawl. >> It's not just about is my query fast, it's not just about is my query right, it's about, are my customers getting what they want? >> You're here, you deserve a seat at the table. >> And that's what PlanetScale provides, right? PlanetScale... >> Keith: Ten more seconds. >> PlanetScale is a tool for getting done what you need to get done as a business. That's what we're here for. Ultimately, we want to be the best database for developing software. >> Keith: Two, one. >> That's it. End it there. >> Nick, you took a shot, I'm buying it. Great job. You know, this is fun. Our jobs are complex. >> Yep. >> Databases are hard. >> Yep. >> It is the, where your organization keeps the most valuable assets that you have. 
>> Nick: A 100%. >> And we are having these tough conversations. >> Nick: Yep. >> Here in Valencia, you're talking to the leader in tech coverage. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)
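Van Wiggeren stresses that a PlanetScale database is consumed as ordinary MySQL over a TLS-secured endpoint with just a hostname, username, and password, so a short sketch of what that looks like from application code may help. The endpoint, credentials, database, and table below are hypothetical placeholders, and mysql-connector-python is simply one common driver choice, not something prescribed in the interview.

```python
# Rough sketch: connecting to a MySQL-compatible endpoint (such as a
# PlanetScale database) over TLS and running a query.
# Assumes `pip install mysql-connector-python`; the host, credentials,
# database, and table names are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="example.us-east.psdb.cloud",            # placeholder endpoint
    user="app_user",                              # placeholder username
    password="************",                      # placeholder password
    database="inventory",                         # placeholder database name
    ssl_ca="/etc/ssl/certs/ca-certificates.crt",  # connect over TLS
)

cur = conn.cursor()
cur.execute("SELECT id, name FROM products ORDER BY id LIMIT 5")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```

Everything else discussed in the segment (deploy queues, Rewind, the provisioning API) sits on top of this plain MySQL wire protocol, which is why no agent or special client configuration is needed.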

Published Date : May 20 2022

SUMMARY :

Keith Townsend talks with Nick van Wiggeren, VP of Engineering at PlanetScale, about the company's MySQL-compatible, serverless-style database. They cover two-click provisioning without upfront capacity planning, connecting over a TLS-secured MySQL endpoint, AWS PrivateLink and PlanetScale Managed deployments, API-driven automation including Kubernetes custom resource definitions, schema-change workflows with per-database deploy queues, built-in daily backups, and Rewind, which can undo a bad schema migration within 30 minutes while preserving new writes.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
DeLisa | PERSON | 0.99+
Keith | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Anvi | PERSON | 0.99+
2009 | DATE | 0.99+
Keith Townsend | PERSON | 0.99+
Europe | LOCATION | 0.99+
Nick van Wiggeren | PERSON | 0.99+
Avni Khatri | PERSON | 0.99+
Jigyasa | PERSON | 0.99+
India | LOCATION | 0.99+
Canada | LOCATION | 0.99+
Nick Van Wiggeren | PERSON | 0.99+
one year | QUANTITY | 0.99+
Mexico | LOCATION | 0.99+
Jigyasa Grover | PERSON | 0.99+
Cambridge | LOCATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
two pieces | QUANTITY | 0.99+
Nick | PERSON | 0.99+
Valencia | LOCATION | 0.99+
five | QUANTITY | 0.99+
Oaxaca | LOCATION | 0.99+
eight | QUANTITY | 0.99+
New Delhi | LOCATION | 0.99+
Romania | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Khan Academy | ORGANIZATION | 0.99+
DeLisa Alexander | PERSON | 0.99+
March | DATE | 0.99+
10 year | QUANTITY | 0.99+
100% | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
five year | QUANTITY | 0.99+
22 labs | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
eight years | QUANTITY | 0.99+
one foot | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
MySQL | TITLE | 0.99+
Antequera | LOCATION | 0.99+
7,500 people | QUANTITY | 0.99+
Monday night | DATE | 0.99+
five countries | QUANTITY | 0.99+
two new labs | QUANTITY | 0.99+
two different ways | QUANTITY | 0.99+
last week | DATE | 0.99+
80% | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
Oaxaca City | LOCATION | 0.99+
30 minutes | QUANTITY | 0.99+
iOS | TITLE | 0.99+
27 different knobs | QUANTITY | 0.99+
Two | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+

Gian Merlino, Imply.io | AWS Startup Showcase S2 E2


 

(upbeat music) >> Hello, and welcome to theCUBE's presentation of the AWS Startup Showcase: Data as Code. This is Season 2, Episode 2 of the ongoing SaaS covering exciting startups from the AWS ecosystem and we're going to talk about the future of enterprise data analytics. I'm your host, John Furrier and today we're joined by Gian Merlino CTO and co-founder of Imply.io. Welcome to theCUBE. >> Hey, thanks for having me. >> Building analytics apps with Apache Druid and Imply is what the focus of this talk is and your company being showcased today. So thanks for coming on. You guys have been in the streaming data large scale for many, many years of pioneer going back. This past decade has been the key focus. Druid's unique position in that market has been key, you guys been empowering it. Take a minute to explain what you guys are doing over there at Imply. >> Yeah, for sure. So I guess to talk about Imply, I'll talk about Druid first. Imply is a open source based company and Apache Druid is the open source project that the Imply product's built around. So what Druid's all about is it's a database to power analytical applications. And there's a couple things I want to talk about there. The first off is, is why do we need that? And the second is why are we good at, and I'll just a little flavor of both. So why do we need database to power analytical apps? It's the same reason we need databases to power transactional apps. I mean, the requirements of these applications are different analytical applications, apps where you have tons of data coming in, you have lots of different people wanting to interact with that data, see what's happening both real time and historical. The requirements of that kind of application have sort of given rise to a new kind of database that Druid is one example of. There's others, of course, out there in both the open source and non open source world. And what makes Druid really good at it is, people often say what is Druid's big secret? How is it so good? Why is it so fast? And I never know what to say to that. I always sort of go to, well it's just getting all the little details right. It's a lot of pieces that individually need to be engineered, you build up software in layers, you build up a database in layers, just like any other piece of software. And to have really high performance and to do really well at a specific purpose, you kind of have to get each layer right and have each layer have as little overhead as possible. And so just a lot of kind of nitty gritty engineering work. >> What's interesting about the trends over the past 10 years in particular, maybe you can go back 10, 15 years is state of the art database was, stream a bunch of data put it into a pile, index it, interrogate it, get some reports, pretty basic stuff and then all of a sudden now you have with cloud, thousands of databases out there, potentially hundreds of databases living in the wild. So now data with Kafka and Kinesis, these kinds of technologies streaming data's happening in real time so you don't have time to put it in a pile or index it. You want real time analytics. And so perhaps whether they're mobile app, Instagrams of the world, this is now what people want in the enterprise. You guys are the heart of this. Can you talk about that dynamic of getting data quickly at scale? >> So our thinking is that actually both things matter. Realtime data matters but also historical context matters. 
And the best way to get historical context out of data is to put it in a pile, index it, so to speak, and then the best way to get realtime context to what's happening right now is to be able to operate on these streams. And so one of the things that we do in Druid, I wish I had more time to talk about it but one of the things that we do in Druid is we kind of integrate this real time processing and this historical processing. So we actually have a system that we call the historical system that does what you're saying, take all this data, put in a pile, index it for all your historical data. And we have a system that we call the realtime system that is pulling data in from things like Kafka, Kinesis, getting data pushed into it as the case may be. And this system is responsible for all the data that's recent, maybe the last hour or two of data will be handled by this system and then the older stuff handled by historical system. And our query layer blends these two together seamlessly so a user never needs to think about whether they're querying realtime data or historical data. It's presented as a blended view. >> It's interesting and you know a lot of the people just say, Hey, I don't really have the expertise, and now they're trying to learn it so their default was throw into a data lake. So that brings back that historical. So the rise of the data lake, you're seeing Databricks and others out there doing very well with the data lakes. How do you guys fit into that 'cause that makes it a lot of sense too cause that looks like historical information? >> So data lakes are great technology. We love that kind of stuff. I would say that a really popular pattern, with Druid there's actually two very popular patterns. One is, I would say streaming forward. So stream focus where you connect up to something like Kafka and you load data to stream and then we will actually take that data, we'll store all the historical data that came from the stream and instead of blend those two together. And another other pattern that's also very common is the data lake pattern. So you have a data lake and then you're sort of mirroring that data from the data lake into Druid. This is really common when you have a data lake that you want to be able to build an application on top of, you want to say I have this data in the data lake, I have my table, I want to build an application that has hundreds of people using it, that has really fast response time, that is always online. And so when I mirror that data into Druid and then build my app on top of that. >> Gian take me through the progression of the maturity cycle here. As you look back even a few years, the pioneers and the hardcore streaming data using data analytics at scale that you guys are doing with Druid was really a few percentage of the population doing that. And then as the hyperscale became mainstream, it's now in the enterprise, how stable is it? What's the current state of the art relative to the stability and adoption of the techniques that you guys are seeing? 
>> I think what we're seeing right now at this stage in the game, and this is something that we kind of see at the commercial side of Imply, what we're seeing at this stage of the game is that these kinds of realization that you actually can get a lot of value out of data by building interactive apps around it and by allowing people to kind of slice and dice it and play with it and just kind of getting out there to everybody, that there is a lot of value here and that it is actually very feasible to do with current technology. So I've been working on this problem, just in my own career for the past decade, 10 years ago where we were is even the most high tech of tech companies were like, well, I could sort of see the value. It seems like it might be difficult. And we're kind of getting from there to the high tech companies realizing that it is valuable and it is very doable. And I think that was something there was a tipping point that I saw a few years ago when these Druid and database like really started to blow up. And I think now we're seeing that beyond sort of the high tech companies, which is great to see. >> And a lot of people see the value of the data and they see the application as data as code means the application developers really want to have that functionality. Can you share the roadmap for the next 12 months for you guys on what's coming next? What's coming around the corner? >> Yeah, for sure. I mentioned during the Apache open source community, different products we're one member of that community, very prominent one but one member so I'll talk a bit about what we're doing for the Druid project as part of our effort to make Druid better and take it to the next level. And then I'll talk about some of the stuff we're doing on the, I guess, the Druid sort of commercial side. So on the Druid side, stuff that we're doing to make Druid better, take it to the next level, the big thing is something that we really started writing about a few weeks ago, the multi-stage query engine that we're working on, a new multi-stage query engine. If you're interested, the full details are on blog on our website and also on GitHub on Apache Druid GitHub, but short version is Druid's. We're sort of extending Druid's Query engine to support more and varied kinds of queries with a focus on sort of reporting queries, more complex queries. Druid's core query engine has classically been extremely good at doing rapid fire queries very quickly, so think thousands of queries per second where each query is maybe something that involves a filter in a group eye like a relatively straightforward query but we're just doing thousands of them constantly. Historically folks have not reached for technologies like Druid is, really complex and a thousand line sequel queries, complex supporting needs. Although people really do need to do both interactive stuff and complex stuff on the same dataset and so that's why we're building out these capabilities in Druid. And then on the implied commercial side, the big effort for this year is Polaris which is our cloud based Druid offering. >> Talk about the relationship between Druid and Imply? Share with the folks out there how that works. >> So Druid is, like I mentioned before, it's Apache Druid so it's a community based project. It's not a project that is owned by Imply, some open source projects are sort of owned or sponsored by a particular organization. Druid is not, Druid is an independent project. Imply is the biggest contributor to Druid. 
So the imply engineering team is contributing tons of stuff constantly and we're really putting a lot of the work in to improve Druid although it is a community effort. >> You guys are launching a new SaaS service on AWS. Can you tell me about what that's happening there, what it's all about? >> Yeah, so we actually launched that a couple weeks ago. It's called Polaris. It's very cool. So historically there's been two ways, you can either get started with Apache Druid, it's open source, you install it yourself, or you can get started with Imply Enterprise which is our enterprise offering. And these are the two ways you can get started historically. One of the issues of getting started with Apache Druid is that it is a very complicated distributed database. It's simple enough to run on a single server but once you want to scale things out, once you get all these things set up, you may want someone to take some of that operational burden off your hands. And on the Imply Enterprise side, it says right there in the name, it's enterprise product. It's something that may take a little bit of time to get started with. It's not something you can just roll up with a credit card and sign up for. So Polaris is really about of having a cloud product that's sort of designed to be really easy to get started with, really self-service that kind of stuff. So kind of providing a really nice getting started experience that does take that maintenance burden and operational burden away from you but is also sort of as easy to get started with as something that's database would be. >> So a more developer friendly than from an onboarding standpoint, classic. >> Exactly. Much more developer friendly is what we're going for with that product. >> So take me through the state of the art data as code in your mind 'cause infrastructure is code, DevOps has been awesome, that's cloud scale, we've seen that. Data as Code is a term we coined but means data's in the developer process. How do you see data being integrated into the workflow for developers in the future? >> Great question. I mean all kinds of ways. Part of the reason that, I kind of alluded to this earlier, building analytical applications, building applications based on data and based on letting people do analysis, how valuable it is and I guess to develop in that context there's kind of two big ways that we sort of see these things getting pushed out. One is developers building apps for other people to use. So think like, I want to build something like Google analytics, I want to build something that clicks my web traffic and then lets the marketing team slice and dice through it and make decisions about how well the marketing's doing. You can build something like that with databases like Druid and products like what we're having in Imply. I guess the other way is things that are actually helping developers do their own job. So kind of like use your own product or use it for yourself. And in this world, you kind of have things like... So going beyond what I think my favorite use case, I'll just talk about one. My favorite use case is so I'm really into performance, I spend the last 10 years of my life working on high performance database so obviously I'm into this kind of stuff. I love when people use our product to help make their own products faster. So this concept of performance monitoring and performance management for applications. 
One thing that I've seen some of our customers do and some of our users do that I really love is when you kind of take that performance data of your own app, as far as it can possibly go take it to the next level. I think the basic level of using performance data is I collect performance data from my application deployed out there in the world and I can just use it for monitoring. I can say, okay my response times are getting high in this region, maybe there's something wrong with that region. One of the very original use cases for Druid was that Netflix doing performance analysis, performance analysis more exciting than monitoring because you're not just understanding that there's a performance, is good or bad in whatever region sort of getting very fine grain. You're saying in this region, on this server rack for these devices, I'm seeing a degradation or I'm seeing a increase. You can see things like Apple just rolled out a new version of iOS and on that new version of iOS, my app is performing worse than the older version. And even though not many devices are on that new version yet I can kind of see that because I have the ability to get really deep in the data and then I can start slicing nice that more. I can say for those new iOS people, is it all iOS devices? Is it just the iPhone? Is it just the iPad? And that kind of stuff is just one example but it's an example that I really like. >> It's kind of like the data about the data was always good to have context, you're like data analytics for data analytics to see how it's working at scale. This is interesting because now you're bringing up the classic finding the needle in the haystack of needles, so to speak where you have so much data out there like edge cases, edge computing, for instance, you have devices sending data off. There's so much data coming in, the scale is a big issue. This is kind of where you guys seem to be a nice fit for, large scale data ingestion, large scaled data management, large scale data insights kind of all rolled in to one. Is that kind of-? >> Yeah, for sure. One of the things that we knew we had to do with Druid was we were building it for the sort of internet age and so we knew it had to scale well. So the original use case for Druid, the very first one that we ended up building for, the reason we build in the first place is because that original use case had massive scale and we struggled finding something, we were literally trying to do what we see people doing now which is we're trying to build an app on a massive data set and we're struggling to do it. And so we knew it had to scale to massive data sets. And so that's a little flavor of kind know how that works is, like I was mentioning earlier this, this realtime system and historical system, the realtime system is scalable, it's scalable out if you're reading from Kafka, we scale out just like any other Kafka consumer. And then the historical system is all based on what we call segments which are these files that has a few million rows per file. And a cluster is really big, might have thousands of servers, millions of segments, but it's a design that is kind of, it's a design that does scale to these multi-trillion road tables. >> It's interesting, you go back when you probably started, you had Twitter, Netflix, Facebook, I mean a handful of companies that were at the scale. Now, the trend is you're on this wave where those hyperscalers and, or these unique huge scale app companies are now mainstream enterprise. 
So as you guys roll out the enterprise version of building analytics and applications, which Druid and Imply, they got to going to get religion on this. And I think it's not hard because it's distributed computing which they're used to. So how is that enterprise transition going because I can imagine people would want it and are just kicking the tires or learning and then trying to put it into action. How are you seeing the adoption of the enterprise piece of it? >> The thing that's driving the interest is for sure doing more and more stuff on the internet because anything that happens on the internet whether it's apps or web based, there's more and more happening there and anything that is connected to the internet, anything that's serving customers on the internet, it's going to generate an absolute mountain of data. And the only question is not if you're going to have that much data, you do if you're doing anything on the internet, the only question is what are you going to do with it? So that's I think what drives the interest, is people want to try to get value out of this. And then what drives the actual adoption is I think, I don't want to necessarily talk about specific folks but within every industry I would say there's people that are leaders, there's organizations that are leaders, teams that are leaders, what drives a lot of interest is seeing someone in your own industry that has adopted new technology and has gotten a lot of value out of it. So a big part of what we do at Imply is that identify those leaders, work with them and then you can talk about how it's helped them in their business. And then also I guess the classic enterprise thing, what they're looking for is a sense of stability, a sense of supportability, a sense of robustness and this is something that comes with maturity. I think that the super high tech companies are comfortable using some open source software that's rolled off the presses a few months ago; he big enterprises are looking for something that has corporate backing, they're looking for something that's been around for a while and I think that Druid technologies like it are breaching that little maturity right now. >> It's interesting that supply chain has come up in the software side. That conversation is a lot now, you're hearing about open source being great, but in the cloud scale, you can get the data in there to identify opportunities and also potentially vulnerabilities is big discussion. Question for you on the cloud native side, how do you see cloud native, cloud scale with services like serverless Lambda, edge merging, it's easier to get into the cloud scale. How do you see the enterprise being hardened out with Druid and Imply? >> I think the cloud stuff is great, we love using it to build all of our own stuff, our product is of course built on other cloud technologies and I think these technologies built on each other, you sort of have like I mentioned earlier, all software is built in layers and cloud architecture is the same thing. What we see ourselves as doing is we're building the next layer of that stack. So we're building the analytics database layer. You saw when people first started doing these in public cloud, the very first two services that came out you can get a virtual machine and you can store some data and you can retrieve that data but there's no real analytics on it, there's just kind of storage and retrieval. 
And then as time goes on higher and higher levels get built out delivering more and more value and then the levels mature as they go up. And so the the bottom of layers are incredibly mature, the top most layers are cutting edge and there's a kind of a maturity gradient between those two. And so what we're doing is we're building out one of those layers. >> Awesome extraction layers, faster performance, great stuff. Final question for you, Gian, what's your vision for the future? How do you Imply and Druid it going? What's it look like five years from now? >> I think that for sure it seems like that there's two big trends that are happening in the world and it's going to sound a little bit self serving for me to say it but I believe what we're doing here says, I'm here 'cause I believe it, I believe in open source and I believe in cloud stuff. That's why I'm really excited that what we're doing is we're building a great cloud product based on a great open source project. I think that's the kind of company that I would want to buy from if I wasn't at this company and I was just building something, I would want to buy a great cloud product that's backed by a great open source project. So I think the kind of the way I see the industry going, the way I see us going and I think would be a great place to end up just kind of as an engineering world, as an industry is a lot of these really great open source projects doing things like what Kubernetes doing containers, we're doing with analytics et cetera. And then really first class really well done cloud versions of each one of them and so you can kind of choose, do you want to get down and dirty with the open source or do you want to choose just kind of have the abstraction of the cloud. >> That's awesome. Cloud scale, cloud flexibility, community getting down and dirty open source, the best of both worlds. Great solution. Goin, thanks for coming on and thanks for sharing here in the Showcase. Thanks for coming on theCUBE. >> Thank you too. >> Okay, this is theCUBE Showcase Season 2, Episode 2. I'm John Furrier, your host. Data as Code is the theme of this episode. Thanks for watching. (upbeat music)
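Merlino describes Druid as a database you query while it blends recent streaming data with indexed historical segments. As a small illustration of what building on that looks like, the sketch below sends a SQL query to Druid's HTTP SQL endpoint. The broker URL, the `web_requests` datasource, and the columns are assumptions made for the example; the `/druid/v2/sql` path follows the documented Apache Druid API rather than anything Imply-specific from the interview.

```python
# Minimal sketch: querying Apache Druid's SQL API over HTTP.
# Assumes `pip install requests`; the endpoint address and the
# `web_requests` datasource are hypothetical.
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # router/broker endpoint

query = """
SELECT
  TIME_FLOOR(__time, 'PT1M') AS minute,
  COUNT(*) AS requests
FROM web_requests
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY 1
ORDER BY 1
"""

resp = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
resp.raise_for_status()

for row in resp.json():
    print(row["minute"], row["requests"])
```

In a deployment like the one described above, the same query would transparently span the realtime tasks reading from Kafka and the older historical segments, which is the "blended view" Merlino refers to.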

Published Date : Apr 26 2022



Jon Dahl, Mux | AWS Startup Showcase S2 E2


 

(upbeat music) >> Welcome, everyone, to theCUBE's presentation of the AWS Startup Showcase. And this, episode two of season two, is called "Data as Code," the ongoing series covering exciting new startups in the AWS ecosystem. I'm John Furrier, your host of theCUBE. Today, we're excited to be joined by Jon Dahl, who is the co-founder and CEO of MUX, a hot new startup building cloud video for developers, video with data. John, great to see you. We did an interview on theCube Conversation that went into big detail on the awesomeness of your company and the trend that you're on. Welcome back. >> Thank you, glad to be here. >> So, video is everywhere, and video first, pivot to video, you hear all these kinds of terms in the industry, but now more than ever, video is everywhere and people are building with it, and it's becoming part of the developer experience in applications. So people have to stand up video into their code fast, and data is code, video is data. So you guys are specializing in this. Take us through that dynamic. >> Yeah, so video clearly is a growing part of how people are building applications. We see a lot of trends of categories that did not involve video in the past making a major move towards video. I think what Peloton did five years ago to the world of fitness, that was not really a big category. Now video fitness is a huge thing. Video in education, video in business settings, video in a lot of places. I think Marc Andreessen famously said, "Software is eating the world" as a pretty, pretty good indicator of what the internet is actually doing to the economy. I think there's a lot of ways in which video right now is eating software. So categories that were not video first are becoming video first. And that's what we help with. >> It's not obvious to, like, most software developers when they think about video; the video industry has its industry shows around video, NAB, others. People know, the video folks know what's going on in video, but when you start to bring it mainstream, it becomes an expectation in the apps. And it's not that easy; it's almost, provisioning video is hard for a developer 'cause you got to know the full, I guess, stack of video. That's like low level and then kind of just basic high level, just play something. So, in between, this is a media stack kind of dynamic. Can you talk about how hard it is to build video for developers? How is it going to become easier? >> Yeah, I mean, I've lived this story for too long, maybe 13 years now, when I first built my first video stack. And, you know, I'll sometimes say, I think it's kind of a miracle every time a video plays on the internet, because the internet is not a medium designed for video. It's been hijacked by video; video is 70% of internet traffic today, in an unreliable, sort of untrusted network space, which is totally different than how television used to work or cable or things like that. So yeah, so video is hard because there's so many problems from top to bottom that need to be solved to make video work. So you have to worry about video compression and encoding, which is a complicated topic in itself. You have to worry about delivering video around the world at scale, delivering it at low cost, at low latency, with good performance. You have to worry about devices and how every device, Android, iOS, web, TVs, every device handles video differently, and so there's a lot of work there. And at the end of the day, these are kind of unofficial standards that everyone's using.
So one of the miracles is like, if you want to watch a video, somehow you have to get like Apple and Google to agree on things, which is not always easy. And so there's just so many layers of complexity that are behind it. I think one way to think about it is, if you want to put an image online, you just put an image online. And if you want to put video online, you build complex software, and that's the exact problem that MUX was started to help solve. >> It's interesting, you guys are almost creating a whole new category around video infrastructure. And as you look at, you mentioned stack, video stack. I'm looking at a market where the notion of a media stack is developing, and you're seeing these verticals having similar dynamics with cloud. And if you go back to the early days of cloud computing, what was the developer experience or entrepreneurial experience, you had to actually do a lot of stuff before you even do anything, provision a server. And this has all kind of been covered in great detail in the glory of Agile and whatnot. It was expensive, and you had to actually engineer before you could even stand up any code. Now with video that same thing's happening. So the developers have two choices: go do a bunch of complex stuff, building their own infrastructure, which is like building a data center, or lean in on MUX and say, "Hey, thank you for doing all those years of experience building out the stacks to take that hard part away," but using APIs that they have. This is a developer focused problem that you guys are solving. >> Yeah, that's right. My last company was a company called Zencoder, that was an API to video encoding. So it was kind of an API to a small part of what MUX does today, just one of those problems. And I think the thing that we got right at Zencoder, that we're doing again here at MUX, was building for developers first. So our number one persona is a software developer. Not necessarily a video expert, just we think any developer should be able to build with video. It shouldn't be like, yeah, you've got to go be a specialist to use this technology, because it should become just part of the internet. Video should just be something that any developer can work with. So yeah, so we build for developers first, which means we spend a lot of time thinking about API design, we spend a lot of time thinking about documentation, transparent pricing, the right features, great support and all those kind of things that tend to be characteristics of good developer companies. >> Tell me about the pipelining of the products. I'm a developer, I work for a company, my boss is putting pressure on me. We need video, we have all this library, it's all stacking up. We hired some people, they left. Where's the video, we've stored it somewhere. I mean, it's a nightmare, right? So I'm like, okay, I'm cloud native, I got an API. I need to get my product to market fast, 'cause that is what Agile developers want. So how do you describe that acceleration for time to market? You mentioned you guys are API first, video first. How do these customers get their product into the market as fast as possible? >> Yeah, well, I mean the first thing we do is we put what we think is probably on average, three to four months of hard engineering work behind a single API call. So if you want to build a video platform, we tell our customers like, "Hey, you can do that." You probably need a team, you probably need video experts on your team, so hire them or train them.
And then it takes several months just to kind of to get video flowing. One API call at MUX gives you on-demand video or live video that works at scale, works around the world with good performance, good reliability, a rich feature set. So maybe just a couple specific examples, we worked with Robin Hood a few years ago to bring video into their newsfeed, which was hugely successful for them. And they went from talking to us for the first time to a big launch in, I think it was three months, but the actual code time there was like really short. I want to say they had like a proof of concept up and running in a couple days, and then the full launch in three months. Another customer of ours, Bandcamp, I think switched from a legacy provider to MUX in two weeks in band. So one of the big advantages of going a little bit higher in the abstraction layer than just building it yourself is that time to market. >> Talk about this notion of video pipeline 'cause I know I've heard people I talk about, "Hey, I just want to get my product out there. I don't want to get stuck in the weeds on video pipeline." What does that mean for folks that aren't understanding the nuances of video? >> Yeah, I mean, it's all the steps that it takes to publish video. So from ingesting the video, if it's live video from making sure that you have secure, reliable ingest of that live feed potentially around the world to the transcoding, which is we talked a little bit about, but it is a, you know, on its own is a massively complicated problem. And doing that, well, doing that well is hard. Part of the reason it's hard is you really have to know where you're publishing too. And you might want to transcode video differently for different devices, for different types of content. You know, the pipeline typically would also include all of the workflow items you want to do with the video. You want to thumbnail a video, you want clip, create clips of the video, maybe you want to restream the video to Facebook or Twitter or a social platform. You want to archive the video, you want it to be available for downloads after an event. If it's just a, if it's a VOD upload, if it's not live in the first place. You have all those things and you might want to do simulated live with the video. You might want to actually record something and then play it back as a live stream. So, the pipeline Ty typically refers to everything from the ingest of the video to the time that the bits are delivered to a device. >> You know, I hear a lot of people talking about video these days, whether it's events, training, just want peer to peer experience, video is powerful, but customers want to own their own platform, right? They want to have the infrastructure as a service. They kind of want platform as a service, this is cloud talk now, but they want to have their own capability to build it out. This allows them to get what they want. And so you see this, like, is it SaaS? Is it platform? People want customization? So kind of the general purpose video solution does it really exist or doesn't? I mean, 'cause this is the question. Can I just buy software and work or is it going to be customized always? How do you see that? Because this becomes a huge discussion point. Is it a SaaS product or someone's going to make a SaaS product? >> Yeah, so I think one of the most important elements of designing any software, but especially when you get into infrastructure is choosing an abstraction level. 
So if you think of computing, you can go all the way down to building a data center, you can go all the way down to getting a colo and racking a server like maybe some of us used to do, who are older than others. And that's one way to run a server. On the other extreme, you have just think of the early days of cloud competing, you had app engine, which was a really fantastic, really incredible product. It was one push deploy of, I think Python code, if I remember correctly, and everything just worked. But right in the middle of those, you had EC2, which was, EC2 is basically an API to a server. And it turns out that that abstraction level, not Colo, not the full app engine kind of platform, but the API to virtual server was the right abstraction level for maybe the last 15 years. Maybe now some of the higher level application platforms are doing really well, maybe the needs will shift. But I think that's a little bit of how we think about video. What developers want is an API to video. They don't want an API to the building blocks of video, an API to transcoding, to video storage, to edge caching. They want an API to video. On the other extreme, they don't want a big application that's a drop in white label video in a box like a Shopify kind of thing. Shopify is great, but developers don't want to build on top of Shopify. In the payments world developers want Stripe. And that abstraction level of the API to the actual thing you're getting tends to be the abstraction level that developers want to build on. And the reason for that is, it's the most productive layer to build on. You get maximum flexibility and also maximum velocity when you have that API directly to a function like video. So, we like to tell our customers like you, you own your video when you build on top of MUX, you have full control over everything, how it's stored, when it's stored, where it goes, how it's published, we handle all of the hard technology and we give our customers all of the flexibility in terms of designing their products. >> I want to get back some use case, but you brought that up I might as well just jump to my next point. I'd like you to come back and circle back on some references 'cause I know you have some. You said building on infrastructure that you own, this is a fundamental cloud concept. You mentioned API to a server for the nerds out there that know that that's cool, but the people who aren't super nerdy, that means you're basically got an interface into a server behind the scenes. You're doing the same for video. So, that is a big thing around building services. So what wide range of services can we expect beyond MUX? If I'm going to have an API to video, what could I do possibly? >> What sort of experience could you build? >> Yes, I got a team of developers saying I'm all in API to video, I don't want to do all that transit got straight there, I want to build experiences, video experiences on my app. >> Yeah, I mean, I think, one way to think about it is that, what's the range of key use cases that people do with video? We tend to think about six at MUX, one is kind of the places where the content is, the prop. So one of the things that use video is you can create great video. Think of online courses or fitness or entertainment or news or things like that. That's kind of the first thing everyone thinks of, when you think video, you think Netflix, and that's great. But we see a lot of really interesting uses of video in the world of social media. 
So customers of ours like Visco, which is an incredible photo sharing application, really for photographers who really care about the craft. And they were able to bring video in and bring that same kind of Visco experience to video using MUX. We think about B2B tools, videos. When you think about it, all video is, is a high bandwidth way of communicating. And so customers are as like HubSpot use video for the marketing platform, for business collaboration, you'll see a lot of growth of video in terms of helping businesses engage their customers or engage with their employees. We see live events obviously have been a massive category over the last few years. You know, we were all forced into a world where we had to do live events two years ago, but I think now we're reemerging into a world where the online part of a conference will be just as important as the in-person component of a conference. So that's another big use case we see. >> Well, full disclosure, if you're watching this live right now, it's being powered by MUX. So shout out, we use MUX on theCUBE platform that you're experiencing in this. Actually in real time, 'cause this is one application, there's many more. So video as code, is data as code is the theme, that's going to bring up the data ops. Video also is code because (laughs) it's just like you said, it's just communicating, but it gets converted to data. So data ops, video ops could be its own new category. What's your reaction to that? >> Yeah, I mean, I think, I have a couple thoughts on that. The first thought is, video is a way that, because the way that companies interact with customers or users, it's really important to have good monitoring and analytics of your video. And so the first product we ever built was actually a product called MUX video, sorry, MUX data, which is the best way to monitor a video platform at scale. So we work with a lot of the big broadcasters, we work with like CBS and Fox Sports and Discovery. We work with big tech companies like Reddit and Vimeo to help them monitor their video. And you just get a huge amount of insight when you look at robust analytics about video delivery that you can use to optimize performance, to make sure that streaming works well globally, especially in hard to reach places or on every device. That's we actually build a MUX data platform first because when we started MUX, we spent time with some of our friends at companies like YouTube and Netflix, and got to know how they use data to power their video platforms. And they do really sophisticated things with data to ensure that their streams well, and we wanted to build the product that would help everyone else do that. So, that's one use. I think the other obvious use is just really understanding what people are doing with their video, who's watching what, what's engaging, those kind of things. >> Yeah, data is definitely there. You guys mentioned some great brands that are working with you guys, and they're doing it because of the developer experience. And I'd like you to explain, if you don't mind, in your words, why is the MUX developer experience so good? What are some of the results you're seeing from your customers? What are they saying to you? Obviously when you win, you get good feedback. What are some of the things that they're saying and what specific develop experiences do they like the best? >> Yeah, I mean, I think that the most gratifying thing about being a startup founder is when your customers like what you're doing. 
And so we get a lot of this, but it's always, we always pay attention to what customers say. But yeah, people, the number one thing developers say when they think about MUX is that the developer experience is great. I think when they say that, what they mean is two things, first is it's easy to work with, which helps them move faster, software velocity is so important. Every company in the world is investing and wants to move quickly and to build quickly. And so if you can help a team speed up, that's massively valuable. The second thing I think when people like our developer experience is, you know, in a lot of ways that think that we get out of the way and we let them do what they want to do. So well, designed APIs are a key part of that, coming back to abstraction, making sure that you're not forcing customers into decisions that they actually want to make themselves. Like, if our video player only had one design, that that would not be, that would not work for most developers, 'cause developers want to bring their own design and style and workflow and feel to their video. And so, yeah, so I think the way we do that is just think comprehensively about how APIs are designed, think about the workflows that users are trying to accomplish with video, and make sure that we have the right APIs, make sure they're the right information, we have the right webhooks, we have the right SDKs, all of those things in place so that they can build what they want. >> We were just having a conversation on theCUBE, Dave Vellante and I, and our team, and I'd love to get you a reaction to this. And it's more and more, a riff real quick. We're seeing a trend where video as code, data as code, media stack, where you're starting to see the emergence of the media developer, where the application of media looks a lot like kind of software developer, where the app, media as an app. It could be a chat, it could be a peer to peer video, it could be part of an event platform, but with all the recent advances, in UX designers, coders, the front end looks like an emergence of these creators that are essentially media developers for all intent and purpose, they're coding media. What's your reaction to that? How do you see that evolving? >> I think the. >> Or do you agree with it? >> It's okay. >> Yeah, yeah. >> Well, I think a couple things. I think one thing, I think this goes along through saying, but maybe it's disagreement, is that we don't think you should have to be an expert at video or at media to create and produce or create and publish good video, good audio, good images, those kind of things. And so, you know, I think if you look at software overall, I think of 10 years ago, the kind of DevOps movement, where there was kind of a movement away from specialization in software where the same software developer could build and deploy the same software developer maybe could do front end and back end. And we want to bring that to video as well. So you don't have to be a specialist to do it. On the other hand, I do think that investments and tooling, all the way from video creation, which is not our world, but there's a lot of amazing companies out there that are making it easier to produce video, to shoot video, to edit, a lot of interesting innovations there all the way to what we do, which is helping people stream and publish video and video experiences. You know, I think another way about it is, that tool set and companies doing that let anyone be a media developer, which I think is important. 
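As a rough illustration of that "API to video" idea, the sketch below makes the single call that hands Mux a source file and prints a streamable playback URL. The request and response fields follow Mux's published Video API, but the token environment variables and the input URL are placeholders, and the webhook event name in the comment is an assumption to verify against the current documentation.

    # One call to turn a source file into streamable video.
    # MUX_TOKEN_ID / MUX_TOKEN_SECRET and the input URL are placeholders.
    import os
    import requests

    auth = (os.environ["MUX_TOKEN_ID"], os.environ["MUX_TOKEN_SECRET"])

    resp = requests.post(
        "https://api.mux.com/video/v1/assets",
        json={
            "input": "https://example.com/source-video.mp4",  # placeholder source file
            "playback_policy": ["public"],
        },
        auth=auth,
        timeout=30,
    )
    resp.raise_for_status()
    asset = resp.json()["data"]
    playback_id = asset["playback_ids"][0]["id"]

    # HLS manifest a player can load; in practice you would wait for a webhook
    # (for example a "video.asset.ready" event) rather than assume it is ready.
    print(f"https://stream.mux.com/{playback_id}.m3u8")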
>> It's like DevOps turning into low-code, no-code, eventually it's just composability almost like just, you know, "Hey Siri, give me some video." That kind of thing. Final question for you why I got you here, at the end of the day, the decision between a lot of people's build versus buy, "I got to get a developer. Why not just roll my own?" You mentioned data center, "I want to build a data center." So why MUX versus do it yourself? >> Yeah, I mean, part of the reason we started this company is we have a pretty, pretty strong opinion on this. When you think about it, when we started MUX five years ago, six years ago, if you were a developer and you wanted to accept credit cards, if you wanted to bring payment processing into your application, you didn't go build a payment gateway. You just probably used Stripe. And if you wanted to send text messages, you didn't build your own SMS gateway, you probably used Twilio. But if you were a developer and you wanted to stream video, you built your own video gateway, you built your own video application, which was really complex. Like we talked about, you know, probably three, four months of work to get something basic up and running, probably not live video that's probably only on demand video at that point. And you get no benefit by doing it yourself. You're no better than anyone else because you rolled your own video stack. What you get is risk that you might not do a good job, maybe you do worse than your competitors, and you also get distraction where you've just taken, you take 10 engineers and 10 sprints and you apply it to a problem that doesn't actually really give you differentiated value to your users. So we started MUX so that people would not have to do that. It's fine if you want to build your own video platform, once you get to a certain scale, if you can afford a dozen engineers for a VOD platform and you have some really massively differentiated use case, you know, maybe, live is, I don't know, I don't have the rule of thumb, live videos maybe five times harder than on demand video to work with. But you know, in general, like there's such a shortage of software engineers today and software engineers have, frankly, are in such high demand. Like you see what happens in the marketplace and the hiring markets, how competitive it is. You need to use your software team where they're maximally effective, and where they're maximally effective is building differentiation into your products for your customers. And video is just not that, like very few companies actually differentiate on their video technology. So we want to be that team for everyone else. We're 200 people building the absolute best video infrastructure as APIs for developers and making that available to everyone else. >> John, great to have you on with the showcase, love the company, love what you guys do. Video as code, data as code, great stuff. Final plug for the company, for the developers out there and prospects watching for MUX, why should they go to MUX? What are you guys up to? What's the big benefit? >> I mean, first, just check us out. Try try our APIs, read our docs, talk to our support team. We put a lot of work into making our platform the best, you know, as you dig deeper, I think you'd be looking at the performance around, the global performance of what we do, looking at our analytics stack and the insight you get into video streaming. 
We have an emerging open source video player that's really exciting, and I think is going to be the direction that open source players go for the next decade. And then, you know, we're a quickly growing team. We're 60 people at the beginning of last year. You know, we're one 50 at the beginning of this year, and we're going to a add, we're going to grow really quickly again this year. And this whole team is dedicated to building the best video structure for developers. >> Great job, Jon. Thank you so much for spending the time sharing the story of MUX here on the show, Amazon Startup Showcase season two, episode two, thanks so much. >> Thank you, John. >> Okay, I'm John Furrier, your host of theCUBE. This is season two, episode two, the ongoing series cover the most exciting startups from the AWS Cloud Ecosystem. Talking data analytics here, video cloud, video as a service, video infrastructure, video APIs, hottest thing going on right now, and you're watching it live here on theCUBE. Thanks for watching. (upbeat music)

Published Date : Mar 30 2022



Breaking Analysis: The Improbable Rise of Kubernetes


 

>> From theCUBE studios in Palo Alto, in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for cloud native application development, and the need for newer and simpler experiences, beyond just easily spinning up compute as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google, that needed a way to change the game on Amazon and the cloud. Now, add in the effort of Red Hat, which needed a new path beyond Enterprise Linux, and oh, by the way, it was just about to commit to a path of a Kubernetes alternative for OpenShift, and figure out a governance structure to herd all the cats in the ecosystem, and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we tapped the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing, and because we're talking about Kubernetes and developers today, the hoodie is on. And there's a new two part documentary that I just referenced; it's out and it was produced by Honeypot on Kubernetes, and part one and part two tell the story of how Kubernetes came to prominence and many of the players that made it happen. Now, a lot of these players, including Tim Hockin, Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant, Solomon Hykes, Jerry Chen and others came on theCUBE during the formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then and they unpacked what was happening at the time. We'll share the commentary from the guests that they interviewed and try to add some context. Now let's start with the concept of developer defined infrastructure, DDI. Jerry Chen was at VMware and he could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock. Docker was his first investment. And he saw the future this way. >> What happens is when you define infrastructure software you can program it. You make it portable. And that's the beauty of this cloud wave, what I call DDIs. Now, to your point, every piece of infrastructure from storage, networking, to compute has an API, right? And in AWS there was an early trend where S3, EBS, EC2 had APIs. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Non-monolithic building blocks, every little building block has its own API, and just like Docker really is the API for this unit of the cloud, it enables developers to define how they want to build their applications, how to network them, you know, as Wills talked about, and how you want to secure them and how you want to store them. And so the beauty of this generation is now developers are determining how apps are built, not just at the, you know, end user, you know, iPhone app layer, but the data layer, the storage layer, the networking layer. So every single level is being disrupted by this concept of a DDI, and how you build, use and actually purchase IT has changed.
And you're seeing the incumbent vendors like Oracle, VMware, Microsoft try to react, but you're seeing a whole new generation of startups. >> Now what Jerry was explaining is that this new abstraction layer was being built, and here's some ETR data that quantifies that and shows where we are today. The chart shows net score, or spending momentum, on the vertical axis and market share, which represents the pervasiveness in the survey set. So as Jerry and the innovators who created Docker saw, the cloud was becoming prominent, and you can see it still has spending velocity that's elevated above that 40% red line, which is kind of a magic mark of momentum. And of course, it's very prominent on the X axis as well. And you see the low level infrastructure, virtualization, and that even floats above servers and storage and networking, right? Back in 2013, the conversation with VMware (and by the way, I remember having this conversation deeply at the time with Chad Sakac) was, we're going to make this low level infrastructure invisible, and we intend to make virtualization invisible, i.e. simplified. And so, you see above that the two arrows there related to containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware. And you can see the momentum that they have right there with the cloud and AI and RPA. So you had these forces that Jerry described that were taking shape, and this picture kind of summarizes how they came together to form Kubernetes. In the upper left, of course, you see AWS, and we inserted a picture from a post we did right after the first re:Invent in 2012; it was obvious to us at the time that the cloud gorilla was AWS and it had all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it, back in 2014 in theCUBE with John Furrier. >> Container is a unit of deployment, right? It's the format in which you package your application, all the files, all the executables, libraries, all the dependencies, in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> Docker at the time was a 30-person company and it had just changed its name from dotCloud. And back to the diagram, you have Google with a red question mark. So why would you need more than what Docker had created? Craig McLuckie, who was a product manager at Google back then, explains the need for yet another abstraction. >> We created the strong separation between infrastructure operations and application operations. And so, Docker has created a portable framework to take, basically, a binary and run it anywhere, which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so, the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment. You could run on another major cloud provider. >> Now Google had this huge cloud infrastructure but no commercial cloud business to compete with AWS. At least not one that was taken seriously at the time. So it needed a way to change the game.
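A small sketch of the "unit of deployment" idea Hykes describes, using the Docker SDK for Python: the packaged image, not the machine, is the thing you move around and run repeatably. It assumes a local Docker daemon and the docker package installed; the image name, port mapping, and container name are arbitrary examples.

    # Run an already-packaged application image the same way on any host that
    # has a Docker daemon. Assumes `pip install docker` and a running daemon.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "nginx:1.25",              # example image; any packaged app works the same way
        detach=True,
        ports={"80/tcp": 8080},    # host port 8080 -> container port 80
        name="example-web",
    )
    print(container.short_id, container.image.tags)

    # ...and tear it down when finished.
    container.stop()
    container.remove()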
And it had this thing called Google Borg, which is a container management system and scheduler and Google looked at what was happening with virtualization and said, you know, we obviously could do better Joe Beda, who was with Google at the time explains their mindset going back to the beginning. >> Craig and I started up Google compute engine VM as a service. And the odd thing to recognize is that, nobody who had been in Google for a long time thought that there was anything to this VM stuff, right? Cause Google had been on containers for so long. That was their mindset board was the way that stuff was actually deployed. So, you know, my boss at the time, who's now at Cloudera booted up a VM for the first time, and anybody in the outside world be like, Hey, that's really cool. And his response was like, well now what? Right. You're sitting at a prompt. Like that's not super interesting. How do I run my app? Right. Which is, that's what everybody's been struggling with, with cloud is not how do I get a VM up? How do I actually run my code? >> Okay. So Google never really did virtualization. They were looking at the market and said, okay what can we do to make Google relevant in cloud. Here's Eric Brewer from Google. Talking on theCUBE about Google's thought process at the time. >> One interest things about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998 which is the same year that VMware started was kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So kind of when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally which was called Borg a big scheduler. And we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper sticker version of the story there. What he reveals in the documentary that I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on we realized we know what this is going to we know once you embrace the Docker mindset that you very quickly need something to manage all of your Docker nodes, once you get beyond two or three of them, and we know how to build that, right? We got a ton of experience here. Like we went to our leadership and said, you know, please this is going to happen with us or without us. And I think it, the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem because it gave Google a way to neutralize AWS's advantage because with containers you could develop on AWS for example, and then run the application anywhere like Google's cloud. So it not only gave developers a path off of AWS. If Google could develop a strong service on GCP they could monetize that play. 
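Hockin's point, that once you get beyond two or three nodes you need something to manage them, is essentially what the Kubernetes API provides. The sketch below uses the official Python client just to list the nodes and running pods across a cluster; it assumes a reachable cluster and a local kubeconfig, and is an illustration rather than anything from the episode.

    # Ask the cluster for its view of the world: every node, and every pod on them.
    # Assumes `pip install kubernetes` and a valid kubeconfig (e.g. ~/.kube/config).
    from kubernetes import client, config

    config.load_kube_config()   # use config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        print("node:", node.metadata.name)

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print("pod:", pod.metadata.namespace, pod.metadata.name, pod.status.phase)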
Now, focus your attention back to the diagram which shows this smiling, Alex Polvi from Core OS which was acquired by Red Hat in 2018. And he saw the need to bring Linux into the cloud. I mean, after all Linux was powering the internet it was the OS for enterprise apps. And he saw the need to extend its path into the cloud. Now here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux. Like yes, there is still need for Linux and Windows and other OSs out there. But by and large on production, web infrastructure it's all Linux now. And you were able to get onto one stack. And how were you able to do that? It was, it was by having a truly open consistent API and a commitment into not breaking APIs and, so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux buy in large for production infrastructure, what is being used. And I think you'll see a similar phenomenon happen for this next level up cause we're treating the whole data center as a computer instead of trading one in visual instance is just the computer. And that's the stuff that Kubernetes to me and someone is doing. And I think there will be one that shakes out over time and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform. And you heard him, they made the right bet. It would be Kubernetes. Now Red Hat, Red Hat is been around since 1993. So it has a lot of on-prem. So it needed a future path to the cloud. So they rang up Google and said, hey. What do you guys have going on in this space? So Google, was kind of non-committal, but it did expose that they were thinking about doing something that was you know, pre Kubernetes. It was before it was called Kubernetes. But hey, we have this thing and we're thinking about open sourcing it, but Google's internal debates, and you know, some of the arm twisting from the engine engineers, it was taking too long. So Red Hat said, well, screw it. We got to move forward with OpenShift. So we'll do what Apple and Airbnb and Heroku are doing and we'll build on an alternative. And so they were ready to go with Mesos which was very much more sophisticated than Kubernetes at the time and much more mature, but then Google the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, he was an architect. And he leaned in right away. He was one of the first outside committers outside of Google. But you still led these competing forces in the market. And internally there were debates. Do we go with simplicity or do we go with system scale? And Hen Goldberg from Google explains why they focus first on simplicity in getting that right. >> We had to defend of why we are only supporting 100 nodes in the first release of Kubernetes. And they explained that they know how to build for scale. They've done that. They know how to do it, but realistically most of users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with say Mesos or Docker swarm, which were far more baked they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Now Google promised to open source Kubernetes but when it started to open up to contributors outside of Google, the code was still controlled by Google and developers had to sign Google paper that said Google could still do whatever it wanted. 
It could sub license, et cetera. So Google had to pass the Baton to an independent entity and that's how CNCF was started. Kubernetes was its first project. And let's listen to Chris Aniszczyk of the CNCF explain >> CNCF is all about providing a neutral home for cloud native technology. And, you know, it's been about almost two years since our first board meeting. And the idea was, you know there's a certain set of technology out there, you know that are essentially microservice based that like live in containers that are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native right. And CNCF was seated with Kubernetes as its first project. And you know, as, as we've seen over the last couple years Kubernetes has grown, you know, quite well they have a large community a diverse con you know, contributor base and have done, you know, kind of extremely well. They're one of actually the fastest, you know highest velocity, open source projects out there, maybe. >> Okay. So this is how we got to where we are today. This ETR data shows container orchestration offerings. It's the same X Y graph that we showed earlier. And you can see where Kubernetes lands not we're standing that Kubernetes not a company but respondents, you know, they doing Kubernetes. They maybe don't know, you know, whose platform and it's hard with the ETR taxon economy as a fuzzy and survey data because Kubernetes is increasingly becoming embedded into cloud platforms. And IT pros, they may not even know which one specifically. And so the reason we've linked these two platforms Kubernetes and Red Hat OpenShift is because OpenShift right now is a dominant revenue player in the space and is increasingly popular PaaS layer. Yeah. You could download Kubernetes and do what you want with it. But if you're really building enterprise apps you're going to need support. And that's where OpenShift comes in. And there's not much data on this but we did find this chart from AMDA which show was the container software market, whatever that really is. And Red Hat has got 50% of it. This is revenue. And, you know, we know the muscle of IBM is behind OpenShift. So there's really not hard to believe. Now we've got some other data points that show how Kubernetes is becoming less visible and more embedded under of the hood. If you will, as this chart shows this is data from CNCF's annual survey they had 1800 respondents here, and the data showed that 79% of respondents use certified Kubernetes hosted platforms. Amazon elastic container service for Kubernetes was the most prominent 39% followed by Azure Kubernetes service at 23% in Azure AKS engine at 17%. With Google's GKE, Google Kubernetes engine behind those three. Now. You have to ask, okay, Google. Google's management Initially they had concerns. You know, why are we open sourcing such a key technology? And the premise was, it would level the playing field. And for sure it has, but you have to ask has it driven the monetization Google was after? And I would've to say no, it probably didn't. But think about where Google would've been. If it hadn't open source Kubernetes how relevant would it be in the cloud discussion. Despite its distant third position behind AWS and Microsoft or even fourth, if you include Alibaba without Kubernetes Google probably would be much less prominent or possibly even irrelevant in cloud, enterprise cloud. Okay. 
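To ground the point about certified hosted platforms, here is a minimal sketch that asks AWS which managed EKS clusters an account is running, using boto3. It assumes credentials are already configured, and the region is an arbitrary example.

    # List managed Kubernetes (EKS) clusters in one region -- Kubernetes embedded
    # in the cloud platform rather than self-managed.
    import boto3

    eks = boto3.client("eks", region_name="us-west-2")

    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        print(name, cluster["version"], cluster["status"], cluster["endpoint"])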
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about, you know, where we're headed. So look, no shocker Kubernetes for all its improbable beginning has gone mainstream in the past year or so. We're seeing much more maturity and support for state full workloads and big ecosystem support with respect to better security and continued simplification. But you know, it's still pretty complex. It's getting better, but it's not VMware level of maturity. For example, of course. Now adoption has always been strong for Kubernetes, for cloud native companies who start with containers on day one, but we're seeing many more. IT organizations adopting Kubernetes as it matures. It's interesting, you know, Docker set out to be the system of the cloud and Kubernetes has really kind of become that. Docker desktop is where Docker's action really is. That's where Docker is thriving. It sold off Docker swarm to Mirantis has made some tweaks. Docker has made some tweaks to its licensing model to be able to continue to evolve its its business. To hear more about that at DockerCon. And as we said, years ago we expected Kubernetes to become less visible Stu Miniman and I talked about this in one of our predictions post and really become more embedded into other platforms. And that's exactly what's happening here but it's still complicated. Remember, remember the... Go back to the early and mid cycle of VMware understanding things like application performance you needed folks in lab coats to really remediate problems and dig in and peel the onion and scale the system you know, and in some ways you're seeing that dynamic repeated with Kubernetes, security performance scale recovery, when something goes wrong all are made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes we would expect further simplification and you're going to see more abstractions. We live in this world of almost perpetual abstractions. Now, as Kubernetes improves support from multi cluster it will be begin to treat those clusters as a unified group. So kind of abstracting multiple clusters and treating them as, as one to be managed together. And this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency and then you're going to have to keep pace with security as you expand the, the threat area. And then of course recovery what happens when something goes wrong, more complexity, the harder it is to recover and that's going to require new services to share resources across clusters. So look for that. You also should expect more automation. It's going to be driven by the host cloud providers as Kubernetes supports more state full applications and begins to extend its cluster management. Cloud providers will inject as much automation as possible into the system. Now and finally, as these capabilities mature we would expect to see better support for data intensive workloads like, AI and Machine learning and inference. Schedule with these workloads becomes harder because they're so resource intensive and performance management becomes more complex. So that's going to have to evolve. 
I mean, frankly, many of the things that Kubernetes team way back when, you know they back burn it early on, for example, you saw in Docker swarm or Mesos they're going to start to enter the scene now with Kubernetes as they start to sort of prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes, you know this isn't it right with serverless and IOT in the edge and new data, heavy workloads there's something that's going to disrupt Kubernetes. So in that, by the way, in that CNCF survey nearly 40% of respondents were using serverless and that's going to keep growing. So how is that going to change the development model? You know, Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues, Stephanie Chan who helped research this week's topics and Alex Myerson on the production team, who also manages the breaking analysis podcast, Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember these episodes, they're all available as podcasts wherever you listen, just search breaking analysis podcast. Don't forget to check out ETR website @etr.ai. We'll also publish. We publish a full report every week on wikibon.com and Silicon angle.com. You can get in touch with me, email me directly david.villane@Siliconangle.com or DM me at D Vollante. You can comment on our LinkedIn post. This is Dave Vollante for theCUBE insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : Feb 12 2022



Tom Miller & Ankur Jain, Merkle | AWS re:Invent 2021


 

>>Okay, we're back at AWS re:Invent. You're watching theCUBE's continuous coverage. This is Day four. I think it's the first time at re:Invent we've done four days. This is our ninth year covering re:Invent. Tom Miller is here. He is the senior vice president of Alliances. And he's joined by Ankur Jain, who's the global cloud practice lead at Merkle. Guys, good to see you. Thanks for coming on. Thank you. Tom, tell us about Merkle, for those who might not be familiar with you. >>So Merkle is a customer experience management company that is, um, under the Dentsu umbrella. Dentsu, who is a global media agency. We represent one of the pillars, which is global customer experience management. And they also have media and creative. And what Merkle does is provide that technology to help bring that creative and media together. They're a tech company. Yes. >>Okay, so there's some big, big tailwinds, changes, trends going on in the market. Obviously the pandemic. You know, the forced march to digital. Uh, there's regulation. What are some of the big waves that you guys are seeing that you're trying to ride? >>So what we're seeing is, uh, we've got, uh, as a start, we've got a lot of existing databases with clients that are on-prem that we manage today within a SQL environment or so forth. And they need to move that to a cloud environment to be more flexible, more agile, provide them with more data to be able to follow that customer experience that they want with their clients, and they're all realising they need to be in a digital environment. And so that's a big push for us, working with AWS and helping move our clients into those cloud environments. >>And you're relatively new to the AWS world, right? Maybe you can talk >>about that, Ankur. Actually, as a partner we may be new, but Merkle works with AWS, has been working with AWS for over five years as a customer. So what we did was, last year we formalised the relationship with AWS to be, uh, an advanced partner now. So we were part of the restock programme, basically, which is a pool of very select partners. And Merkle comes in with the specialisation of marketing. So as Tom said, you know, we're part of, uh, the Dentsu umbrella. Our core focus is on customer experience transformation, and how we do that customer experience transformation is through digital transformation, data transformation. And that's where we see AWS being a very good partner to us to modernise the solutions that Merkle can take to the market. >>So your on-prem databases, there's probably a lot of diversity, a lot of technical debt, and then the cloud, more agility, infinite resources. Do you have a tech stack? Are you more of an integrator? Right tool for the right job? Maybe you could describe >>your... I can take that, what Tom just described. So let me give you some perspective on what these databases are. These databases are essentially Merkle helping big brands, 1400 Fortune 500 brands, to organise their marketing ecosystem, especially the martech ecosystem. So these databases, they house customer touchpoints, customer data from disparate sources, and they basically integrate that data in one central place and then bolt on analytics, data science, artificial intelligence, machine learning on top of it, helping them with those email campaigns or direct mail campaigns, social campaigns. So that's what these databases are all about, and these databases currently sit on-prem in Merkle's own data centre.
And we have a huge opportunity to kind of take those databases and modernise them. Give all these ai ml type of capabilities advanced analytic capabilities to our customers by using AWS is the platform to kind of migrate. And you do that as a service. We do that as a service. >>Strategically, you're sort of transforming your business to help your customers transform their business right? Take away. It's it's classic. I mean, you really it's happening. This theme of, you know a W started with taking away the undifferentiated heavy lifting for infrastructure. Now you're seeing NASDAQ. Goldman Sachs. You guys in the media world essentially building your own clouds, right? That's the strategy. Yes, super clouds. We call >>them Super Cloud. Yeah, it's about helping our clients understand What is it they're trying to accomplish? And for the most part, they're trying to understand the customer journey where the customer is, how they're driving that experience with them and understanding that experience through the journey and doing that in the cloud makes it tremendously easier and more economical form. >>I was listening to the, uh, snowflake earnings call from last night and they were talking about, you know, a couple of big verticals, one being media and all. I keep talking about direct direct to consumer, right? You're hearing that a lot of media companies want to interact and build community directly. They don't want to necessarily. I mean, you don't want to go through a third party anymore if you don't have to, Technology is enabling that is that kind of the play here? >>Yes, Director Consumer is a huge player. Companies which were traditionally brick and mortar based or relied on a supply chain of dealers and distributors are now basically transforming themselves to be direct to consumer. They want to sell directly to the consumer. Personalisation comes becomes a big theme, especially indeed to see type of environment, because now those customers are expecting brands to know what's there like. What's their dislike? Which products which services are they interested in? So that's that's all kind of advanced analytics machine learning powered solutions. These are big data problems that all these brands are kind of trying to solve. That's where Merkel is partnering with AWS to bring all those technologies and and build those next generation solutions for access. So what kind >>of initiatives are you working >>on? So there are, like, 34 areas that we are working very closely with AWS number one. I would say Think about our marketers friend, you know, and they have a transformation like direct to consumer on the channel e commerce, these types of capabilities in mind. But they don't know where to start. What tools? What technologies will be part of that ecosystem. That's where Merkel provides consulting services to to give them a road map, give them recommendations on how to structure these big, large strategic initiatives. That's number one we are doing in partnership with AWS to reach out to our joint customers and help them transform those ecosystems. Number two as Tom mentioned migrations, helping chief data officers, chief technology officers, chief marketing officers modernise their environment by migrating them to cloud number three. Merkel has a solution called mercury, which is essentially all about customer identity. How do we identify a customer across multiple channels? 
We are Modernising all that solution of making that available on AWS marketplace for customers to actually easily use that solution. And number four, I would say, is helping them set up data foundation. That's through intelligent marketing Data Lake leveraging AWS technologies like blue, red shift and and actually modernise their data platforms. And number four is more around clean rooms, which is bring on your first party data. Join it with Amazon data to see how those customers are behaving when they are making a purchase on amazon dot com, which gives insights to these brands to reshape their marketing strategy to those customers. So those are like four or five focus areas. So I was >>gonna ask you about the data and the data strategy like, who owns the data? You're kind of alchemists that your clients have first party data and you might recommend bringing in other data sources. And you're sort of creating this new cocktail. Who owns the data? >>Well, ultimately, client also data because that that's their customers' data. Uh, to your point on, we helped them enrich that data by bringing in third party data, which is what we call is. So Merkel has a service called data source, which is essentially a collection of data that we acquire about customers. Their likes, their dislikes, their buying power, their interests so we monetise all that data. And the idea is to take those data assets and make them available on AWS data exchange so that it becomes very easy for brands to use their first party data. Take this third party data from Merkel and then, uh, segment their customers much more intelligently. >>And the CMO is your sort of ideal customer profile. >>Yeah, CMO is our main customer profile and we'll work with the chief data officer Will work with the chief technology officer. We kind of we bridge both sides. We can go technology and marketing and bring them both together. So you have a CMO who's trying to solve for some type of issue. And you have a chief technology officer who wants to improve their infrastructure. And we know how to bring them together into a conversation and help both parties get both get what they want. >>And I suppose the chief digital officer fits in there too. Yeah, he fits in their CDOs. Chief Digital officer CMO. Sometimes they're all they're one and the same. Other times they're mixed. I've seen see IOS and and CDOs together. Sure, you sort of. It's all data. It's all >>day. >>Yeah, some of the roles that come into play, as as Tom mentioned. And you mentioned C I o c T. O s chief information officer, chief technology officer, chief data officer, more from the side. And then we have the CMOS chief digital officers from the marketing side. So the secret sauce that Merkel brings to the table is that we know the language, what I t speaks and what business speaks. So when we talk about the business initiatives like direct to consumer Omni Channel E commerce, those are more business driven initiatives. That's where Merkel comes in to kind of help them with our expertise over the last 30 years on on how to run these strategic initiatives. And then at the same time, how do we translate translate those strategic initiatives into it transformation because it does require a lot of idea transformation to happen underneath. That's where AWS also helps us. So we kind of span across both sides of the horizon. >>So you got data. You've got tools, you've got software. You've got expertise that now you're making that available as a as a service. That's right. 
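To make the enrichment idea above concrete, here is a minimal, hypothetical sketch of the kind of join a brand might run once its own customer records and a licensed third-party audience file (for example, one obtained through AWS Data Exchange) sit in the same environment. The identifiers, column names, and segment rule are illustrative assumptions, not Merkle's actual mercury or DataSource schema.

```python
# Illustrative only: enrich first-party customer data with licensed third-party
# attributes, then derive a simple campaign segment. All field names are assumed.
import pandas as pd

# First-party data the brand already owns (CRM / e-commerce exports).
first_party = pd.DataFrame({
    "id_hash": ["a1f9", "b2e7", "c3d4"],        # hashed identifier used for matching
    "lifetime_value": [1200.0, 85.0, 430.0],
    "days_since_purchase": [12, 310, 45],
})

# Licensed third-party attributes keyed on the same hashed identifier.
third_party = pd.DataFrame({
    "id_hash": ["a1f9", "c3d4"],
    "interest_outdoors": [True, False],
    "est_buying_power": ["high", "medium"],
})

# Join the two sources and build a target segment for the next campaign.
enriched = first_party.merge(third_party, on="id_hash", how="left")
segment = enriched[(enriched["lifetime_value"] > 400) &
                   (enriched["interest_outdoors"] == True)]
print(segment[["id_hash", "lifetime_value", "est_buying_power"]])
```

The same pattern carries over to the clean-room scenario mentioned above, except that the join runs inside a controlled environment and only aggregated results leave it.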
How far are you into that? journey of satisfying your business. >>Well, the cloud journey started almost, I would say, 5 to 7 years ago at Merkel, >>where you started, where you began leveraging the cloud. That's right. And then the light bulb went off >>the cloud again. We use clouds in multiple aspects, from general computing perspective, leveraging fully managed services that AWS offers. So that's one aspect, which is to bring in data from disparate sources, house it, analyse it and and derive intelligence. The second piece on the cloud side is, uh, SAS, offering software as a service offerings like Adobe Salesforce and other CDP platforms. So Merkel covers a huge spectrum. When it comes to cloud and you got >>a combination, you have a consulting business and also >>so Merkel has multiple service lines. Consulting business is one of them where we can help them on how to approach these transformational initiatives and give them blueprints and roadmaps and strategy. Then we can also help them understand what the customer strategy should be, so that they can market very intelligently to their end customers. Then we have a technology business, which is all about leveraging cloud and advanced analytics. Then we have data business that data assets that I was talking about, that we monetise. We have promotions and loyalty. We have media, so we recover multiple services portfolio. >>How do you mentioned analytics a couple times? How do you tie that? Back to the to the to the sales function. I would imagine your your clients are increasingly asking for analytics so they can manage their dashboards and and make sure they're above the line. How is that evolving? Yes, >>So that's a very important line because, you know, data is data, right? You bring in the data, but what you do with the data, how you know, how you ask questions and how you derive intelligence from it? Because that's the actionable part. So a few areas I'll give you one or two examples on how those analytics kind of come into picture. Let's imagine a brand which is trying to sell a particular product or a particular service to the to a set of customers Now who those set of customers are, You know where they should target this, who their target customers are, what the demographics are that's all done through and analytics and what I gave you is a very simple example. There are so many advanced examples, you know, that come into artificial intelligence machine learning those type of aspects as well. So analytics definitely play a huge role on how these brands need to sell and personalised the offerings that they're going to offer to. The customers >>used to be really pure art, right? It's really >>not anymore. It's all data driven. Moneyball. Moneyball? >>Yes, exactly. Exactly. Maybe still a little bit of hard in there, right? It doesn't hurt. It doesn't hurt to have a little creative flair still, but you've got to go with the data. >>That's where the expertise comes in, right? That's where the experience comes in and how you take that science and combine it with the art to present it to the end customer. That's exactly you know. It's a combination, >>and we also take the time to educate our clients on how we're doing it. So it's not done in a black box, so they can learn and grow themselves where they may end up developing their own group to handle it, as opposed to outsourcing with Merkel, >>teach them how to fish. Last question. Where do you see this in 2 to 3 years. Where do you want to take it? 
>>I think the future is cloud, and with AWS being the market leader, I think AWS has a huge role to play. Um, we are very excited to be partners with AWS. I think it's a match made in heaven. AWS sells in, uh, the majority of the sales happen on their side, and our focus is marketing. I think if we can bring both worlds together, that would be a very powerful story for us. >>Good news for AWS. A little of your DNA can rub off on them, which would be good, guys. Thanks so much for coming to the Cube. Thank you. All right. Thank you for watching, everybody. This is Dave Vellante for the Cube, Day four of AWS re:Invent. We're the Cube, the global leader in high tech coverage. We'll be right back.

Published Date : Dec 2 2021


Kimberly Leyenaar, Broadcom


 

(upbeat music) >> Hello everyone, and welcome to this CUBE conversation where we're going to go deep into system performance. We're here with an expert. Kim Leyenaar is the Principal Performance Architect at Broadcom. Kim. Great to see you. Thanks so much for coming on. >> Thanks so much too. >> So you have a deep background in performance, performance assessment, benchmarking, modeling. Tell us a little bit about your background, your role. >> Thanks. So I've been a storage performance engineer and architect for about 22 years. And I'm specifically been for abroad with Broadcom for I think next month is going to be my 14 year mark. So what I do there is initially I built and I manage their international performance team, but about six years ago I moved back into architecture, and what my roles right now are is I generate performance projections for all of our next generation products. And then I also work on marketing material and I interface with a lot of the customers and debugging customer issues, and looking at how our customers are actually using our storage. >> Great. Now we have a graphic that we want to share. It talks to how storage has evolved over the past decade. So my question is what changes have you seen in storage and how has that impacted the way you approach benchmarking. In this graphic we got sort of big four items that impact performance, memory processor, IO pathways, and the storage media itself, but walk us through this data if you would. >> Sure. So what I put together is a little bit of what we've seen over the past 15 to 20 years. So I've been doing this for about 22 years and kind of going back and focusing a little bit on the storage, we looked back at hard disk, they ruled for. And nearly they had almost 50 years of ruling. And our first hard drive that came out back in the 1950s was only capable of five megabytes in capacity. and one and a half iOS per second. It had almost a full second in terms of seat time. So we've come a long way since then. But when I first came on, we were looking at Ultra 320 SCSI. And one of the biggest memories that I have of that was my office is located close to our tech support. And I could hear the first question was always, what's your termination like? And so we had some challenges with SCSI, and then we moved on into SAS and data protocols. And we continued to move on. But right now, back in the early 2000s when I came on board, the best drives really could do maybe 400 iOS per second. Maybe two 250 megabytes per second, with millisecond response times. And so when I was benchmarking way back when it was always like, well, IOPS are IOPS. We were always faster than what the drives to do. And that was just how it was. The drives were always the bottleneck in the system. And so things started changing though by the early 2000s, mid 2000s. We started seeing different technologies come out. We started seeing that virtualization and multi-tenant infrastructures becoming really popular. And then we had cloud computing that was well on the horizon. And so at this point, we're like, well, wait a minute, we really can't make processors that much faster. And so everybody got excited to include (indistinct) and the home came out but, they had two cores per processor and four cores per processor. And so we saw a little time period where actually the processing capability kind of pulled ahead of everybody else. And memory was falling behind. We had good old DVR, 2, 6, 67. 
It was new with the time, but we only had maybe one or two memory channels per processor. And then in 2007 we saw disk capacity hit one terabyte. And we started seeing a little bit of an imbalance because we were seeing these drives are getting massive, but their performance per drive was not really kind of keeping up. So now we see a revolution around 2010. And my co-worker and I at the time, we have these little USB discs, if you recall, we would put them in. They were so fast. We were joking at the time. "Hey, you know what, wonder if we could make a raid array out of these little USB disks?" They were just so fast. The idea was actually kind of crazy until we started seeing it actually happen. So in 2010 SSD started revolutionizing storage. And the first SSDs that we really worked with these plaint LS-300 and they were amazing because they were so over-provisioned that they had almost the same reader, right performance. But to go from a drive that could do maybe 400 IOS per second to a drive like 40,000 plus iOS per second, really changed our thought process about how our storage controller could actually try and keep up with the rest of the system. So we started falling behind. That was a big challenge for us. And then in 2014, NVMe came around as well. So now we've got these drives, they're 30 terabytes. They can do one and a half million iOS per second, and over 6,000 megabytes per second. But they were expensive. So people start relegating SSDs more towards tiered storage or cash. And as the prices of these drives kind of came down, they became a lot more mainstream. And then the memory channels started picking up. And they started doubling every few years. And we're looking now at DVR 5 4800. And now we're looking at cores that used to go from two to four cores per processor up to 48 with some of the latest different processes that are out there. So our ability to consume the computing and the storage resources, it's astounding, you know, it's like that whole saying, 'build it and they will come.' Because I'm always amazed, I'm like, how are we going to possibly utilize all this memory bandwidth? How are we going to utilize all these cores? But we do. And the trick to this is having just a balanced infrastructure. It's really critical. Because if you have a performance mismatch between your server and your storage, you really lose a lot of productivity and it does impact your revenue. >> So that's such a key point. Pardon, begin that slide up again with the four points. And that last point that you made Kim about balance. And so here you have these, electronic speeds with memory and IO, and then you've got the spinning disc, this mechanical disc. You mentioned that SSD kind of changed the game, but it used to be, when I looked at benchmarks, it was always the D stage bandwidth of the cash out to the spinning disc was always the bottleneck. And, you go back to the days of you it's symmetrics, right? The huge backend disk bandwidth was how they dealt with that. But, and then you had things the oxymoron of the day was high spin speed disks of a high performance disk. Compared to memories. And, so the next chart that we have is show some really amazing performance increases over the years. And so you see these bars on the left-hand side, it looks at historical performance for 4k random IOPS. And on the right-hand side, it's the storage controller performance for sequential bandwidth from 2008 to 2022. That's 22 is that yellow line. It's astounding the increases. 
I wonder if you could tell us what we're looking at here, when did SSD come in and how did that affect your thinking? (laughs) >> So I remember back in 2007, we were kind of on the precipice of SSDs. We saw it, the writing was on the wall. We had our first three gig SAS and SATA capable HPAs that had come out. And it was a shock because we were like, wow, we're going to really quickly become the bottleneck once this becomes more mainstream. And you're so right though about people work in, building these massive hard drive based back ends in order to handle kind of that tiered architecture that we were seeing that back in the early 2010s kind of when the pricing was just so sky high. And I remember looking at our SAS controllers, our very first one, and that was when I first came in at 2007. We had just launched our first SAS controller. We're so proud of ourselves. And I started going how many IOPS can this thing, even handled? We couldn't even attach enough drives to figure it out. So what we would do is we'd do these little tricks where we would do a five 12 byte read, and we would do it on a 4k boundary, so that it was actually reading sequentially from the disc, but we were handling these discrete IOPS. So we were like, oh, we can do around 35,000. Well, that's just not going to hit it anymore. Bandwidth wise we were doing great. Really our limitation and our bottleneck on bandwidth was always either the host or the backend. So, our controllers are there basically, there were three bottlenecks for our storage controllers. The first one is the bottleneck from the host to the controller. So that is typically a PCIe connection. And then there's another bottleneck on the controller to the disc. And that's really the number of ports that we have. And then the third one is the discs themselves. So in typical storage, that's what we look at. And we say, well, how do we improve this? So some of these are just kind of evolutionary, such as PCIE generations. And we're going to talk a little bit about that, but some of them are really revolutionary, and those are some of the things that we've been doing over the last five or six years to try and make sure that we are no longer the bottleneck. And we can enable these really, really fast drives. >> So can I ask a question? I'm sorry to interrupted but on these blue bars here. So these all spinning disks, I presume, out years they're not. Like when did flash come in to these blue bars? is that..you said 27 you started looking at it, but on these benchmarks, is it all spinning disc? Is it all flash? How should we interpret that? >> No, no. Initially they were actually all hard drives. And the way that we would identify, the max iOS would be by doing very small sequential reads to these hard drives. We just didn't have SSDs at that point. And then somewhere around 2010 is where we.. it was very early in that chart, we were able to start incorporating SSD technology into our benchmarking. And so what you're looking at here is really the max that our controller is capable of. So we would throw as many drives as we could and do what we needed to do in order to just make sure our controller was the bottleneck and what can we expose. >> So the drive then when SSD came in was no longer the bottleneck. So you guys had to sort of invent and rethink sort of how, what your innovation and your technology, because, I mean, these are astounding increases in performance. 
I mean, I think in the left-hand side, we've built this out pad, you got 170 X increase for the 4k random IOPS, and you've got a 20 X increase for the sequential bandwidth. How were you able to achieve that level of performance over time? >> Well, in terms of the sequential bandwidth, really those come naturally by increases in the PCIe or the SAS generation. So we just make sure we stay out of the way, and we enable that bandwidth. But the IOPS that's where it got really, really tricky. So we had to start thinking about different things. So, first of all, we started optimizing all of our pathways, all of our IO management, we increased the processing capabilities on our IO controllers. We added more on-chip memory. We started putting in IO accelerators, these hardware accelerators. We put in SAS poor kind of enhancements. We even went and improved our driver to make sure that our driver was as thin as possible. So we can make sure that we can enable all the IOPS on systems. But a big thing happening a few couple of generations ago was we started introducing something called tri capable controllers, which means that you could attach NVMe. You could attach SAS or you could attach SATA. So you could have this really amazing deployment of storage infrastructure based around your customized needs and your cost requirements by using one controller. >> Yeah. So anybody who's ever been to a trade show where they were displaying a glass case with a Winchester disc drive, for example, you see it's spinning and its actuators is moving, wow, that's so fast. Well, no. That's like a tourist slower. It's like a snail compared to the system's speed. So it's, in a way life was easy back in those days, because when you did a right to a disk, you had plenty of time to do stuff, right. And now it's changed. And so I want to talk about Gen3 versus Gen4, and how all this relates to what's new in Gen4 and the impacts of PCIe here, you have a chart here that you've shared with us that talks to that. And I wonder if you could elaborate on that, Kim. >> Sure. But first, you said something that kind of hit my funny bone there. And I remember I made a visit once about 15 or 20 years ago to IBM. And this gentleman actually had one of those old ones in his office and he referred to them as disk files. And he never until the day he retired, he'd never stopped calling them disc files. And it's kind of funny to be a part of that history. >> Yeah. DASD. They used to call it. (both laughing) >> SD, DASD. I used to get all kinds of, you know, you don't know what it was like back then, but yeah. But now nowadays we've got it quite easily enabled because back then, we had, SD DASD and all that. And then, ATA and then SCSI, well now we've got PCIe. And what's fabulous about PCIe is that it just has the generations are already planned out. It's incredible. You know, we're looking at right now, Gen3 moving to Gen4, and that's a lot about what we're going to be talking about. And that's what we're trying to test out. What is Gen4 PCIe when to bias? And it really is. It's fantastic. And PCIe came around about 18 years ago and Broadcom is, and we do participate and contribute to the PCIe SIG, which is, who develops the standards for PCIe, but the host in both our host interface in our NVMe desk and utilize the standards. So this is really, really a big deal, really critical for us. But if you take a look here, you can see that in terms of the capabilities of it, it's really is buying us a lot. 
So most of our drives right now NVMe drives tend to be by four. And a lot of people will connect them. And what that means is four lanes of NVMe and a lot of people that will connect them either at by one or by two kind of depending on what their storage infrastructure will allow. But the majority of them you could buy, or there are so, as you can see right now, we've gone from eight gig transfers per second to 16 gig of transfers per second. What that means is for a by four, we're going from one drive being able to do 4,000 to do an almost 8,000 megabytes per second. And in terms of those 4k IOPS that really evade us, they were really really tough sometimes to squeeze out of these drives, but now we're got 1 million, all we have to 2 million, it's just, it's insane. You know, just the increase in performance. And there's a lot of other standards that are going to be sitting on top of PCIe. So it's not going away anytime soon. We've got to open standards like CXL and things like that, but we also have graphics cards. You've got all of your hosts connections, they're also sitting on PCIe. So it's fantastic. It's backwards, it's orbits compatible, and it really is going to be our future. >> So this is all well and good. And I think I really believe that a lot of times in our industry, the challenges in the plumbing are underappreciated. But let's make it real for the audience because we have all these new workloads coming out, AI, heavily data oriented. So I want to get your thoughts on what types of workloads are going to benefit from Gen4 performance increases. In other words, what does it mean for application performance? You shared a chart that lists some of the key workloads, and I wonder if we could go through those. >> Yeah, yeah. I could have a large list of different workloads that are able to consume large amounts of data, whether or not it's in small or large kind of bytes of data. But as you know right now, and I said earlier, our ability to consume these compute and storage resources is amazing. So you build it and we'll use it. And the world's data we're expected to grow 61% to 175 zettabytes by the year 2025, according to IDC. So that's just a lot of data to manage. It's a lot of data to have, and it's something that's sitting around, but to be useful, you have to actually be able to access it. And that's kind of where we come in. So who is accessing it? What kind of applications? I spend a lot of time trying to understand that. And recently I attended a virtual conference SDC and what I like to do when I attend these conferences is to try to figure out what the buzz words are. What's everybody talking about? Because every year it's a little bit different, but this year was edge, edge everything. And so I kind of put edge on there first in, even you can ask anybody what's edge computing and it's going to mean a lot of different things, but basically it's all the computing outside of the cloud. That's happening typically at the edge of the network. So it tends to encompass a lot of real time processing on those instant data. So in the data is usually coming from either users or different sensors. It's that last mile. It's where we kind of put a lot of our content caching. And, I uncovered some interesting stuff when I was attending this virtual conference and they say only about 25% of all the usable data actually even reach the data center. The rest is ephemeral and it's localized, locally and in real time. 
So what it does is in the goal of edge computing is to try and reduce the bandwidth costs for these kinds of IOT devices that go over a long distance. But the reality is the growth of real-time applications that require these kinds of local processing are going to drive this technology forward over the coming years. So Dave, your toaster and your dishwasher they're, IOT edge devices probably in the next year, if they're not already. So edge is a really big one and consumes a lot of the data. >> The buzzword does your now is met the metaverse, it's almost like the movie, the matrix is going to come in real time. But the fact is it's all this data, a lot of videos, some of the ones that I would call out here, you mentioned facial recognition, real-time analytics. A lot of the edge is going to be real-time inferencing, applying AI. And these are just a massive, massive data sets that you again, you and of course your customers are enabling. >> When we first came out with our very first Gen3 product, our marketing team actually asked me, "Hey, how can we show users how they can consume this?" So I actually set up a head to environment. I decided I'm going to learn how to do this. I set up this massive environment with Hadoop, and at the time they called big data, the 3V's, I don't know if you remember these big 3Vs, the volume, velocity and variety. Well Dave, did you know, there are now 10 Vs? So besides those three, we got velocity, we got valued, we got variability, validity, vulnerability, volatility, visualization. So I'm thinking we need just to add another beat of that. >> Yeah. (both laughing) Well, that's interesting. You mentioned that, and that sort of came out of the big data world, a dupe world, which was very centralized. You're seeing the cloud is expanding, the world's getting, you know, data is by its very nature decentralized. And so you've got to have the ability to do an analysis in place. A lot of the edge analytics are going to be done in real time. Yes, sure. Some of it's going to go back in the cloud for detailed modeling, but we are the next decade Kim, ain't going to be like the last I often say. (laughing) I'll give you the last word. I mean, how do you see this sort of evolving, who's going to be adopting this stuff. Give us a sort of a timeframe for this kind of rollout in your world. >> In terms of the timeframe. I mean really nobody knows, but we feel like Gen5, that it's coming out next year. It may not be a full rollout, but we're going to start seeing Gen5 devices and Gen5 infrastructure is being built out over the next year. And then follow very, very, very quickly by Gen6. And so what we're seeing though is, we're starting to see these graphics processors, These GPU's, and I'm coming out as well, that are going to be connecting, using PCIe interfaces as well. So being able to access lots and lots and lots of data locally is going to be a really, really big deal and order because worldwide, all of our companies they're using business analytics. Data is money. And the person that actually can improve their operational efficiency, bolster those sales and increase your customer satisfaction. Those are the companies that are going on to win. And those are the companies that are going to be able to effectively store, retrieve and analyze all the data that they're collecting over the years. And that requires an abundance of data. >> Data is money and it's interesting. 
It kind of all goes back to when Steve Jobs decided to put flash inside of an iPhone and the industry exploded. Consumer economics kicked in, and now 5G, edge AI, a lot of the things you talked about, GPUs, the neural processing unit, it's all going to be coming together in this decade. Very exciting. Kim, thanks so much for sharing this data and your perspectives. I'd love to have you back when you've got some new perspectives, new benchmark data. Let's do that. Okay. >> I look forward to it. Thanks so much. >> You're very welcome. And thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Nov 11 2021


Infinidat Power Panel | CUBEconversation


 

[Music] hello and welcome to this power panel where we go deep with three storage industry vets two from infinidat in an analyst view to find out what's happening in the high-end storage business and what's new with infinidat which has recently added significant depth to its executive ranks and we're going to review the progress on infinidat's infinibox ssa a low-latency all-solid state system designed for the most intensive enterprise workloads to do that we're joined by phil bullinger the chief executive officer of it finidet ken steinhardt is the field cto at infinidat and we bring in the analyst view with eric bergener who's the vice president of research infrastructure systems platforms and technologies group at idc all three cube alums gents welcome back to the cube good to see you thanks very much dave good to be here thanks david as always a pleasure phil let me start with you as i mentioned up top you've been top grading your team we covered the herzog news beefing up your marketing and also upping your game and emea and apj go to market recently give us the business update on the company since you became ceo earlier this year yeah dave i'd be happy to you know the uh i joined the company in january and it's been a it's been a fast 11 months uh exciting exciting times at infinidad as you know really beginning last fall the company has gone through quite a renaissance a change in the executive leadership team uh i was really excited to join the company we brought on you know a new cfo new chief human resources officer new chief legal officer operations head of operations and most recently as has been you know widely reported we brought in eric to head up our marketing organization as a cmo and then last week richard bradbury in in london to head up international sales so very excited about the team we brought together it's uh it's resulted in or it's been the culmination of a lot of work this year to accelerate the growth of infinidat and that's exactly what we've done it's the company has posted quarter after quarter of significant revenue growth we've been accelerating our rate and pace of adding large new fortune 500 global 2000 accounts and the results show it definitely the one of the most exciting things i think this year has been infinidat has pretty rapidly evolved from a single product line uh company around the infinibox architecture which is what made us unique at the start and still makes us very unique as a company and we've really expanded out from there on that same common software-defined architecture to the ssa the solid state array which we're going to talk about in some in some depth today and then our backup appliance our data protection appliance as well all running the same software and what we see now in the field uh many customers are expanding quickly beyond you know the traditional infinibox business uh to the other parts of our portfolio and our sales teams in turn are expanding their selling motion from kind of an infinibox approach to a portfolio approach and it's it's really helping accelerate the growth of the company yeah that's great to hear you really got a deep bench and of course you you know a lot of people in the industry so you're tapping a lot of your your colleagues okay let's get into the market i want to bring in uh the analyst perspective eric can you give us some context when we talk about things like ultra low latency storage what's the market look like to you help us understand the profile of the customer the workloads the market 
segment if you would well you bet so i'll start off with a macro trend which is clearly there's more real-time data being captured every year in fact by 2024 24 of all of the data captured and stored will be real-time and that puts very different performance requirements on the storage infrastructure than what we've seen in years past a lot of this is driven by digital transformation we've seen new workload types come in big data analytics real-time big data analytics and obviously we've got legacy workloads that need to be handled as well one other trend i'll mention that is really pointing up this need for low latency consistent low latency is workload consolidation we're seeing a lot of enterprises look to move to fewer storage platforms consolidate more storage workloads onto fewer systems and to do that they really need low latency consistent low latency platforms to be able to achieve that and continue to meet their service level agreements great thank you for that all right ken let's bring you into the conversation steiny what are the business impacts of of latency i want you to help us understand when and why is high latency a problem what are the positive impacts of having a consistent low latency uh opportunity or option and what kind of workloads and customers need that right the world has really changed i mean when when dinosaurs like me started in this industry the only people that really knew about performance were the people in the data center and then as things moved into online computing over the years then people within your own organization would care about performance if things weren't going well and it was really the erp revolution the 1990s that sort of opened uh people's eyes to the need for performance particularly for storage performance where now it's not just your internal users but your suppliers are now seeing what your systems look like fast forward to today in a web-based internet world everyone can see with customer facing applications whether you're delivering what they want or not and to answer your question it really comes down to competitive differentiation for the users that can deliver a better user customer experience if you and i'm sure everybody can relate if you go online and try to place an order especially with the holiday season coming up if there's one particular site that is able to give you instantaneous response you're more likely to do business there than somebody where you're going to be waiting and it literally is that simple it used to be that we cared about bandwidth and we used to care about ios per second and the third attribute latency really has become the only one that really matters going forward we found that most customers tell us that these days almost anyone can meet their requirements for bandwidth and ios per second with very few outlying cases where that's not true but the ever unachievable zero latency instantaneous response that's always going to be able to give people competitive differentiation in everything that they do and whoever can provide that is going to be in a very good position to help them serve their customers better yeah eric that stat you threw out of 24 real time uh and that that sort of underscores the need but phil i wonder how how this fits if you could talk about how that fits into your tam expansion strategy i think that's the job of of every ceo is to think about the expanding the tam it seems like you know a lot of people might say it's not necessarily the largest market but it's strategic and 
maybe opens up some downstream opportunities is that how you're thinking about it or based on what ken just said you expect this to to grow over time oh we definitely expect it to grow uh dave you know the the history of infinidat has been around our infinibox product targeting the primary storage market at the at the higher end of that market you know it's we've enjoyed operating in a eight nine 10 billion dollar tan through the years and that it continues to grow and we continue to outpace market growth within that tam which is exciting what this uh what the ssa really does is it opens up a tier of workload performance that we see more and more emerging in the primary data center the infinibox classic infinibox architecture we have very very fast as we say it typically outperforms most of our all-flash uh array competitors but clearly there there are a tier of workloads that are growing in the data center that require very very tight tail latencies and and that segment is certainly growing it's where some of the most demanding workloads are on the infinibox ssa was really built to expand our participation in those segments of the market and as i mentioned up front at the same time also taking that that software architecture and moving it into the the data protection space as well which is a whole nother market space that we're opening up for the company so we really see our tam this year with more of the this portfolio approach expanding quite a bit eric how how do you see it well those real-time applications that you talked about that require that consistent ultra-low latency grow kind of in in parallel with that that time curve you know will they become a bigger part of that the the overall storage team and and the workload mix how does idc see it yeah so so they actually are going to be growing over time and a lot of that's driven by the fact of the expectations that um steinhart mentioned a little bit earlier just on the part of customers right what they expect when they interact with your i.t infrastructure so we see that absolutely growing going forward i will make a quick comment about you know when all flash arrays first hit back in 2012 um in the 10 years since they started shipping they now generate over 80 of the primary revenues out there in in the primary storage arena so clearly they've taken over an interesting aspect of what's going on here is that a lot of companies now write rfps specifically requiring an all-flash array and what's going to be interesting for infinidat is despite the fact that they could deliver better performance than many of those systems in the past they couldn't really go after the business where that rfp was written for an afa spec well now they'll certainly have the opportunity to do that in my estimation that's going to give them access to about an additional 5 billion in tam by 2025 so this is big for them as a company yeah that's a 50 increase in tamp so okay well eric you just set up my my follow-up question to you ken was going to be the tougher questions uh which we've you and i have had some healthy debates about this but i know you'll have answers so so for years you've argued that your cached architecture and magic sauce algorithms if i caught that could outperform all flash arrays we're using spinning disks so eric talked about the sort of check off item but are there other reasons for the change of heart why and why does the world need another afa doesn't this cut against your petabyte scale messaging i wonder if you could sort of add 
some color to that sure a great question and the good news is infinibox still does typically outperform all flash arrays but usually that's for average of latency performance and we're tending to get because we're a a caching architecture not a tiered architecture and we're caching to dram which is an order of magnitude faster than flash or even storage class memory technologies it's our software magic and that software defined storage approach that we've had that now effectively is extended to solid state arrays and some customers told us that you know we love your performance it's incredible but if you could let us effectively be confident that we're seeing you know some millisecond sub half millisecond performance consistently for every single io you're going to give us competitive differentiation and this is one of the reasons why we chose to call the product a solid state array as opposed to merely an all-flash array the more common ubiquitous term and it's because we're not dependent on a specific technology we're using dram we can use virtually any technology on the back end and in this case we've chosen to use flash but it's the software that is able to provide that caching to the front end dram that makes things different so that's one aspect is it's the software that really makes the difference it's been the software all along and still on this architecture still mentions going to across the multiple products it's still the software it's also that in that class of ultra high performance architecturally because it is based on the infinibox architecture we're able to deliver 100 availability which is another aspect that the market has evolved to come to expect and it's not rocket science or magic how we do it the godfather of computer science john von neumann all the way back in the 1950s theorized all the way back then that the right way to do ultra high availability and integrity in i.t systems of any type is in threes triple redundancy and in our case amazingly we're the only architecture that uses triple redundant active active components for every single mission critical component on the system and that gives a level of confidence to people from an availability perspective to go with that performance that is just unmatched in the market and then bring all of that together with a set it and forget it mentality for ease of use and simplicity of management and as phil mentioned being able to have a single architecture that can address now not only the ultra high performance but across the entire swath of as eric mentioned consolidation which is a key aspect as well driving this in addition to those real-time applications that he mentioned and even being able to take it down into our our infiniguard data protection device but all with the same common base of software common interface common user experience and unmatched availability and we've got something that we really think people are going to like and they've certainly been proving that of late well i was going to ask you you know what makes the the infinibox ssa different but i think you just laid it out but your contention is this is totally unique in the marketplace is that right ken yes indeed this is a unique architecture and i i literally as a computer scientist myself truly am genuinely surprised that no other vendor in the market has taken the wisdom of the godfather of computer science john von neumann and put it into practice except in the storage world for this particular architecture which transcends our entire 
realm all the way from the performance down to the data protection phil i mean you have a very wide observation space in this industry and a good strong historical perspective do you think the expectations for performance and this notion of ultra low latencies you know becoming more demanding is is there a parallel so first of all why is that we've talked about a little bit but is there a parallel to the way availability remember you could have escalated over the years um because it was such a problem and now it's really become table stakes and that last mile is so hard but what are your thoughts on that i i think i think absolutely dave you know the the hallmark of infinidat is this white glove concierge level customer experience that we deliver and it's it's affirmed uh year after year in unsolicited enterprise customer feedback uh above every other competitor in our space uh infinidat sets itself apart for this um and i think that's a big part of what continues to drive and fuel the growth and success of the company i just want to touch on a couple things that ken and and eric mentioned the ssa absolutely opens up our tan because we get to we get a lot more at bats now but i think a lot of the industry looks at infinidat as well those guys are are hard drive zealots right they've their architecture is all based on rotating disk that's what they believe in and it's a hybrid versus afa world out there and they were increasingly not on the right bus and that's just absolutely not true in that our our neural cache and what ken talked about what made us unique at the start i think actually only increasingly differentiates us going forward in terms of the the set it and forget it the intelligence of our architecture the ability of that dram based cache to adapt so dynamically without any knobs and and configuration changes to massive changes in workload scale and user scale and it does it with no drama in fact most of our customers the most common feedback we get is that your platform just kind of disappears into our data infrastructure we don't think about it we don't worry about it when we install an infiniti an infinidat rack our intentions are never to come back you know we're not there showing up with trays of disk under our arms trying to upgrade a mission-critical platform that's just not our model what the ssa does is it gives our customers choice it's not about infinidat saying that used to be the shiny object now this is our new shiny object please everybody now go buy that what where where we position our ssa is it's a it's a tco latency sla choice that they can make between exactly identical customer experiences so instead of an old hybrid and a new afa we've got that same software architecture set it and forget it the neural cache and customers can choose what back-end persistent store they want based on the tco and the sla that they want to deliver to a given set of applications so probably the most significant thing that i've seen happen in the last six months at infinidat is a lot of our largest customers the the fortune 15s the fortune 50s the fortune 100s who have been long-standing infinidat customers are now on almost every sort of re-tranche of or trancha purchase orders into us we're now seeing a mix we're seeing a mix of some ssa and some classic infinibox because they're mixing and matching in a given data center down a given row these applications need this sla these applications need this la and we're able to give them that choice and frankly we don't we don't 
intentionally try to steer them one direction or the other. They're smart, they do the math, they can pick and choose what experience they want, knowing that irrespective of what front door they go through into the Infinidat portfolio, they're going to get that same experience.
>>So I'm hearing it's not just an RFP check-off item, it's more than that. The market is heading in that direction, Eric's data on real time, and we're certainly seeing that: the data-driven applications, the injection of AI, and, you know, systems making decisions in real time. And I'm also hearing, Phil, that you're building on your core principles. I'm hearing the white-glove service, the media-agnostic, the set-it-and-forget-it sort of principles that you guys were founded on, and you're carrying that through to this opportunity.
>>We absolutely are, and the reason, and you asked a good question before and I want to more completely answer it, I think availability and customer experience are incredibly important today, more so than ever, because data center economics and data center efficiency are more important than ever before. As customers evaluate what workloads belong in the public cloud and what workloads they want on-prem, irrespective of those decisions they're trying to optimize their operational expenses and their capex expenses. And so one thing that Infinidat has always excelled at is consolidation, bringing multiple users, multiple workloads into the same common platform in the data center. It saves floor space and watts and, you know, storage administration resources. But to do consolidation well you've got to be incredibly reliable and incredibly predictable, without a lot of fuss and drama associated with it. And so I think the thing that has made Infinidat really strong through the years, being a very good consolidation platform, is more important now than ever before in the enterprise storage space, because it is really about data center efficiency and the administration efficiency associated with that.
>>Yeah, thank you for that, Phil. Now actually, Ken, let me come back to you. I want to ask you a question about consolidation, and you and I and Doc, our business friend, rest his soul, have had some great conversations about this over time. But as you consolidate, people are sometimes worried about the blast radius. Could you address that concern?
>>Sure. Well, Phil alluded to software, and it is the cornerstone of everything we bring to the table, and it's not just that deep learning that transcends all the intelligence Phil talked about in terms of that full, wide range of product. It's also protection of data across multiple sites and in multiple ways. We were very fortunate in that when we started to create this product, since it is a modern product, we got to start with a clean sheet of paper and basically look at everything that had been done before, and even with some of the very people who created some of the original software for replication in the market, we were able to say: if I could do it again, how would I do it today and how would it be better? So we started with local replication and snapshot technology, which is the foundation for being able to do full active-active replication across two sites today, where you can have true zero RPO, no data loss, even in the face of any kind of failure of a site, of a server, of a network, of a storage device, of a connection, as well as zero RTO, immediate consistent operation with no human intervention. And we can extend from that out to remote sites literally anywhere in the world, in multiples, where you can have additional copies of information, and any of them can be used not only for protection against natural disasters and floods and things like that, but from a cybersecurity perspective, immutable snapshots, being able to provide data that you know the bad actors can't compromise, in multiple locations. So we can protect today against virtually any kind of failure scenario across the swath of InfiniBox or InfiniBox SSA. You can even connect InfiniBoxes and InfiniBox SSAs, because they are the same architecture, exactly as Phil said. What we're seeing is people deploying mostly InfiniBox, because it addresses the wide swath from a consolidation perspective, and usually just InfiniBox SSA for those ultra-high-performance environments. But the beauty of it is it looks, feels, runs and operates as that one single, simple environment that's set it and forget it, and just let it run.
>>Okay, so you can consolidate with confidence. Let's end with the independent analyst perspective. Eric, how do you see this offering? What do you think it means for the market? Is this a new category? Is it an extension to an existing space? How do you look at that?
>>So I don't see it as a new category. I mean, it clearly falls into the current definition of AFAs. I think it's more important from the point of view of the customer base that likes this architecture, likes the availability, the functionality, the flexibility that it brings to the table, and they can leverage it with tier zero workloads, which was something that in the past they didn't have the latency consistency to do. You know, I'll just make one final comment on the software side as well. The reason software is eating the world, as Marc Andreessen put it, is basically because of the flexibility, the ease of use and the economics, and if you take a look at how this particular vendor, Infinidat, designed their product with a software-based definition, they were able to swap out underneath and create a different set of characteristics with this new platform because of the flexibility in the software design, and that's critical if you think about how software is dominating. So today, for 2021, 68% of the revenue in the external storage market, that's the size of the software-defined storage market, and that's going to almost 80% by 2024. So clearly things are moving in the direction of systems that are defined in a software-defined manner.
>>Yeah, and data is eating software, which is why you're going to need ultra-low latency. Okay, we've got to wrap it. Eric, you've just published a piece this summer called "Enterprise Storage Vendor Infinidat Expands Total Available Market Opportunities with All-Flash System Introduction." I'm sure they can get that on your website, and here's a little graphic that shows you how to get that. So guys, thanks so much for coming on theCUBE. Congratulations on the progress, and we'll be watching.
>>Thanks, Steve.
>>Thanks very much, Dave.
>>Thank you, as always a pleasure.
>>All right, thank you for watching this CUBE Conversation, everybody. This is Dave Vellante, and we'll see you next time. [Music]
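The zero-RPO, zero-RTO behavior Ken describes above comes from synchronous active-active replication across two sites, while the additional remote copies are typically asynchronous. The sketch below is a minimal illustration of that distinction; it is not Infinidat code, and the 30-second lag figure is a hypothetical assumption.

```python
# Illustrative only: a toy model of recovery point objective (RPO) under
# synchronous (active-active) versus asynchronous replication.
# Not Infinidat code; the lag value is an assumption for illustration.

def recovery_point_loss_seconds(replication_mode: str, async_lag_seconds: float = 30.0) -> float:
    """Return the worst-case window of lost data if one site fails right now."""
    if replication_mode == "synchronous":
        # Every write is acknowledged by both sites before it completes,
        # so a site failure loses no acknowledged data: RPO = 0.
        return 0.0
    if replication_mode == "asynchronous":
        # Writes are shipped to the remote copy on a delay, so the worst
        # case loss is roughly the replication lag.
        return async_lag_seconds
    raise ValueError(f"unknown replication mode: {replication_mode}")

if __name__ == "__main__":
    for mode in ("synchronous", "asynchronous"):
        print(mode, "worst-case loss:", recovery_point_loss_seconds(mode), "seconds")
```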

Published Date : Nov 9 2021


Isabelle Guis, Reza Morakabati & John Gallagher | Commvault Connections 2021


 

>>Mhm. Welcome back to Commvault Connections 2021. This is the power panel. My name is Dave Vellante, joined by Reza Morakabati, who is the CIO of Commvault; Isabelle Guis, the CMO of Commvault; and John Gallagher, who leads global enterprise infrastructure at syncreon. Folks, welcome to theCUBE. Thanks for coming on.
>>Thank you.
>>Thank you.
>>So John, we heard you this morning. You know, great job. You guys are in the industrial logistics business, so supply chains are a hot topic today. It's got to be challenging. Maybe you could talk about what you're seeing there, but specifically, how are you thinking about data management in the context of your overall IT strategy?
>>Okay, thank you. So in terms of data management, syncreon has 100 sites globally. If we were to rewind by, say, 10 years, we had a lot of data residing out at those remote sites. So over the last few years we've basically consolidated a lot of that data and also centralized it. We've brought that into our data centers that we now have, which is a very, very centralized model. That makes it a lot easier to understand where all of that data resides.
>>So in the decision pie, as it relates to data, it sounds like cost efficiency ranks pretty highly. How does that impact your data management strategy and approach? I mean, is that like the number one consideration? Is that one of many factors? How should we think about that?
>>I would say cost is one of many factors. Obviously cost is key, but you don't want to introduce unnecessary risks. So you've got to keep costs at the forefront, but that's just one of the factors. Obviously data protection is one of the factors, ensuring that data is protected and safe, and also understanding exactly where that data resides and making sure that data is encrypted. So I would say that cost is just one of the factors.
>>So Isabelle, good to see you again. I wonder if you could talk about how you're seeing your customers and what they're thinking about, how they're thinking differently about data management today. Are they changing the way they manage data given the escalation of ransomware that came with what we've called the forced march to digital over the last 18, 19 months? You've got new threats, new business dynamics. How is that affecting organizations?
>>It does, it does affect them a lot. We see a lot more. Actually, I host a lot of virtual coffee talks with our customers so they can share best practices, and a lot of CIOs now work end to end with CISOs, and they have a readiness plan, because they know the question is not if they're going to have an attack, but when, and how to recover from it is critical. So the security team is really looking at the prevention, but they know that if they can't stop it all, then they have a plan to hand off to the data team for recovery. I see a lot more thoughtfulness, because not all data is created equal: which one is in the cloud and you can recover, which one you need fast for minimum business, sorry, minimum business disruption, and you keep on-prem, and which one you cannot lose, and you plan accordingly. So we see a lot more planning, a lot more collaboration across all verticals. We also have new services that help customers before the attacks, to design and plan, and also helping them post-attack to recover, so very much end to end. And as we've seen in the keynote right now, it's all about the people, enabling them to do the business while you're de-risking the business too.
>>All right, thank you for that. So Reza, the fact that your CEO was a CIO, you must have some interesting conversations there, and you can sort of tap Sanjay's brain: how did you handle this kind of thing? That's a nice collaboration, I bet. But what advice can you give to other CIOs grappling with cyber threats, data volumes and just the ongoing pressure to do more with less? That never changes, does it?
>>It doesn't, and you're absolutely right. I obviously, as part of my job, track benchmarks about budgets and everything else, and before the pandemic they used to track at about 3% growth year over year, which is hard to do a whole lot with. What I can tell you is, for a CIO, the areas of investment are not created equal, and from my perspective the biggest areas of investment for somebody like me, in my position, should be data and protecting the data. So that means that on the budget side you have to find ways of shifting money, whether you reallocate resources, whether you reform or reorganize differently, automate, simplify, etcetera. My background is operations, so when you talk about the people, process, technology side of things, I leave the technology to the people that are really good at it and I focus on the people and process side, and for me that's about, again, efficiencies and finding ways that you can reorganize. You probably have the people that do the work that you want them to do; you just have to think about reorganizing them differently. And the last thing I'd say is prioritize, prioritize initiatives across the board, and Isabelle is like a partner in crime in these things, and we don't always say yes to her and what she wants, because we need to be transparent about where we put our money.
>>So Reza, I want to stay with you for a minute. I want to talk about data sprawl. It was interesting, John, during your session this morning I was sort of writing down some of my thoughts, because I feel like data sprawl is like social change: you can't fight it. You can maybe, you know, for a period of time control it, but data is out of control. So how do you address data sprawl in an organization, both from a management perspective? There's obviously risk. Somebody said this morning, I think it was the CIO in New Jersey, we used to keep everything forever, but that's risky. So how do you deal with that, Reza, from an organizational and management perspective?
>>Yeah, again, I'm going to have to agree with you. As I said in the morning session, it's a natural phenomenon for a company to go through. I've seen it in companies that are 150 people and I've seen it in companies that have tens of thousands of people. It's analogous to what entropy is in thermodynamics; it's the natural order of events. If you don't apply structure and organization, data is going to go haywire and everything else. The best way that I know, when the pendulum is here and everybody is doing the wrong thing, is to push the pedal on the other side, at least for a while: centralize, pick a few of your brightest people that know the data in and out, put them in a team and say, you're responsible for making sense out of these things, identify sources of truth for us and architect them differently. But start with executive-level metrics and board-level metrics and push them down.
>>So I see, and I agree with that. I think the people who have the data context are in the best position to add value, whether it's data quality or how to get the most out of that data. But the problem is, John, I'd love to pick your brain on this, especially since you're in EMEA. You've got all these different regulations and data silos, which I believe are a byproduct of how we organize. But anyway, you have a lot of considerations to deal with, whether it's GDPR or data sovereignty, etcetera. How do you approach that?
>>So one of the first approaches we took when we moved over to Commvault with our data protection was to reduce the number of products we used for data protection. We had six products through various acquisitions that we've done over the last 10 to 15 years. We've now reduced those six products down to one single product. That means all of your data is managed through a sort of single pane, which definitely gives you much better insight. And also, just going back to the costs that you mentioned in the previous question, obviously going down from six products to one product we managed to strip around $500,000 out of our costs over three years. We also moved data, like I said, into the center, and that allowed us to also concentrate the teams, so the teams became more efficient because fewer people were dealing with that data as well. But yes, you are right, around GDPR there is definitely compliance to be considered, and you just have to make sure you're up to date on all of those compliance regulations.
>>So it's interesting, Reza, you talk about, you know, Isabelle, she's got needs, but I would say, Isabelle, that you probably know, in your team, the marketing data better than anybody, but there's got to be federated governance; you've got to enforce policy in this data sprawl world. So anyway, this is sort of an aside, but Sanjay and Isabelle talked today about as-a-service growing like crazy, and given your background I wonder if you can share any insights about how and why you think customers are going to be looking towards SaaS. I mean, the whole world is becoming SaaS-ified. You had some data on that this morning from Gartner. What are your thoughts?
>>Yeah, no, absolutely, you're right. I experienced this firsthand coming from Salesforce, and as Sanjay mentioned in the keynote, by I think 2025, 85% of business will be delivered through SaaS apps, and that's very simple: look at the world today, the market dynamics, how business changes. You mentioned the supply chain as you were talking; all the line-of-business people, the business executives, have to change fast, and the fastest way to do that is SaaS, because it has speed and agility and you get the value faster. The problem being, it then becomes very complex for IT, because you have workloads in multiple clouds, on premise, multiple apps. And what Commvault stands for, and what everybody should look at, is being able to enable all this innovation, but at the same time removing the complexity for IT to protect this data and to recover it, and that's really where we're focusing our attention. That is unavoidable; it's all about business agility, but it doesn't mean that you should compromise on data management. Yeah.
>>Yeah, and where I see the role of Commvault is that notion of federated governance: you've got to have centralized policy, but you've got to programmatically automate that out to the lines of business, and I think that is kind of where the future is headed. And I think that's really kind of Commvault's strategy; I'm hearing a lot on automation, cloud-like services, and pushing that out. And so I see a new era in data coming, and you guys talked a lot about this, but Isabelle, we'll give you the last word. Put a bumper sticker on the panel for us.
>>Well, absolutely. I mean, as you said, no workload, sorry, should be left behind, and that's why, you know, you need a single architecture. I think business is changing fast and it's exciting. And as long as, you know, you've got a great IT team with a great plan to have your back as a business leader, every company should really embrace all the change and innovation. So thank you, Dave, for giving me the last word.
>>Thank you, guys. I really appreciate you coming on theCUBE. It's been a fun day. We've got more here at Commvault Connections, so keep it right there. We're going to come back right after this short break; my co-host and I are going to wrap up and summarize the day. Yeah.
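The planning exercise Isabelle describes in this panel, deciding which data can live in the cloud, which must restore fast on-prem, and which can never be lost, can be sketched as a simple classification. The sketch below is illustrative only; the tier names and thresholds are assumptions, not a Commvault feature or API.

```python
# Illustrative only: a toy ransomware-readiness classification of the kind the
# panel describes. Tier names and thresholds are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    max_tolerable_downtime_hours: float   # effectively the RTO target
    max_tolerable_data_loss_hours: float  # effectively the RPO target

def protection_tier(ds: Dataset) -> str:
    if ds.max_tolerable_data_loss_hours == 0:
        return "immutable-snapshots-plus-airgap"  # cannot lose: keep immutable copies
    if ds.max_tolerable_downtime_hours <= 4:
        return "on-prem-fast-restore"             # needed fast: restore locally
    return "cloud-archive"                        # everything else: cheaper cloud copy

if __name__ == "__main__":
    for ds in (Dataset("order-db", 1, 0), Dataset("hr-records", 4, 24), Dataset("old-logs", 72, 72)):
        print(ds.name, "->", protection_tier(ds))
```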

Published Date : Nov 1 2021


Jagjit Dhaliwal, UiPath & Jim Petrassi, Blue Cross Blue Shield, IL, TX, MT, OK, & NM | UiPath FORWARD IV


 

>>From the Bellagio Hotel in Las Vegas, it's theCUBE, covering UiPath FORWARD IV, brought to you by UiPath.
>>Welcome back to Las Vegas. theCUBE is here. We've been here for two days covering UiPath FORWARD IV. Lisa Martin here with Dave Vellante. We've talked about automation in many industries; now this segment is going to focus on automation in healthcare. We've got two guests joining us: Jim Petrassi, CTO of Blue Cross Blue Shield, and Jagjit Dhaliwal, the global CIO industry lead at UiPath. Guys, welcome to the program.
>>Thank you.
>>So let's start unpacking, from the CTO level and the CIO level, the agenda for automation. Jim, let's start with you. What does that look like?
>>For us, it's actually pretty strategic, and as we think about digital and what digital transformation means, it actually plays a pretty key role. There are a lot of processes that can be very manual within a big organization like Blue Cross and Blue Shield, and to be able to streamline that and take away kind of what I would call the mundane work, right? The, you know, going through a spreadsheet and then typing it into the screen; there are a lot of processes like that that are legacy. But what if you could take that away and actually create a better work experience for the people that work there, and focus on higher-value type things? It's really key, and it really goes down to our business folks, right? There are a lot of things we can drive with automation. We started a program in 2019 that's been quite successful. We now have 250 bots; we measure what we call annualized efficiency gains, so how much efficiency are we getting from these bots? The bots are doing this repetitive work that people would do, and what we're finding is, you know, we've got about $11 million in annualized efficiency gains through the process, and we're just getting started. But we're not stopping there; we're enabling citizen developers. So we're saying, hey, business, if you want to automate, you know, parts of your job, we're going to help you do that. We've got about 60 people that we're training, and we run bot-a-thons where they come together and they actually create bots, and it's really creating some impact and buzz in our business.
>>And from your lens, where does automation fit within the CIO's agenda? And how do you work together in unison with the CTO to help roll this out across the enterprise?
>>Yeah, no, definitely. And in fact, as part of the introduction, I can share that I'm wearing a CIO hat within UiPath, since I've just joined UiPath and I'm now helping client CIOs with their automation strategy, but I was a deputy CIO in my prior role at L.A. County, where I actually ran the automation strategy. So if we look at it from an organization perspective as complex as L.A. County, which is such a federated organization, from a CIO perspective the way we look at the strategy is that it's always driven by the business goals of the city or the county, and we typically drive into three different areas. One is how we can transform our operational processes so that we can save the tax dollars; it's all about doing more with less dollars. And then second is about how we can transform our residents' experience, because at the end of the day it is all about how we can improve the quality of life for our residents. We've got 10 million people in L.A. County, the largest populous county in the US, so it was an uphill task to serve such a diverse population's needs. And the third area is about how to transform to new business models, because as we are moving away from a government-centric approach to a resident-centric approach, you really need to come up with new digital solutions. And the CIO is in the center of all these three elements when you look at it. So it's very important for us to keep improving efficiency and at the same time keep adding the new digital solutions, and that's where automation strategy is kind of a horizontal strategy which enables all these components.
>>So what I hear from that is alignment with the business. Yeah. Right. Change management. Absolutely. That's really fundamental, and then the CIO is this agent of transformation; he or she has a horizontal purview across the organization. Now, Jim, the CTO role: is the automation at Blue Cross Blue Shield led by you, or are you there to make sure the technology plugs into your enterprise architecture? What's your role there?
>>You know, my role is really to drive what I'll call technology-enabled business change. So I actually started our automation journey at HCSC, and I did that by partnering with our business. There was actually a lot of buzz around automation and there were some small pockets of it, but none of it was enterprise scale. And we really wanted to go big on this, and working with the business sponsors, they saw value in it. And we've, you know, generated a lot of efficiency and better quality of work because of it, but I very closely had to partner with our business; we have a committee that is led by business folks that I facilitate. So I view my role as an enabler. We have to communicate; the change management piece is huge, the education, just having a common vernacular on what automation means, right, because everybody interpreted it differently, and then being able to do it at an enterprise scale is quite challenging. You know, I really enjoyed one of the keynotes, I don't know if you had a chance to see Shankar Vedantam from Hidden Brain, right? He talked a lot about the brain aspect and how do you get people to change, and that's a large part of it. There's a lot about technology, but there's really a lot about being a change agent and really working very closely with your business.
>>How does one measure it? I'm hearing a lot about time saved, hours saved. How does one measure that and quantify the dollar impact? Which, by the way, I'm on record as saying the soft dollars are way bigger. But when you're talking to the, you know, bottom-line CFO and it's all about, you know, the cash flow, whatever it is, how do you measure that?
>>I can take it. So what we do is, as we define these use cases, right, we go through an actual structured process where we gather them. We then rate them and we actually prioritize them based on those that are going to have the greatest impact. And we can tell based on, you know, what the manual effort is today. So we understand there are X number of people that do this X number of days, and we think this bot can take some of that load off of them, right? So we go in with the business case, and then the UiPath platform actually allows us to measure, well, how much is that bot running? So we can actually sit there and say, well, we wanted that thing to run 10 hours a day and it did, and it's generated this kind of efficiency, because otherwise a human would have had to do that work.
>>So the business case is kind of redeploying...
>>Human capital. It really is. It's really maximizing human capital, because the bots do repetitive stuff really well; they don't do higher-level thinking. And we don't view it as replacing people, we view it as augmenting and actually making them more efficient and more effective. Now, how do you get the dollars out of that? Well, a couple of ways, right. One of the things we've done is we create and measure the efficiency with our business users, and finance, by the way, is one of our bigger ones. The CFO is one of the sponsors of the program and can decide how to reinvest it. In a lot of cases it is actually cost avoidance as we grow, literally being able to grow without adding staff. I mean, that's very measurable. In some cases it is actually taking, you know, cost out in certain cases, but a lot of times that's just through attrition, right? You don't backfill positions, you let it happen naturally. And then there are just things that happen to your business that you have to respond to. To give you a great example, the state of Texas passed what's the equivalent of the No Surprises Act, but they did it there before the federal government did it. It requires a lot of processes to be put in place, because now you have providers and payers having to deal with disputes, right? It actually generates a boatload of work. We thought there might be, you know, 5,000 of these in the first year, whereas there were 21,000 in the first year, and so far this year we're doubling that amount. We were able to use automation to respond to that without having to add a bunch of staff. If we had to add staff for that, it would have literally been, you know, maybe hundreds of people, right? But now, you know, you can clearly put a value on it, and it's millions of dollars a year that we would have otherwise had to expend.
>>The reason I'm harping on this, Lisa, is because I've been through a lot of cycles, as you know, and after the dot-com boom the cost avoidance meant not writing the check to the software company, right? That's when Nick Carr wrote that piece, "IT Doesn't Matter." And then, you know, post the financial crisis, we've entered a decade-plus of awareness of the impact of technology. And I think this cycle is changing, and I wonder if you have an opinion here, where I think organizations are going to look at technology completely differently than they did, like in the early 2000s, when it was just easy to cut.
>>I think the other point I will add to it, and I agree with Jim: we typically look at ROI, but it doesn't always have to be about the cost, right? If you look at the outcomes or the value, there are other measures also, right? If you look at how automation was able to help during the COVID pandemic, it was never about costs at that time, it was about human lives. So you may not always be able to quantify what you look at. How are we maximizing the value, or what kind of situations are we in where we may not even have the human power to do that work and we are running against time? It could be the compliance needs. I'll give the example of our COVID use case, which was a pretty big success within L.A. County: we deployed bots for the COVID contact tracing program. We were actually interviewing all the people who were testing positive so that we could keep track of them and then bring that data back into our EHR, so that our epidemiologists could look at the trends and see how we were doing as a county compared to other counties and nationally. At the peak we were interviewing about 5,000 people a day, and we had to process that data manually into our EHR, and we deployed 15 members to do that, and they were doing about 600 interviews a day. So every day we had a backlog of 2,500 interviews. So it is not about a cost saving or a dollar value here, because nobody planned for these unplanned events, and we didn't have the time and money to find more data entry operators, and bots were able to actually clear up all the backlog. So the value which we were able to bring is way beyond the cost element.
>>I believe that 100%, and I've been fighting this battle for a long time. It's easier to fight now because we're in this economic cycle, even despite the pandemic, but I think it can be quantified. I honestly believe it can be tied to the income statement, or in the case of the public sector it could be tied to the budget and the mission, how that budget supports the mission of the organization. But I really believe it, and I've always said that those soft factors dwarf the cost savings. But sometimes, you know, the CFO doesn't listen, because he or she has to cut. I think automation could change that.
>>For the public sector, we look at how we can do more with it. It's because we don't look at the bottom line; it's about the tax dollars. We have limited dollars, but how can we maximize the value which we are giving to residents? It is not about a profit for us. We look at it through a different lens than the commercial side.
>>It's similar for us. As a health care payer, because we're a mutual, right, our members, and we have 17 million of them, are really the folks that own the company, and we're very purpose-driven. Our purpose is to do everything in our power to stand by members in sickness and in health. So how do you get the highest quality, most cost-effective health care for them? If automation allows you to be more effective and actually keep that cost down, that means you can cover more people and provide higher quality care to our members. So that's really the driver; we're mission-driven.
>>I was going to ask you, as a member, as one of your 17 million members, what are some of the ways in which automation is benefiting me?
>>Um, you know, a number of different ways. First off, it lowers our administrative costs, right? So that means we can actually lower our rates as we go out and work with folks. That's probably the bottom-line impact, but we're also automating processes to make it easier for the member. The example I used earlier was the equivalent of No Surprises, right? How do we take the member out of the middle of this dispute between, you know, out-of-network providers and the payer and just make it go away? Right, and we take care of it. But that creates a potentially administrative burden on our side; we want to keep their costs down, and we do it efficiently using automation. So there are a number of use cases that we've done across, you know, different parts of our business. We automate a lot of our customer service, right? When you call, there are bots in the background that are helping that agent do their job, and what that means is you're on hold, you're on the phone, a lot shorter of a period of time, and that agent can be more concise and more accurate in answering your question.
>>So your employee experience is dramatically improved, as is the member experience?
>>Yes, they go hand in hand. They do go hand in hand; unhappy members mean unhappy employees, 100%.
>>You mentioned scale before; you said you can't scale in these departmental pockets. Talk about scale a little bit. I'm curious as to how important cloud is to scale. Does it matter? Can you scale without cloud? What are the other dimensions of scale?
>>Well, you know, especially with my CTO hat, we're pushing very heavily to cloud. We view ourselves as cloud first; we want to do things in the cloud versus our own data centers, partially because of the scale that it gives us. But because we're healthcare, we have to do it very securely. So we are very meticulous about guarding our data, how we encrypt information, not only in our data center but in the cloud, and controlling the keys and having all the controls in place. You know, the CISO and I are probably the best friends right now in the company, because we have to do it together, and you have to take that security mindset up front, right? Cloud first, put security first with it. So we're moving what we can to the cloud because we think it's just going to give us better scale as we grow and better economics overall.
>>Any thoughts on that?
>>I have similar thoughts, but if we look at it from L.A. County, it's because of the sheer volume itself, because of the data which we are talking about. We had 40 departments within the county, and each department serves a different business purpose for the resident, be it voting, or justice, or social services and all, and the amount of data which we are generating for 10 million residents, and the amount of duplication which comes out of it because it's a very government-centric model: you have different systems and they may not be talking to each other. There is the amount of data and identity duplication which we are creating, and as we are enabling the interoperability between these functions to give a seamless experience, keeping security in mind, so I fully agree on that, because at the end of the day we have to ensure that customer guarantee. But it's the sheer volume: as and when we are adding these data sets, the patients' data as well as the residents' data, and now we have started adding machine data, because we have deployed so many IoT solutions, so the data which is coming from those machines, the logs and all, is exponential. That's where the scale comes into the picture, and how we can ensure that we are future ready for the scale which we need, and that's where cloud capability definitely helps a lot.
>>What do you mean by future ready?
>>So if you look at it from a future smart city or smart community perspective, imagine when machines are everywhere, machines and IoT solutions are deployed, be it even healthcare, your bed information, your patient information, everything is interconnected, and the amount of data which is getting generated in that: your automobiles are going to start talking to entertainment, or we have to potentially track a single resident, the same person going to justice, or maybe the same person having mental health issues, the same person looking for social services. How are we going to connect those dots, and what all systems are they touching? All those interconnections need to happen. So that exponential increase of data is the future readiness which I'm talking about. Are we future ready from a technology perspective? Are we future ready from the ecosystem perspective, and how are we going to manage those situations? So those are the things which we
>>look at, and it's a multiplier too, right? We all have this influx of information and you need to figure out what to do with it. This is where artificial intelligence and machine learning are so important. But you also have interoperability standards that are coming. So now we have this massive data that each of our organizations has, but now you have interoperability, which is a good thing for the member, saying now I need to be able to share that data.
>>Yeah, I wanted to ask you about that, because a lot of the changes in health care were around meaningful use. You have to show that to get paid, but the standards weren't mature, right? And so now that's changing. What role does automation play in facilitating those standards?
>>So, you know, we're big supporters of the FHIR standard that's out there, in order to be able to support the standards and create APIs and pull together the information. What will happen sometimes in the background is there are actually artificial intelligence and machine learning models that create algorithms, right? The output of that, though, often has to be acted on. Now a person can do something with that information, or a bot can, right? So when you take the output of artificial intelligence, and now you have a robotic process that can use it to pull together the information and assimilate it in a way that makes it higher quality, now it's available. It's kind of in the background; you don't see it, but it's there helping.
>>What are some of the things that you see? I know we're out of time, but I just have a couple more questions. Here we are at UiPath FORWARD IV, we're in person, this is a bold company that's growing very quickly. Some of the announcements that were made, what are some of your reactions to that? And how do you see it helping move Blue Cross Blue Shield forward even faster?
>>Well, you know, a lot of the announcements in terms of some of the features that they've added around their robotics processing are great, right? The fact that they're in the cloud, and some of the capabilities, and better ability to support that. The process mining is key. In order for bots to be effective, you have to understand your process, and you just don't want to necessarily automate the bad practices, right? So you want to take a look at those processes to figure out how you can automate things smartly, and some of their capabilities around that are very interesting. We're going to explore that quite a bit, but I think their ambition here is beyond robotics. It's actually creating, you know, applications that are using bots in the background, which is very intriguing and has a lot of potential to drive even more digital transformation. This can really affect all of our workers and allow us to take digital solutions out to the market a lot faster.
>>And Jagjit, I was going to ask you, you've been here four weeks at UiPath, you've gotten to meet a lot of your colleagues, which is great. But what about this company attracted you to leave your former role and come over to the technology vendor side?
>>Well, I think I was able to achieve a similar role within L.A. County, able to establish the automation practice and achieve the maturity, able to stand things up, and I feel that this is the same practitioner experience which I can actually take back to the other client CIOs. Because one thing which I really like about UiPath's thesis is that RPA is just a small component of it. I really want to change that mindset: we have to start looking at UiPath as an end-to-end, full automation enterprise solution, and it is not only the business automation, it's the IT automation, and it's a combination of the two, whether we are developing new industry solutions with our partners to help the different industry segments, and we are actually helping the CIO in the center of it, because the CIO is the one who is driving the automation, enabling the business automation and actually managing the automation COE and the governance. So the CIO is front and center of it, and my role is to ensure that I actually help those CIOs to be successful and get that maturity, and UiPath as a platform is giving that length and breadth, and that's what is really fascinating to me. And I'm really looking forward to how that spectrum is changing, that we are getting mature in the process mining area and how we are expanding our horizons to look at the whole automation suite, not just the RPA product. That's something which I'm really looking forward to, seeing how we're going to continue expanding into other Magic Quadrants, and we're actually going to give that seamless experience so the client doesn't have to worry about, okay, for this I have to pick this, and further, I have to pick something else.
>>That seamless experience is absolutely table stakes these days. Guys, we're out of time, but thank you so much for joining Dave and me, talking about automation in health care, your recommendations for best practices, how to go about doing that, and the change management piece. That's a critical piece. We appreciate your time.
>>Thanks for having us.
>>Thank you.
>>Our pleasure. For Dave Vellante, I'm Lisa Martin, live in Las Vegas. theCUBE's coverage of UiPath FORWARD IV continues next.
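The measurement approach Jim outlines in this interview, rating candidate use cases by the manual effort a bot would absorb and tracking the resulting "annualized efficiency gains", can be illustrated with a small calculation. The sketch below uses hypothetical figures and a simplified formula; it is not HCSC's or UiPath's actual model.

```python
# Illustrative only: one way to estimate an "annualized efficiency gain" per
# automation candidate and rank the backlog by it. All names and figures are
# hypothetical assumptions for illustration.

def annualized_efficiency_gain(manual_hours_per_run: float,
                               runs_per_year: int,
                               loaded_hourly_rate: float) -> float:
    """Dollar value of the manual effort a bot would absorb in a year."""
    return manual_hours_per_run * runs_per_year * loaded_hourly_rate

# (name, manual hours per run, runs per year, loaded hourly rate in dollars)
candidates = [
    ("provider dispute intake", 0.5, 40_000, 45.0),
    ("claims spreadsheet rekeying", 0.25, 120_000, 45.0),
    ("monthly compliance report", 8.0, 12, 60.0),
]

ranked = sorted(
    ((name, annualized_efficiency_gain(h, n, r)) for name, h, n, r in candidates),
    key=lambda item: item[1],
    reverse=True,
)

for name, gain in ranked:
    print(f"{name}: ${gain:,.0f} / year")
```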

Published Date : Oct 7 2021


Purna Doddapaneni, Bain & Company | UiPath FORWARD IV


 

>>from the bellagio hotel >>in Las Vegas, it's the cube covering Ui Path forward. Four brought to you >>by Ui Path. Welcome back from the bellagio in Las Vegas. The Cubans live at Ui Path forward for I'm lisa martin here with Dave Volonte. We're gonna be talking about roadblocks to automation and how to navigate around them, joining us next as Pernando Panini expert associate partner at bain and company per night. Welcome to the program. >>Thanks lisa. Happy to be here. >>Talk to us about some of the use cases that bain is working on with you I Path and then we'll dig into some of those roadblocks that you guys have uncovered. >>Yes. Uh I started a few months ago where we're working with Brandon who's the product lead on the Ui part side. We wanted to understand what's the state of citizen development and what are the blockers and how we should Both from the product side. But also on the automation journey side we need to dig deeper and understand where each of the clients and the employees are going through the journey together >>and if you look at it from the citizen developer perspective, what are some of those roadblocks? >>There are a few. So when like if you before we go to the roadblocks there are three main concerns or I would say critical groups that are involved in being successful with automation. The organization or bu leaders, the I. T. And employees. So each of the groups have different perceptions on like misconceptions or perceptions on benefits of automation and how to go up go about it. The blockers that we have seen where like a three sets of blockers. The first is cognitive where employees are unaware of automation on the benefits of automation and the second one is more organizational where organization leaders and how they feel about automation or how the how they think about employees when we introduce automation to them. One part of that is there is a misconception without nation leaders that employees are fearful of job loss when you introduce automation. What we have seen in our research is it's completely the opposite of employees are eager to adopt automation have given an opportunity, they are willing to upscale themselves and they are willing to save the time so that they can spend that on critical value added activities for um for their customers in the process. And a third blocker that we have seen is more on the product side where the some of the employees that we talked to as much as progress has been made by RPF vendors and local local vendors. It's still these tools are not intuitive user friendly for business users. They still feel they need to go through some training programs and have a better user friendly interface is >>what's the entry point she would organization first time I ever heard of Arpaio Years and years and years ago was at a CFO conference. Okay so that's cool. It seems like it forward for there's a lot more C. I. O presence here and that. Is that relatively new or did I just miss it before? >>It is relatively new. So like when we looked at like in the past few years the empty point has been someone in finance or I. T. Has heard about R. P. A. The benefits of head. They went and bought a handful of licenses and then they went and implemented it but it's just a handful of processes. It's not organizational wide. It has been mostly on a smaller sub scale of processes. 
And projects now that like organizations are realizing employees are asking and we are like slowly growing up with automation ceo es it's now it's intersecting with the C XL level of if it has to intersect with your or if you want to reinvent your business through automation, it has to come from the sea X level and that's where we're seeing more and more. See IOS are being involved in decisions on automation journeys, the technologies they have to buy and adopt for the business processes. >>So I. T. Can be an enabler of course. Also sometimes it can be a blocker. Um and you know, certainly from security standpoint governance etcetera. And so one of the things that we heard today in the keynotes was you don't want to automate the C I. O. He or she owns this application portfolio and everybody wants to do new projects because that's the fun stuff we heard from one CFO. Yeah. You add up all the NPV from the new projects. It's bigger than the valuation of the company. Right. But the C i O is stuck having to manage the infrastructure and all the processes around the existing application portfolio. One of things I heard today was don't automate an application or a process that you're trying to retire because we never get rid of stuff in it. So I wonder should automation like an enterprise wide automation? Should there be kind of an application rationalization exercise or a business process rationalization coincident with that >>initiative? Absolutely. I think that was one of the blockers that we have seen. Like some of the misconceptions and some of the blockers when I looked at it for them, they consider like you're bringing all these tools you're asking business users to like who haven't had haven't been trained in technology or programming, You're asking them to build these automation ins So one they have to manage with the all the applications and the tools for all that happens. And to manage these automation is after business users have either left the company or moved on. So it is essential for them to think through and provide a streamline tools it on on two aspects. one it needs to be as as you started off, it needs to be an enabler to provide them the specific tools that they can, they have already blessed. They've curated it which are ready for business consumption. A second part I can also do is providing collaboration platforms so that business users can learn from each other and from it so that they can one are developing the right processes with the right methodology that is governed by I. T. And no security or data governance issues. Come through. >>One of the things that you mentioned in terms of the three roadblocks ceo uncovered was that you were surprised that the results of the research showed that in fact employees are really wanting to adopt automation. In fact I think the stat is um 86% of employees want automation but only 30% of leaders are giving them the opportunity to use that. That's a big gap. Why do you think that is >>so a few things. Right. I mean as we talked about the three constituents that you have right one is automation leaders. If you consider from them. Their view is their employees are not capable of adopting or building on the automation is using these tools and they need technical skills. But the all the automation vendors have made progress and if you look at the tools today are much more user friendly and business users are willing to adopt. The second part as we talked about is like the fear of job loss from the employee standpoint. 
Whereas employees are looking at it as an opportunity for them to up skill but also eliminate the pain points that they have today in the day to day activities using the automation tools. And for them it is like this is helping them spend the time with the customers where it matters on critical value added activities versus going through reparative process of the journey. And the third part we talked about earlier with I. T. I. T. Has this notion that they need to build and develop anything technical. Business users will not be able to build or manage and they're also worried about the governance, the security and the third part which you brought up earlier is that tool sprawl, It's like we need to manage like this volume of tools that are coming in which is only adding to their plate of already busy busy workforce. >>I have one of those. It depends questions and it's a good consultant I'm sure you say well it depends but are there patterns best practice or even more than best pressures? Are there sort of play books if you will? And patterns? I'm sure it's situational. But are you seeing patterns emerge, you can say okay this sort of category should approach it this way. Here's another one in a different, maybe it's a department bottoms up top down, can you help us sort of squint through that? >>Yeah. So in terms of approaches like at least up till now the prevalent thing that is happening is like C. O. Es went and buy some licenses they talk about like opportunities that they have. So it's more of a top down driven uh like ceo driven agenda. What we're seeing now especially with citizen automation or democratisation of automation is there's a new approach of including employees into the journey and bringing the bottoms up approach. So there's a happy path where you marry up the top down approach with bottoms up and one you will find opportunities which are organizational wide with the bu leaders and they are ones which are on the long tail of opportunities which employees feel the pain but I. T. Or C. O. He doesn't have the time to come and implement or automate these activities. Um considering like one part we have seen which is increasingly helpful for people who have done this properly is including employees. And one thing we talked yesterday is invest in employees. They consider automation as investment in employees rather than something they're doing to employees. So it's kind of collaborating with employees to make progress which seems to be helping evangelize and also benefit with automation. How >>Have the events of the last 18 months impacted this as well, we've seen so much acceleration and the mandate for automation. What are some of the things that you've seen? >>Sure. So for us like even before the pandemic we've seen in our research so like more than close to 50% of the organizations that they started the automation journey were unable to achieve the savings or targets that they set themselves for whatever the success factors are. Which which hard. A few reasons one they didn't have the organizational support, not they were taking the end to end journey or a customer journey to figure out like what are these big opportunities that they can go through and they haven't included employees and to figure out what are the major pain points to go through the journey. One thing it was clear was with covid, no one expected this kind of disruption in a pan and a pandemic. 
There are a lot of offshore centres, or pretty much whole geographies, that got disconnected from the work that's being done. You still need to support your customers, there is still higher demand, so what do you do? It's not like you can scale up your employees in a pandemic. That's where we have seen an increasing push towards automation and technology, to see what can help support and scale in a pandemic environment and also help your customers in the journey. >>So in your opinion, has automation become a mandate as a result of the pandemic? >>I would say yeah. I would consider it's now become a business competitive differentiator: one, I needed to keep my lights on and have resiliency, but also the companies that have done really well saw the advantage, they weathered the pandemic better with their customers, and now they use that as a platform to create competitive differentiation against their peers and push things forward. >>One of the things we heard today in the keynotes is you've got to think about, my words, the life cycle. You don't just put in the bot and then leave it alone, you really have to think through that. And that seems to me to be where you would help customers think through how to get the most return out of their investment. UiPath as a product company, I think it's great. So talk about the value layer that you guys bring. >>So for us, we are mostly coming from the business side of the house to understand what are the key drivers that you need to work on. Even before we talk about technology, we say, let's understand from the customer standpoint what your customer journey is end to end, look through that journey lens, take the process end to end, look at redesigning the process to make it more optimal and streamlined, and see where technology fits in. That's when we talk about whether it is RPA, or the UiPath platform that can support it, and go through that journey, versus taking the tool itself as the solution and trying to find every nail that you can hit, which usually is not sustainable, to your point. We need to think through the whole life cycle and make sure this is going to last. Or, if you are retiring something, like in the CIO panel there was a discussion that we need to think through when we are going to retire it, and make sure we are on that journey, versus building all these automations or bringing in all these tools and leaving them alone for I.T. to manage long term. >>Again on the last 18 months, a question about the reactions it catalyzed, thinking about those three roadblocks, the cognitive roadblocks and the organizational roadblocks. What I'm interested in is the conversations you've seen, or trends, that help those organizations better understand how to collaborate with each other, so that what they're not doing is putting in RPA point tools, but really starting to build automation into their digital transformation plans. Yeah. >>I mean, again, I'll go back to the three concerns that we talked about earlier, right?
I.T. can only go so far and automate only so much, because they haven't seen the business lens of how the processes work and what they have to do end to end. That's where you need to involve the business leaders, who can give you that view from the business side, and the employees, who see the work day to day and know where they can eliminate the pain points. So the organizations that are successful are creating a collaborative environment between the three groups to push things forward. >>You have to have that collaboration, that's critical. Otherwise, that's probably one of the roadblockers as well. >>Yeah, absolutely. >>Where does automation fit? You're obviously heavily into automation, but let's think about the Bain portfolio, the boardroom discussions. Where does automation fit? There's security, there's how do we embed AI into our business, how do we SaaS-ify our business, how do we transform digitally. Where does automation fit in that whole discourse? >>So I think automation is at the heart of digital transformation. The gap we have seen is in not taking the business angle and actually thinking through the process end to end, versus picking up a tool and trying to go solve a problem, or find a problem to solve. And that's where, in our discussions with boardrooms, it's more of, let's think through how you want to reimagine your company or how you want to be more competitive looking into the future, and walk back from that standpoint. The way we describe it is future back: here is where you are today, now let's work toward what your end state is and where technology broadly, digital tools, and automation fit in the process. >>How do you see what UiPath is talking about at this conference, the announcements from yesterday? There's a lot of people here, which is fantastic. How do you see what they're announcing, the vision they set out a couple years ago that they're now delivering on? How is that a facilitator of organizations removing those roadblocks? Because as you said, automation is a huge competitive differentiator these days, and if we've learned nothing in the last 19 months, you've got to be careful, because there's always a competitor in the rear view mirror who might be smaller, faster, more agile, ready to take your place. >>Yeah, so a few things we've seen in the product roadmap. They are providing the collaboration platform and tools where the I.T. and business owners can work together. Automation Hub, which they talked about at length yesterday, is the platform where business users can provide their ideas. They provide process mining tools which can capture the process, and the business users who understand the process are the ones putting an opportunity on the roadmap. So you now have a platform where all the ideas are catalogued, and once you implement them they are tracked on Automation Hub, so it provides a platform for everyone to collaborate together. The second one, which Brandon talked about yesterday, is the tool itself, Studio X. When we're talking about citizen developers, employees trying to use it, making it more user friendly, that's where Studio X provides a user interface that is easy and intuitive for business users to build basic automations and take on that long tail of opportunities we talked about.
So all these tools are coming together as one platform play, which UiPath has been talking about all through the conference, and that is critical for everyone to collaborate and make progress, versus thinking it's an easy job to implement the automation opportunities. >>That collaboration is business critical these days. Right. Thank you for joining David and me on the program, talking about some of the roadblocks that you've uncovered, but also some of the ways that organizations in any industry can navigate around them and really empower those employees who want automation in their jobs. We appreciate your insights. >>Happy to be here. Thanks for having us. >>You're welcome. For Dave Volonte, I'm Lisa Martin, live in Las Vegas at UiPath Forward IV. We'll be right back with our next guest.

Published Date : Oct 6 2021

SUMMARY :

Lisa Martin and Dave Volonte talk with Bain & Company about roadblocks to automation and how to navigate around them, live at UiPath Forward IV in Las Vegas. The conversation covers the three main blockers Bain's research uncovered, the gap between the 86% of employees who want automation and the 30% of leaders enabling it, the need for CxO sponsorship and collaboration between automation leaders, I.T. and employees, how the pandemic made automation a competitive differentiator, and how UiPath platform announcements such as Automation Hub and Studio X support that collaboration.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VolontePERSON

0.99+

DavidPERSON

0.99+

lisaPERSON

0.99+

Studio X.TITLE

0.99+

Las VegasLOCATION

0.99+

Studio XTITLE

0.99+

IOSTITLE

0.99+

yesterdayDATE

0.99+

lisa martinPERSON

0.99+

eachQUANTITY

0.99+

second partQUANTITY

0.99+

las VegasLOCATION

0.99+

todayDATE

0.99+

BrandonPERSON

0.99+

three groupsQUANTITY

0.99+

FourQUANTITY

0.99+

two aspectsQUANTITY

0.99+

86%QUANTITY

0.99+

C. O. EsPERSON

0.99+

third partQUANTITY

0.99+

bainORGANIZATION

0.99+

oneQUANTITY

0.99+

three concernsQUANTITY

0.99+

UiPathTITLE

0.98+

30%QUANTITY

0.98+

firstQUANTITY

0.98+

C. O.PERSON

0.98+

BothQUANTITY

0.98+

Ui PathLOCATION

0.98+

One partQUANTITY

0.98+

Ui PathORGANIZATION

0.98+

pandemicEVENT

0.98+

three constituentsQUANTITY

0.98+

three setsQUANTITY

0.97+

C i OTITLE

0.97+

first timeQUANTITY

0.97+

OneQUANTITY

0.97+

Pernando PaniniPERSON

0.96+

I PathORGANIZATION

0.96+

UiLOCATION

0.95+

one partQUANTITY

0.95+

three roadblocksQUANTITY

0.95+

third blockerQUANTITY

0.93+

VolontePERSON

0.93+

second oneQUANTITY

0.91+

last 18 monthsDATE

0.91+

three main concernsQUANTITY

0.91+

CubansPERSON

0.9+

one platformQUANTITY

0.9+

years agoDATE

0.88+

couple years agoDATE

0.87+

I. T. I. T.PERSON

0.86+

few months agoDATE

0.84+

past few yearsDATE

0.79+

I. T.PERSON

0.79+

last 19 monthsDATE

0.79+

Bain & CompanyORGANIZATION

0.78+

bellagioORGANIZATION

0.76+

C I.TITLE

0.75+

One thingQUANTITY

0.72+

ArpaioORGANIZATION

0.71+

blockersQUANTITY

0.71+

more than close to 50%QUANTITY

0.69+

YearsDATE

0.68+

yearsDATE

0.64+

ipodORGANIZATION

0.63+

Protect Against Ransomware & Accelerate Your Business with HPE's Cloud Operational Experience


 

>>Okay, we're back, you're watching theCUBE's continuous coverage of HPE's Green Lake announcements. One of the things we said on theCUBE when we first saw Green Lake was, let's watch the pace at which HPE delivers new services, what's that cadence like? Because that's a real signal as to the extent that the company is leaning into the cloud, and today we're covering that continued expansion. We're here with Tom Black, who is the general manager of HPE storage, and Omar Assad, who's the storage platform lead for cloud data services at Hewlett Packard Enterprise. Gentlemen, welcome. It's good to see you. >>Thanks Dave. Thanks for having us today. Good to see you. >>Happy to be here, Dave. >>So obviously a lot has changed globally, but when you think of things like cyber threats, ransomware, the acceleration of business transformation, these are new things, a lot of it is unknown and a lot of it was forced upon us. Tom, what are you guys doing to address these trends? How are you helping customers? >>Sure, thanks for the question. Think back to what we launched in early May, the initial cloud transformation of what was our traditional storage business. We really focused on one key theme, a very customer-driven theme: the cloud operational model has won, and customers want that operational model whether they're operating their workload in the cloud, in their own facility or in a colo, kind of the same thing. So that was our true north and that's what we launched out of the gate in May. But we did allude in May to the fact that we would have an ongoing series of new services coming out on the HPE Green Lake edge-to-cloud platform. And we're really excited today to be talking about what that expansion looks like. We will continue through this month and through the quarters ahead to add more and more services in that vein of bringing the true cloud services model to our customers. So we're really excited today to unveil that we've entered the data protection as a service market with HPE Green Lake. This is our expansion into a very top-of-mind set of problems and solutions, or headaches and aspirins to quote an old friend, that CIOs face as they think about how to manage data through its life cycle in their organization. >>When I talked to CIOs during the pandemic, not that we're out of it yet, but really in the throes of it, and asked them about things like business resilience, they said, we really had to rethink our disaster recovery strategy. It was sort of geared toward a fire or a hurricane, and we just didn't imagine this type of disaster, if you will. So we really needed to rethink it. So when I see your disaster recovery as a service and capabilities like that, is that the Zerto acquisition? >>Yes. Thanks, Dave. We're super happy to have the Zerto team now as part of our family. Just a brilliant team, a well respected technology, kind of a blue chip with the customers and partners that really appreciate what Zerto has to offer. As we looked at the data protection as a service market, one of the hardest problems is really in that disaster recovery space, and I think Omar's going to talk a little bit more about that today. Zerto really does bring the industry-leading capability, what's called continuous data protection, into our Green Lake platform.
We've just recently closed the acquisition and we're working on the integration plan as we speak, now that we can actually talk to each other post close. But you'll continue to see some really exciting milestones each and every quarter as we march forward with Zerto now as part of the family. >>So we all talk about how data is so important. We certainly learned during the pandemic that if you weren't a digital business, you were out of business, and a digital business is a data business. So things like backup and data protection as a service become increasingly critical. I know you have some capabilities there, maybe you could share with us. >>Absolutely. One of the things we noticed was, as we took the storage business through its transformation, starting with the launch of the Alletra 9000 and Alletra 6000 platforms, we really brought the cloud operational model to our customers. And the feedback that came through loud and clear is that while the storage portfolio, file, block and object, is being transformed into a cloud operational experience, the surrounding capabilities, data protection, disaster recovery, coming back into business after a disaster, snapshot management, still have to rely on partner technologies. It's not bad that we have great partners in the data protection world, but what we're really focused on is that cloud operational model and experience, end to end, as Tom mentioned, through the data management life cycle. So we talked to a lot of our customers, we talked to a bunch of partners, and what kept coming back was: yes, there are many data protection and backup offerings on the market, but that true as-a-service experience that is completely integrated with the services experience of the storage the customer is using is not there. So we looked first at the largest ecosystem, which is the VMware ecosystem, and we're launching data protection as a service, or backup as a service, for our VMware customers, offered from the Data Services Cloud Console as a SaaS portal. 100% SaaS, nothing to install. No media servers, no application servers, no catalog servers, no backup targets, no patching, no expansion, no capacity planning. None of that is needed. All that's needed is to sign on, click, give your vCenter credentials and off you go. That is it, three clicks and you're in business. In our analysis we offer 5x faster recovery than the competitive offerings out there and 3.5x better dedupe ratios. But for our customers it is as simple as this: a VM is protected for this many dollars per gig per month. That's it. No backup target, no media server, no catalogs, nothing to manage, totally turnkey off of the portal. So that's the cadence of services we promised, and this is one of the first ones when it comes to data management that is coming out into the open.
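To make the "three clicks" flow Omar describes a little more concrete, here is a minimal sketch of what enrolling a vCenter and applying a per-VM protection policy could look like when driven through a REST-style SaaS portal. The endpoint paths, payload fields and the portal URL are hypothetical, invented purely for illustration; this is not HPE's published API.

```python
# Hypothetical sketch of a SaaS backup-as-a-service enrollment flow.
# Endpoints, payloads and policy fields are illustrative only.
import requests

PORTAL = "https://console.example.com/api/v1"   # hypothetical SaaS portal
TOKEN = {"Authorization": "Bearer <session-token>"}

# Step 1: register the vCenter with its credentials
vcenter = requests.post(f"{PORTAL}/vcenters", headers=TOKEN, json={
    "address": "vcenter01.corp.local",
    "username": "backup-svc@vsphere.local",
    "password": "********",
}).json()

# Step 2: define a protection policy (schedule plus retention)
policy = requests.post(f"{PORTAL}/protection-policies", headers=TOKEN, json={
    "name": "gold-daily",
    "schedule": "0 2 * * *",      # nightly at 02:00
    "retention_days": 30,
}).json()

# Step 3: apply the policy to every VM discovered on that vCenter
requests.post(f"{PORTAL}/protection-jobs", headers=TOKEN, json={
    "vcenter_id": vcenter["id"],
    "policy_id": policy["id"],
    "scope": "all-vms",
})
```

The point of the sketch is the shape of the experience: credentials, a policy, an assignment, and nothing about media servers or backup targets ever surfaces to the user.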
>>So you may have just answered this question, but I want to pose it and have you summarize it, because Tom was talking earlier about the customer mandate for cloud and a cloud operational model. I want you to explain to the audience how you're making that real. >>Actually, can I take that one? The best test was Monday morning, getting ready for this chat with you, Dave. They got me on the console and I'm not kidding, three clicks and I had backup up and running on the lab VMware instance. So I'll pass it to Omar for the real answer, but if I could do it in three clicks... >>And that shows the convenience of the service, even Tom can do it. Again, a very important question. When you look at the cloud operational model, as you abstract the hardware and take the management model up into a SaaS service, it gives our customers access to the continuous delivery that we have. We keep making the service model better in the cloud, and customers automatically get the value of it without reinstalling or going through a patch cycle or an upgrade cycle. But as we got into this cloud operational model, one of the things that was missing was, when you start to talk about applications, how are application workloads going to be deployed, how are they going to be protected, and how are they going to be expanded? So we expanded our InfoSight offerings by merging them into the Data Services Cloud Console, and we're releasing a new service called App Insights. It is going to be available to our customers at the end of the month. Nothing has to change, they don't have to install any agents or host modifications, nothing like that. If they are customers of Alletra, Nimble or Primera boxes and they're using InfoSight and the Data Services Cloud Console, they will automatically get App Insights. What App Insights does is tease apart all the data we have been collecting within InfoSight, and now, with the acquisition of HPE CloudPhysics, we're merging them together and relating the operational stack top to bottom. So we discover all the way from your application usage, network usage, storage usage, IOPS usage and VM values, cross-correlate them, and present that to the customer from an app or an outcome perspective, all in the Data Services Cloud Console. What this does for our customers is transform not only their operational experience but also their buying experience. If you remember, in one of the earlier releases of the Data Services Cloud Console we released a capability called intelligent intent-based provisioning, in which you just describe your workload and we go ahead and provision it. App Insights and InfoSight feed information directly into that, and CloudPhysics generates results and displays those analytics back to you, to your partner of record and to HPE, so we can all come together on a common data-driven discussion point with our customers to continue to make their journey better.
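The value Omar describes comes from correlating telemetry across layers rather than from any one metric. Here is a small, self-contained sketch of that idea: joining application, VM and storage metrics so the result can be rolled up per application. The metric names, identifiers and data layout are invented for the example and are not HPE's actual schema or implementation.

```python
# Illustrative sketch: correlating app, VM and storage telemetry into an
# app-centric view. Metric names and layout are invented for the example.
import pandas as pd

app_metrics = pd.DataFrame([
    {"app": "checkout", "vm": "vm-101", "latency_ms": 42},
    {"app": "checkout", "vm": "vm-102", "latency_ms": 55},
])
vm_metrics = pd.DataFrame([
    {"vm": "vm-101", "array_volume": "vol-7", "cpu_pct": 71},
    {"vm": "vm-102", "array_volume": "vol-9", "cpu_pct": 34},
])
storage_metrics = pd.DataFrame([
    {"array_volume": "vol-7", "iops": 1800, "read_latency_ms": 1.2},
    {"array_volume": "vol-9", "iops": 400, "read_latency_ms": 0.6},
])

# Join the layers so each application row carries its VM and volume context
stack = (app_metrics
         .merge(vm_metrics, on="vm")
         .merge(storage_metrics, on="array_volume"))

# Roll up to an app or outcome perspective: worst latency and total IOPS per app
summary = stack.groupby("app").agg(
    worst_latency_ms=("latency_ms", "max"),
    total_iops=("iops", "sum"),
)
print(summary)
```

Presenting the joined view per application, rather than per array or per VM, is what lets an operator reason in terms of outcomes instead of boxes.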
Yeah, we're just tremendously excited about this transformation and really the reception we've got in the market from analysts, from partners, from customers because you're right, you haven't heard us talk about a box at all today. It's really about a block service, a file on the object service, a backup and recovery service, disaster recovery service. That that's that is the the language, if you will of the business problems of our customers not, do they need to pick this widget or that widget. And how many apps can I get here and there? And which did the h a cage protection scheme be that, is that, is our job to manage underneath are true North, which is the cloud operational model. And so that's going to be really how we we've set our course and how we will uh kind of deliver products solutions offers into the market underneath that umbrella, Ultimately, um getting our customers wherever their data is Dave to be able to interact at that service level instead of at that infrastructure box >>level, you've got my attention wherever the data. So that's the north star here is this is, you know, you're not done today obviously, but you've got a vision to bring that to the cloud across clouds on prem out to the edge. That's the abstraction layer that you're gonna build, your hiding all that complexity. That's correct. And that's cloud. The definition of cloud is changing. >>Yeah, >>it's no longer started, it's no longer a remote set of services. Somewhere up in the cloud. It's expanding on prem hybrid across clouds edge >>everywhere. You're exactly right. Dave it is, cloud is more about the experience and the outcome. It gives a customer than actually where the compute or storage is. We've chosen to take a very customer an agnostic position of whether it's, you know, data in your premise, data in your cloud. We're going to help you manage that data and deliver, you know, that data to workloads and analytics, uh, wherever the, wherever the compute needs to be, where the data needs to be. Again, technologies like Xarelto giving instability and move data across clouds from facilities and clouds back and forth. So it's a really exciting new day for HP. Green Lake were just so super happy to bring these technologies out and really continue to follow on the course of doing what we said, we would do >>the new mindset starts there, I guess it's obviously knew certainly new technologies, uh, you're talking about machine intelligence is a metadata challenge. Absolutely. Big time, you know, long term that North Star that we talked about and applying that machine intelligence, all the experience that you gather data that you're gathering is, I think ultimately how customers want you to solve this problem >>in the middle of info site data services, cloud console and the instrumentation that is already shipping on our appliances, both in edge appliances and the data center appliances were collecting more than a trillion data points over the period of a quarter. Right at the end of the day. So it's harnessing that at the back end to cross relate and then using the cloud physics accusation. What we're doing is we can now simulate these things on behalf of our customers into the future timeline. So at the end of the day, it's really about listening to the customer and what outcomes that they want to achieve with their data storage is there we provide excellent persistence layers where customers can store their data safely. 
But at the end of the day it's the customer's choice. They can store their data out at the edge on compute servers, commodity x86 servers, they can have their data in a data center they privately own, or their data can be with a service provider or in a hyperscaler. The infrastructure of the persistence layer is independent from the data services. The Data Services Cloud Console provides our customers with a SaaS-based, industry-leading, metadata-rich management experience, which then allows you to draw conclusions. Services like CloudPhysics and InfoSight provide the analytics and the richness of the metadata, the backup and recovery service lets us index our customers' data and add rich metadata to it, and then we combine that with Zerto, which is our disaster recovery as a service offering. That gives the customer a very simple slider for where they want their protection levels to be: do they want protection to be near-instant, or is a lazier eight-hour window fine. At the end of the day it's about choice, without managing the complexities of the hardware underneath. >>Because it's programmable, completely, right? What I'm hearing is file, object, block, so you're multi-protocol. I've got a full stack, so data reduction, my snapshots, my replication, whatever I need is in there as a service. I can access latency-sensitive storage if I need to, or I can push it out to cheaper stores, I can push it out to the cloud, presumably someday I can air gap it, and it's all done as infrastructure as code, with different protection levels. Where I see this going, and where it really gets exciting, is you're now a data company, and you're bringing AI and machine intelligence and driving data products and data services for your customers, who are going to monetize that at their end of the value chain. >>That's right. And safely and securely. Keeping in mind that with Zerto's technology we can give you recovery points measured in seconds to protect against ransomware. So all of that operational elegance, all those insights and intelligence, help you build a more agile, workload-centric organization, but then do it safely and securely against ransomware. That's kind of the storm, if you will, that's brewing, and we're just really excited to be at the eye of it. >>I'm excited too. I've been waiting for this day for a long time, and we're not talking about NVMe and atomic writes, and I love that stuff by the way, I'm sure it's all under the covers, but that's not what drives business value. Guys, thanks so much for coming on theCUBE. >>Thanks for having us, David. It's been great. Thank you. >>All right. We're seeing a transformation all through the stack. Keep it right there, this is Dave Volonte for theCUBE. Our coverage of HPE's Green Lake announcements will be right back.

Published Date : Sep 28 2021

SUMMARY :

Dave Volonte talks with Tom Black and Omar Assad of Hewlett Packard Enterprise about the expansion of HPE Green Lake into data protection as a service. They discuss the Zerto acquisition and disaster recovery as a service, the new VMware backup as a service offered from the Data Services Cloud Console, App Insights and the CloudPhysics acquisition, and how HPE is delivering block, file, object, backup and DR as services under a cloud operational model, with ransomware protection as a key driver.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tom BlackPERSON

0.99+

DavePERSON

0.99+

Dave VolontePERSON

0.99+

MayDATE

0.99+

DavidPERSON

0.99+

XareltoORGANIZATION

0.99+

IOSTITLE

0.99+

eight hoursQUANTITY

0.99+

100%QUANTITY

0.99+

early MayDATE

0.99+

HPORGANIZATION

0.99+

H P EORGANIZATION

0.99+

three clicksQUANTITY

0.99+

tomPERSON

0.99+

one key themeQUANTITY

0.99+

oneQUANTITY

0.99+

OmarPERSON

0.98+

Omar assadPERSON

0.98+

todayDATE

0.98+

OneQUANTITY

0.98+

bothQUANTITY

0.98+

HP Green LakeORGANIZATION

0.98+

TurkeyLOCATION

0.97+

North StarORGANIZATION

0.97+

Hewlett Packard EnterpriseORGANIZATION

0.97+

monday morningDATE

0.96+

firstQUANTITY

0.96+

fiveQUANTITY

0.95+

HBsORGANIZATION

0.95+

pandemicEVENT

0.94+

CeosORGANIZATION

0.94+

this monthDATE

0.93+

EmmyPERSON

0.91+

HPEORGANIZATION

0.91+

more than a trillion data pointsQUANTITY

0.91+

3.5 better deQUANTITY

0.9+

threeQUANTITY

0.89+

xylitolORGANIZATION

0.85+

secondQUANTITY

0.85+

first onesQUANTITY

0.84+

eachQUANTITY

0.83+

zeroORGANIZATION

0.82+

SASORGANIZATION

0.82+

endDATE

0.81+

Green LakeORGANIZATION

0.81+

HPC storageORGANIZATION

0.78+

six KQUANTITY

0.77+

electraORGANIZATION

0.77+

HBs Green LakeORGANIZATION

0.74+

CubanOTHER

0.73+

onceQUANTITY

0.72+

NicoloORGANIZATION

0.7+

Atomic RightsORGANIZATION

0.7+

H B Green LakeORGANIZATION

0.7+

AthensLOCATION

0.68+

gigQUANTITY

0.63+

CloudCOMMERCIAL_ITEM

0.62+

monthDATE

0.61+

a quarterQUANTITY

0.61+

XareltoTITLE

0.61+

thingsQUANTITY

0.59+

envyORGANIZATION

0.57+

HPVOTHER

0.56+

86OTHER

0.44+

consoleTITLE

0.36+

electronTITLE

0.32+

Holger Mueller and Dion Hinchcliffe


 

>>we're back, we're assessing the as a service space. H. P. S. Green Lake announcements, my name is Dave balanta, you're watching the cube die on Hinchcliffe is here along with Holger muller, these are the constellation kids, extraordinary analysts guys. Great to see you again. I mean it super experienced. You guys, you deal with practitioners, you deal your technologist, you've been following this business for a long time. Diane, We spoke to Holger earlier, I want to start with you uh when you look at this whole trend to as a service, you see a lot of traditional enterprise companies, hard traditionally hardware companies making that move for for a lot of obvious reasons are they sort of replicating in your view, a market that you know well and sas what's your take on how they're doing generally that trend and how HP is >>operating well. Hp has had a unique heritage. They're coming at the whole cloud story and you know the Hyper Scaler story from a different angle than a lot of their competitors and that's mostly a good thing because most of the world is not yet on the cloud, They actually came from H. P. S original world, their line of servers and networks and so on. Um and and so they bring a lot of credibility saying we really understand the world you live in now but we want to take you to that that as a service future. Uh and and you know, since we understand you so well and we also understand where this is going and we can adapt that to that world. Have a very compelling story and I think that with green like you know, was first started about four years ago, it was off to the side uh you know, with all the other offerings now it's it's really grown up, it's matured a lot and I think you know, as we talked about the announcements, we'll see that a lot of key pieces have fallen into place to make it a very compelling hybrid cloud option for the enterprise. >>Let's talk about the announcement. Was there anything in particular that stood out the move to data management? I think it's pretty interesting is a tam expansion strategy. What's your take on the >>announcement? Well, the you know, the unified analytics uh story I think is really important now. That's the technology piece where they say, they say we can give you a data fabric, you can access your data outside of its silos. It doesn't address a lot of the process and cultural issues around data ownership inside the enterprise, but it's you know, having in the actual platform and as you articulating it as a platform, that's one of the things that was also evident, they were getting better and better at saying this is a hybrid cloud platform and it has all the pieces that you would expect, especially the things like being able to bring your data from wherever it is to wherever people needed to be. Uh you know, that's the Holy Grail, so really glad to see that component in particular. I also like the cloud adoption framework saying we understand how to take you from this parochial world of servers that you have and do a cloud date of hybrid world and then maybe eventually get you get you to a public cloud. We understand all the steps and all the components uh I think that's uh you know, I have a study that fully in depth but it seems to have all the moving parts >>chime in anything stand out to, you >>know, I think it's great announcements and the most important things H. P. S and transformation and when you and transformation people realize who you've been, the old and they're here. 
Maybe the mass of the new but an experienced technology but I will not right away saying oh it's gonna happen right. It's going to happen like this is gonna be done, it's ready, it's materials ready to use and so on. So this is going to give more data points, more proof points, more capabilities that HB is moving away from whatever they were before. That's not even say that to a software services as a service as you mentioned provider. It's >>been challenging, you look at the course of history for companies that try to go from being a hardware company to a software company, uh HP itself, you know, sort of gave up on that IBM you could say, you know semi succeeded but they've they've struggled what's different >>That will spend 30 billion, >>30 >>four. Exactly. So and of course Cisco is making that transition. I mean every traditional large companies in that transition. What about today? Well, first of all, what do you think about HP es, prospects of doing so? And are there things today in the business that make that, you know more faster, whether it's containers or the cloud itself or just the scale of the internet? >>I mean it's fascinating topic, right? And I think many of the traditional players in the space failed because they wanted to mimic the cloud players and they simply couldn't muster up the Capex, which you need to build up public cloud. Right? Because if you think of the public cloud players then didn't put it up for the cloud offering, they put it up because they need themselves right, amazon is an online retailer google as a search and advertising giant Microsoft is organic load from from from office, which they had to bring to the cloud. So it was easier for them to do that. So no wonder they failed. The good news is they haven't lost much of their organic load. Hp customers are still HP customer service, celebrity security in their own premises and now they're bringing the qualities of the cloud as a service, the pay as you go capabilities to the on premise stack, which helps night leader to reduce complexity and go to what everybody in the post pandemic world wants to get to, which is I only pay for what I use and that's super crucial because business goes up and down. We're riding all the waves in a much, much faster way than ever before. Right before we had seven year cycles, it was kind of like cozy almost now we're down to seven weeks, sometimes seven days, sometimes seven hour cycles. And I don't want to pay for it infrastructure, which was great for how my business was two years ago. I want to pay for it as I use it now as a pivot now and I'm going to use >>Diane. How much of this? Thank you for that whole girl. How much of this is what customers want and need versus sort of survival tactics on the vendors >>part. So I think that there, if you look at where customers want to go, they know they have to go cloud, they had to go as a service. Um, and that they need to make multiple steps to get there. And for the most part, I see green light is being a, a highly credible market response to say, you know, we understand IT better, we helped build you guys up over the last 30 years. We can take you the rest of the way, here's all the evidence and the proof points, which I think a lot of the announcements provide uh, and they're very good on cloud native, but the area where the story, um, you may not be the fullest strength it needs to be is around things like multi cloud. So when I talked to almost any large organization C I O. 
They have all the clouds need to know, how do I make all this fit together? How do I reconcile that? So for the most part, I think it's closely aligned with actual customer requirements and customer needs. I think these have additional steps to go >>is that, do you feel like that's a a priority? In other words, they got to kind of take a linear path. They got to solve the problem for their core customer base or is it, do you feel like that's not even necessarily an aspiration? And it seems like customers, I want them to go. There is what I'm >>inferring that you're, so I do. Well let's go back to the announcement specifically. So there's there are two great operational announcements, one around the cloud physics and the other one around info site. It gives a wealth of data, you know, full stack about how things are operating, where the needs are, how you might be able to get more efficiencies, how you can shut down silicon, you're not using a lot of really great information, but all that has to live with a whole bunch of other consoles and everybody is really craving the single piece of glass. That's what they want is they want to reduce complexity as holder was saying and say, I want to be able to get my arms around my data center and all of my cloud assets. But I don't want to have to check each cloud. I want it in one place. So uh, but it's great to see those announcements position them for that next step. They have these essential components that are that look, you know, uh, they look best to breed in terms of their capabilities are certainly very modern now. They have to get the rest of that story. >>Hope you were mentioning Capex. I added it up I think last year the big four include Alibaba, spent 100 billion on the Capex and generally the traditional on prem players have been defensive around cloud. Not everything is moving to the cloud, we all know that. But I, I see that as a gift in a way that the companies like HP can build on top of into Diane's point that, you know, extend cross clouds out to the edge, which is, you know, a trillion dollar opportunity, which is just just massive. What are your thoughts on HBs opportunities there and chances of maybe breaking away from the pack >>I think definitely well there's no matter pack left, like there's only 23, it's a triumvirate of maybe it's a good thing from a marketing standpoint. There's not a long list of people who give me hardware in my data center. But I think it increases their chances, right? Like I said, it's a transformation, there's more credibility, there's more data point, there's more usage. I can put more workloads on this. And I see, I also will pay attention to that and look at that for the transformation. No question. >>Yeah. And speaking of C. I. O. S. What are you hearing these days? What's their reaction to this whole trend toward as a service? Do they, do they welcome it? Do they feel like okay it's a wait and see. Uh I need more proof points. What's the sentiment? >>Well, you have to divide the Ceo market basically two large groups. One is the the ones that are highly mature. They tend to be in larger organizations are very sophisticated consumers of everything. They see the writing on the wall and that for most things certainly not everything as a service makes the most sense for all the reasons we know, agility and and and speed, you know, time to value scalability, elasticity, all those great things. Uh And then you have the the other side of the market which they really crave control. 
They have highly parochial worlds that they've built up um that are hard to move to the cloud because they're so complex and intertwined because they haven't had that high maturity. They have a lot of spaghetti architecture. They're not really ready to move the cloud very quickly. So the the second audience though is the largest one and it's uh you know, the hyper scales are probably getting a lot of the first ones. Um, but the bigger markets, really the second one where the folks that need a lot of help and they have a lot of legacy hardware and software that they need to move and that H P. E understands very well. And so I think from that standpoint they're well positioned to take advantage of an untapped market are relatively untapped market in comparison. Hey, >>in our business we all get pulled in different directions because it would get to eat. But what are some of the cool things you guys are working on in your research that you might want people to know about? >>Uh, I just did a market overview for enterprise application platforms. I'm a strong believer that you should not build all your enterprise software yourself, but you can't use everything that you get from your typical SAs provider. So it's focusing on the extent integration and build capabilities. Bill is very, very important to create the differentiation in the marketplace and all the known sauce players basically for their past. Right? My final example is always to speak in cartoons, right? The peanuts, right? There's Linus of this comfort blanket. Right? The past capability of the SARS player is the comfort blanket, right? You don't fit 100% there or you want to build something strategic or we'll never get to that micro vertical. We have a great enterprise application, interesting topic. >>Especially when you see what's happening with Salesforce and Service now trying to be the platform platforms. I have to check that out. How about >>Diane? Well and last year I had a survey conducted a survey with the top 100 C IOS and at least in my view about what they're gonna do to get through this year. And so I'm redoing that again to say, you know, what are they gonna do in 2022? Because there's so many changes in the world and so, you know, last year digital transformation, automation cybersecurity, we're at the top of the list and it'll be very interesting. Cloud was there too in the top five. So we're gonna see what, how it's all going to change because next year is the year of hybrid work where we're all we have to figure out how half of our businesses are in the office and half are at home and how we're gonna connect those together and what tools we're gonna make, that everybody's trying to figure >>out how to get hybrid. Right, so definitely want to check out that research guys. Thanks so much for coming to the cubes. Great to see you. >>Thanks. Thanks Dave >>Welcome. Okay and thank you for watching everybody keep it right there for more great content from H. P. S. Green Lake announcement. You're watching the cube. Mm this wasn't

Published Date : Sep 26 2021

SUMMARY :

Dave Volonte assesses HPE's Green Lake as-a-service announcements with Constellation Research analysts Holger Mueller and Dion Hinchcliffe. They discuss how credible HPE's hybrid cloud story is, the unified analytics and cloud adoption framework announcements, what CIOs want from the as-a-service operational model, the remaining gaps around multi-cloud and a single pane of glass, and the research each analyst is currently working on.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MicrosoftORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

Dave balantaPERSON

0.99+

AlibabaORGANIZATION

0.99+

IBMORGANIZATION

0.99+

DianePERSON

0.99+

amazonORGANIZATION

0.99+

DavePERSON

0.99+

HPORGANIZATION

0.99+

30 billionQUANTITY

0.99+

seven daysQUANTITY

0.99+

last yearDATE

0.99+

100 billionQUANTITY

0.99+

2022DATE

0.99+

100%QUANTITY

0.99+

Holger MuellerPERSON

0.99+

Dion HinchcliffePERSON

0.99+

next yearDATE

0.99+

seven hourQUANTITY

0.99+

googleORGANIZATION

0.99+

OneQUANTITY

0.99+

each cloudQUANTITY

0.99+

second audienceQUANTITY

0.98+

second oneQUANTITY

0.98+

todayDATE

0.98+

oneQUANTITY

0.98+

23QUANTITY

0.98+

Holger mullerPERSON

0.98+

seven weeksQUANTITY

0.98+

two years agoDATE

0.98+

seven yearQUANTITY

0.98+

HpORGANIZATION

0.97+

HolgerPERSON

0.97+

this yearDATE

0.97+

two large groupsQUANTITY

0.95+

SARSORGANIZATION

0.94+

halfQUANTITY

0.94+

C IOSTITLE

0.94+

firstQUANTITY

0.94+

one placeQUANTITY

0.94+

HP esORGANIZATION

0.92+

last 30 yearsDATE

0.91+

HinchcliffePERSON

0.91+

single piece of glassQUANTITY

0.9+

LinusPERSON

0.9+

CapexORGANIZATION

0.88+

H. P. S. Green LakePERSON

0.88+

H. P. S. Green LakeORGANIZATION

0.88+

HBORGANIZATION

0.87+

SalesforceORGANIZATION

0.87+

about four years agoDATE

0.85+

two great operational announcementsQUANTITY

0.83+

H. P. SORGANIZATION

0.82+

fourQUANTITY

0.81+

top fiveQUANTITY

0.8+

first onesQUANTITY

0.78+

Hyper ScalerTITLE

0.75+

pandemicEVENT

0.73+

businessesQUANTITY

0.7+

ServiceORGANIZATION

0.66+

top 100QUANTITY

0.65+

O.PERSON

0.62+

BillPERSON

0.59+

DiaORGANIZATION

0.59+

dollarQUANTITY

0.57+

CeoORGANIZATION

0.53+

wavesEVENT

0.53+

H P. EORGANIZATION

0.52+

oreQUANTITY

0.48+

H.TITLE

0.38+

Sean Knapp, Ascend.io & Jason Robinson, Steady | AWS Startup Showcase


 

(upbeat music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics, Cloud Management Tools, featuring Ascend.io for the data and analytics track. I'm your host, John Furrier with theCUBE. Today, we're proud joined by Sean Knapp, CEO and founder of Ascend.io and Jason Robinson who's the VP of Data Science and Engineering at Steady. Guys, thanks for coming on and congratulations, Sean, for the continued success, loves our cube conversation and Jason, nice to meet you. >> Great to meet you. >> Thanks for having us. >> So, the session today is really kind of looking at automating analytics workloads, right? So, and Steady as a customer. Sean, talk about the relationship with the customer Steady. What's the main product, what's the core relationship? >> Yeah, it's a really great question. when we work with a lot of companies like Steady we're working hand in hand with their data engineering teams, to help them onboard onto the Ascend platform, build these really powerful data pipelines, fueling their analytics and other workloads, and really helping to ensure that they can be successful at getting more leverage and building faster than ever before. So we tend to partner really closely with each other's teams and really think of them even as extensions of each other's own teams. I watch in slack oftentimes and our teams just go back and forth. And it's like, as if we were all just part of the same company. >> It's a really exciting time, Jason, great to have you on as a person cutting your teeth into this kind of what I call next gen data as intellectual property. Sean and I chat on theCUBE conversation previous to this event where every company is a data company, right? And we've heard that cliche. >> Right. >> But it's true, right? It's going to, it's getting more powerful with the edge. You seeing more diverse data, faster data, small, big, large, medium, all kinds of different aspects and patterns. And it's becoming a workflow kind of intellectual property paradigm for companies, not so much. >> That's right. >> Just the tech it's the database is you can, it's the data itself, data in flight, it's moving around, it's got value. What's your take-- >> Absolutely. >> On this trend? >> Basically, Steady helps our members and we have a community of members earn more income. So we want to help them steady their financial lives. And that's all based on data, so we have a web app, you could go to the iOS Store, you could go to the Google Play Store, you can download the app. And we have a large number of members, 3 million plus, who are actively using this. And we also have a very exciting new product called income passport. And this helps 1099 and mixed wage earners verify their income, which is very important for different government benefits. And then third, we help people with emergency cash grants as well as awards. So all of that is built on a bedrock of data, so if you're using our apps, it's all data powered. So what you were mentioning earlier from pipelines that are running it real time to yeah, anything, that's a kind of a small data aggregation, we do everything from small to real-time and large. >> You guys are like a multiple sided marketplace here, you've got it, you're a FinTech app, as well as the future of work and with virtual space-- >> That's right. 
>> Happening now, this is becoming, actually encapsulates kind of the critical problems that people trying to solve right now, you've got multiple stakeholders. >> That's right. >> In the data. >> Yes, we absolutely do. So we have our members, but we also, within the company, we have product, we have strategy, we have a growth team, we have operations. So data engineering and data science also work with a data analytics organization. So at Steady we're very much a data company. And we have a data organization led by our chief data officer and we have data engineering and data science, which are my teams, but also that business insights and analytics. So a lot of what we're building on the data engineering side is powering those insights and analytics that the business stakeholders use every day to run the organization. >> Sean, I want to get your thoughts on this because we heard from Emily Freeman in the keynote about how this revolution in DevOps or for premiering her talk around how, it's not just one persona anymore, I'm a release engineer, I'm this kind of engineer, you're seeing now all engineering, all developers are developers. You have some specialty, but for the most part, the team makeups are changing. We touched on this in our cube conversation. The journey of data is not just the data people, the data folks. It's like there's, they're developers too. So the confluence of data science, data management, developing, is changing the team and cultural makeup of companies. Could you share your thoughts on this dynamic and how it impacts customers? >> Absolutely, I think the, we're finding a similar trend to what we saw a number of years ago, when we talked about how software was eating the world and every company was now becoming a software company. And as a result, we saw this proliferation and expansion of what the software roles look like and thought of a company pulled through this entire new era of DevOps. We were finding that same pattern now emerging around data as not only is every company a software company, every company is a data company and data really is that field, that oil that fuels the business and in doing so, we're finding that as Jason describes it's pervasive across the team, it is no longer just one team that is creating some insights and reports around operational analytics, or maybe a team over here doing data science or machine learning. It is expensive. And I think the really interesting challenges that start to come with this too, are so many data teams are so over capacity. We did a recent study that highlighted that 96% of data teams are at, or over capacity, only 4% had spare capacity. But as a result, the net is being cast even wider to pull in people from even broader and more adjacent domains to all participate in the data future of their organization. >> Yeah, and I think I'd love to get your guys react to this conversation with Andy Jassy, who's now the CEO of Amazon, but when he was the CEO of AWS last year, I talked with him about how the old guard and new guard are thinking around team formations. Obviously team capacity is growing and challenged when you've got the right formula. So that's one thing, right? But what if you don't have the right formula? If you're in the skills gap, problem, or team formation side of it, where you maybe there was two years ago where the mandate came down? Well, we got to build a data team even in two years, if you're not inquisitive. 
And this is what Andy and I were talking about is the thinking and the mindset of that mission and being open to discovering and understanding the changes, because if you were deciding what your team was two, three years ago, that might have changed a lot. So team capacity, Sean, to your point, if you got it right, and that's a challenge in and of itself, but what if you don't have it, right? What do you guys think about this? >> Yeah, I think that's exactly right. Basically trying to see, look and gaze into the crystal ball and see what's going to happen in a year or two years, even six months is quite difficult. And if you don't have it right, you do spend a lot of time because of the technical debt that you've amassed. And we certainly spend quite a bit of time with technical debt for things we wanted to build. So, deconvolving that, getting those ETLs to a runnable state, getting performance there, that's what we spend a bit of time on. And yeah, it's something that it's really part of the package. >> What do you guys see as the big challenge on teams? The scaling challenge okay. Formation is one thing, Sean, but like, okay, getting it right, getting it formed properly and then scaling it, what are the big things you're seeing? >> One of the, I think the overarching management themes in general, it is the highest out by the highest performing teams are those where the individual with the context and the idea is able to execute as far and as fast and as efficiently as possible, and removing a lot of those encumbrances and put it a slightly different way. If DevOps was all basically boiled down to, how do we help more people write more software faster and safely data ops would be very similarly, how do we enable more people to do more things with data faster and safely? And to do that, I think the era of these massive multi-year efforts around data are gone and hopefully in the not too distant future, even these multi-quarter efforts around data are gone and we get into a much more agile, nimble methodology where smaller initiatives and smaller efforts are possible by more diverse skillsets across the business. And really what we should be doing is leveraging technology and automation to ensure that people are able to be productive and efficient and that we can trust our data and that systems are automated. And these are problems that technology is good at. And so in many ways, how in the early days Amazon would described as getting people out of the muck of DevOps. I think we're going to do the same thing around getting people out of the muck of the data and get them really focused on the higher level aspects. >> Yeah, we're going to get into that complexity, heavy lifting side muck, and then the heavy lifting taking away from the customers. But I want to go back to real quick with Jason while we're on this topic. Jason, I was just curious, how much has your team grown in the recent year and how much could've, should've grown, what's the status and how has Ascend helped you guys? What's the dynamic there? ' Cause that's their value proposition. So, take us through that. >> Absolutely, so, since the beginning of the year data engineering has doubled. So, we're a lean team, we certainly use the agile mindset and methodologies, but we have gone from, yeah, we've essentially doubled. So a lot of that is there's just so much to do and the capacity problem is certainly there. So we also spend a lot of time figuring out exactly what the right tooling is. 
And I was mentioning the technical debt. So there's the big O notation, if you will, of technical debt. When you're building new things, you're fixing old things, and then you're trying to maintain everything, and that scaling starts to hit hard. So even if we continue to double, I mean, we could easily add more data engineers. And a lot of that is, I mean, you know about the hiring cycles; there's a lot of great talent, but it's difficult to make all of those hires. So we do spend quite a bit of time thinking about exactly what tools data engineering is using day to day. And what I mentioned were technologies on the streaming side all the way to the small batch things. But something that starts as a small batch can grow and grow and grow and take, say, 15 hours. It's possible, I've seen it. And getting that back down, and managing that complexity while not overburdening people who probably don't want to spend all their waking hours building ETLs, maintaining ETLs, putting in monitoring, putting in alerting, that I think is quite a challenge.

>> It's so funny, because you mentioned 18 hours and you kind of, you didn't roll your eyes, but you almost did. But people want it yesterday, they want real time, so there's a lot of demand--

>> Yes.

>> On the minds of the business, on the outcome side of it. So I've got to ask you, because this comes up a lot with technical debt, and now we're starting to see that come into the data conversation. I'm always curious: is there a different kind of technical debt with data? Because again, data is like software, but it's a little bit more elusive in the sense that it's always changing. So what kind of technical debt do you see on the data side that's different than, say, the software side?

>> Absolutely, that's a great question. So a lot of the thinking about your data, and structuring your data, and how you want to use that data going into a particular project might be different from what happens after stakeholders have new considerations and new products and new items that need to be built. So thinking about, let's say you have a document store, or you have something that you thought was going to be nice and structured, how that can evolve and support those particular products: unless you take the time and go through and say, well, let's architect it perfectly so that we can handle that, you're going to make trade-offs and choices, and essentially that debt builds up. So you start cutting corners, you start changing your normalization. You start taking those implicit schemas that then tend to build into big things, big implicit schemas. And then of course, with implicit schemas, you're going to have a lot of null values, you're going to have a lot of items to deal with. So how do you deal with that? And then you also have the opportunity to create keys and values and, oops, do we take out those keys that were slightly misspelled? So I could go on for hours, but basically the technical debt certainly is there with data. I see a lot of this as just a spectrum of technical debt, because it's all trade-offs that you made to build a product, and the inefficiency starts to hit you. So, the 15 hour ETL I was mentioning: basically you start with something, you were building things for stakeholders, and essentially you have so much complex logic within that. So for the transforms that you're doing, if you're thinking of the bronze, silver, gold kind of framework, going from that bronze to a silver, you may have a massive number of transformations or just a few, just to lightly dust it. But you could also go to gold with many more transformations, and managing that, managing the complexity, managing what you're spending for servers day after day after day, that's another real challenge of that technical debt stuff.
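To put Jason's implicit-schema point in code, here is a minimal Python sketch of a bronze-to-silver normalization step that has to absorb misspelled keys and missing values. The field names, aliases and defaults are hypothetical, illustrative only, and not Steady's actual data model.

```python
from datetime import datetime, timezone

# Aliases that have drifted into the raw (bronze) documents over time.
# "memberID" next to a stray "memberId " is the kind of implicit-schema
# debt described above: nobody planned it, it just accumulated.
KEY_ALIASES = {
    "member_id": ["member_id", "memberID", "memberId "],
    "income": ["income", "monthly_income"],
    "updated_at": ["updated_at", "updatedAt"],
}

def first_present(doc, aliases):
    """Return the first alias value that is present and non-null in the document."""
    for key in aliases:
        if key in doc and doc[key] is not None:
            return doc[key]
    return None

def bronze_to_silver(doc):
    """Normalize one raw document into an explicit 'silver' schema.

    Anything that cannot be resolved becomes an explicit default, so
    downstream transforms see a stable set of columns, not a shifting one.
    """
    return {
        "member_id": first_present(doc, KEY_ALIASES["member_id"]),
        "income": float(first_present(doc, KEY_ALIASES["income"]) or 0.0),
        "updated_at": first_present(doc, KEY_ALIASES["updated_at"])
                      or datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    raw = {"memberID": "m-123", "monthly_income": "2500"}  # a drifted document
    print(bronze_to_silver(raw))
```

Every alias added to a table like KEY_ALIASES is exactly the trade-off Jason mentions: it keeps the pipeline running, but it is interest paid on debt until the source schema is cleaned up.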
>> That's a great lead into my next question for Sean. This is the disparate system complexity. Technical debt in software was always, kind of, the belief was: oh yeah, I'll take some technical debt on and work it off once I get visibility into, say, unit economics or some sort of platform or tool feature, and then you work it off as fast as possible. This becomes the art and science of technical debt. Jason, what you're saying is that this can get unwieldy pretty quickly. You've got state and you've got a lot of different interconnected moving parts. This is a huge issue, Sean. Technical debt in the data world is much different architecturally. If you don't get it right, this is a huge, huge issue. Could you illuminate why that is and what you guys are doing to help unify and change some of those conditions?

>> Yeah, absolutely. When we think about technical debt, and I'll keep drawing some parallels between DevOps and DataOps, 'cause I think there's a tremendous number of similarities in these worlds, we used to always have the saying that "your tech debt grows linearly across microservices, but exponentially within services." And so you want that right level of architecture and composability, if you will, of your systems, where you can deploy changes, you can test, you can have high degrees of confidence in the roll-outs. And I think the interesting part on the data side, as Jason highlighted, is that the big O notation for tech debt in the data ecosystem is still fairly exponential or polynomial in nature, as right now we don't have great decomposition of the components. We have different systems: we have a streaming system, we have databases, we have document stores and so on, but how the whole data pipeline, data engineering part works generally tends to be pretty monolithic in nature. You take your whole data pipeline and you deploy the whole thing and you basically just cross your fingers, and hopefully it's not 15 hours, but if it is 15 hours, you go to sleep, you wake up the next morning, grab a coffee, and then maybe it worked. And that iteration cycle is really slow. And so when we think about how we can improve these things, it's combinations of intelligent systems that do instantaneous schema detection and validation. It's things like automated lineage and dependency tracking, so you know, when you deploy code, what piece of data it affects. It's things like automated testing on individual core parts of your data pipelines, to validate that you're getting the expected output that you need. So it's pulling a lot of these same DevOps-style principles into the data world, which, going back to how do you help more people build more things faster and safely, is really designed for rapid iterations and rapid feedback, so you know if there are breaks in the system much earlier on.
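Sean's three ideas, schema validation, dependency and lineage tracking, and component-level testing, can be sketched generically. The following is a toy Python illustration, not Ascend's actual engine or API; the component names, datasets and schemas are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    inputs: list = field(default_factory=list)   # upstream dataset names
    output: str = ""                             # dataset this component produces
    schema: dict = field(default_factory=dict)   # expected column -> type

# A toy pipeline, listed in topological order: connector -> clean -> aggregate.
PIPELINE = [
    Component("ingest_members", [], "bronze.members",
              {"member_id": str, "income": float}),
    Component("clean_members", ["bronze.members"], "silver.members",
              {"member_id": str, "income": float}),
    Component("member_metrics", ["silver.members"], "gold.metrics",
              {"avg_income": float}),
]

def downstream_of(changed_component: str) -> list:
    """Dependency tracking: which components are affected by redeploying one?"""
    outputs = {c.name: c.output for c in PIPELINE}
    affected, frontier = [], {outputs.get(changed_component)}
    for c in PIPELINE:  # relies on the topological ordering above
        if set(c.inputs) & frontier:
            affected.append(c.name)
            frontier.add(c.output)
    return affected

def validate_output(component_name: str, rows: list) -> bool:
    """Schema validation on a single component's output instead of crossing fingers."""
    schema = next(c.schema for c in PIPELINE if c.name == component_name)
    return all(
        set(row) == set(schema)
        and all(isinstance(row[col], typ) for col, typ in schema.items())
        for row in rows
    )

if __name__ == "__main__":
    print(downstream_of("ingest_members"))  # ['clean_members', 'member_metrics']
    print(validate_output("clean_members", [{"member_id": "m-1", "income": 2500.0}]))
```

The point of the toy is the shape of the workflow: when you know exactly what a change touches and can validate each piece in isolation, you do not have to redeploy and rerun the whole 15-hour pipeline to find out whether it worked.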
>> Well, I think, Sean, you're onto something really big there. And I think this is something that's emerging pretty quickly in the cloud scale, what I call 2.0 or whatever version we're in: the systems thinking mindset. 'Cause you mentioned the model that was essentially a silo or a subsystem. It was cohesive in its own way, but it was monolithic. Now you have a broken-down set of decomposed data pieces that have to work together. So Jason, this is the big challenge that not a lot of people are really talking about. I think these guys are, and you're using them. What are you unifying? Because this is systems thinking, operating systems thinking. This is not like a database problem, it's a systems problem applied to data, where databases are just pieces of it. What are your thoughts?

>> That's absolutely right. So Sean touched on composability of ETL, and thinking about reusable components, thinking about pieces that all fit together, because as you're building something as complex as some of these ETLs are, we do think about the platform itself and how that lends to the overarching output. So one thing is being able to actually see the different components of an ETL and blend those in, using the DRY principle: don't repeat yourself. So you essentially are able to take pieces that one person built. Maybe John builds a couple of our connectors coming in, Sean also has a bunch of transforms, and I just want this stuff out, so I can use a lot of what you guys have already built. I think that's key, because a lot of engineering, and data engineering, is about managing complexity. So taking that complexity and essentially getting it out fast and getting it out error-free is where we're going with all of the data products we're building.

>> What are some of the complexities that you guys have, that you're dealing with? Can you be specific and share what these guys are doing to solve that problem for you? This is a big problem everyone's having, I'm seeing that all over the place.

>> Absolutely, so I could start at a couple of places. So I don't know if you guys are on the three Vs, four Vs or five Vs, but we have all of those. And if you go to that four or five V model, there is the veracity piece, where you have to ask yourself: is it true, is it accurate? So change happens throughout the pipeline. Change can come from webhooks, change can come from users. You have to make sure that you're managing that complexity. And what we're building, I mentioned that we are paying down a lot of tech debt, but we're also building new products. And one pretty challenging, quite challenging ETL that we're building is something going from a document store to an analytical application. So in that document store, we talked about flexible schema. Basically, you don't really know exactly what you're going to get day to day, and you need to be able to manage that change through the whole process in a way that the ultimate business users find value. So that's one of the key applications that we're using right now. And that's one that the team at Ascend and my team are working on hand in hand, going through a lot of those challenges. And I also watch the Slack, just as Sean does, and it's a very active discussion board. So it is essentially like they're just partnering together. It's fabulous, but yeah--
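One common way to manage that day-to-day change from a flexible-schema source is to diff each incoming document against the fields the analytical application expects and quarantine surprises, instead of letting them silently become new implicit schema. The sketch below is a generic Python pattern; the expected fields are hypothetical, and it is not the actual Steady or Ascend pipeline.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("schema-drift")

# Fields the downstream analytical application expects today.
EXPECTED_FIELDS = {"member_id", "income", "updated_at"}

def route(doc: dict) -> str:
    """Return 'ok' when the document matches the expected fields, otherwise
    'quarantine', so a changed webhook payload is surfaced instead of
    silently widening the implicit schema."""
    observed = set(doc)
    missing = EXPECTED_FIELDS - observed
    unexpected = observed - EXPECTED_FIELDS
    if missing or unexpected:
        log.warning("schema drift: missing=%s unexpected=%s doc=%s",
                    sorted(missing), sorted(unexpected), json.dumps(doc)[:200])
        return "quarantine"
    return "ok"

if __name__ == "__main__":
    print(route({"member_id": "m-1", "income": 2500.0, "updated_at": "2021-09-01"}))  # ok
    print(route({"member_id": "m-2", "income": 1800.0, "plan_tier": "gold"}))         # quarantine
```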
>> And you're seeing kind of a value on this too? I mean, in terms of output, what are the business results?

>> Yes, absolutely. So essentially, yes, the fifth V: value. Getting to that value, there are a few pieces to it. So there are some data products that we're building within that product, and they're data science and data analytics based products that essentially do things with the data that help the user. There's also the question of exactly the usage, and those kinds of metrics that people in ops want to understand, as well as our growth team. So we have internal and external stakeholders for that.

>> Jason, this is a great use case and a great customer. Sean, you guys are automating. For the folks watching, who are seeing their peer living the dream here, and the data journey, as we say, things are happening. What's the message to customers that you guys want to send? Because you guys are really cutting your teeth into a whole other level of data engineering, data platform, that's really about the systems view and about cloud. What's the pitch, Sean? What should people know about the company?

>> Absolutely, yeah. Well, one, I'd say even before the pitch, I would encourage people to not accept the status quo. And in particular, in data engineering today, the status quo is an incredibly high degree of pain and discomfort. And I think the important part of why Ascend exists, and why we're so helpful for our customers, is that there is a much more automated future of how we build data products, how we optimize those, and how we can get a larger cohort of builders into the data ecosystem. And that helps us get out of the muck, as we talked about before, and put really advanced technology to work for more people inside of our companies to build these data products, leveraging the latest and greatest technologies to drive increased business value faster.

>> Jason, what's your assessment of these guys? As people watching might say, hey, you know what, I'm going to contact them, I need this. How would you talk about Ascend to your peers?

>> Absolutely. So I think, just thinking about the whole process, it has been a great partnership. We started with a POC. I think Ascend likes to start with three use cases; I think we came out with four, and we went through the ones that we really cared about and really wanted to bring value to the company with. So we have roadmaps for some, as we're paying down technical debt and transitioning; others we can go to directly. And I think that, just like you're saying, John, thinking about that systems view of everything you're building, where that makes sense, you can actually take a lot of that complexity and encapsulate it in a way that you can essentially manage it all in that platform. So the Ascend platform has the composability piece that we touched on. And not only can you compose it, but you can drill into it. And my team is super talented and is going to drill into it. So they basically love to open up each of those data flows, each of the components therein, and they have the control there with the combination of Spark SQL, PySpark, Scala and so on. And I think that the variety of connections is also quite helpful. So thinking about the DRY principle from a systems perspective is extremely useful, because DRY, you often get that in a code review, right? "I think you can be a little bit more DRY here."

>> Yeah.

>> But you can really do that in the way that you're composing your systems as well.
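As a rough sketch of that DRY, composable idea, reusable pieces that different people contribute once and everyone else chains together, here is a generic Python illustration. The connector, transforms and compose helper are invented for the example; they are not the Ascend platform's API.

```python
import csv
from functools import reduce

def compose(*steps):
    """Chain reusable transform steps into one dataflow (DRY: write each step once)."""
    return lambda rows: reduce(lambda acc, step: step(acc), steps, rows)

# A connector one person might contribute once...
def csv_connector(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# ...and transforms someone else might contribute once.
def cast_income(rows):
    return [{**r, "income": float(r.get("income") or 0)} for r in rows]

def only_active(rows):
    return [r for r in rows if r.get("status") == "active"]

# A third person reuses both without reimplementing anything.
member_pipeline = compose(cast_income, only_active)

if __name__ == "__main__":
    rows = [{"member_id": "m-1", "income": "2500", "status": "active"},
            {"member_id": "m-2", "income": "", "status": "inactive"}]
    print(member_pipeline(rows))
    # [{'member_id': 'm-1', 'income': 2500.0, 'status': 'active'}]
```

In a real pipeline the steps would be Spark SQL or PySpark transforms rather than list comprehensions, but the composition principle is the same: each piece is declared once and can be inspected independently.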
>> That's a great, great point. One quick thing for the folks that are watching, that are trying to figure this out: a lot of architecture is going on. A lot of people are looking at different solutions. What things have you learned that you could give them as a tip, to avoid maybe some scar tissue, tips of the trade, where you can say, hey, go this way, be careful? What are some of the learnings? Could you give a few pointers to folks out there, if they're kicking the tires on the direction? What's the wrong direction, and what does the right direction look like?

>> Absolutely. Thinking it through, I don't know how much time we have; that feels like a few days' conversation as far as ways to go wrong. But absolutely, I think that thinking through exactly where you want to be is the key. Otherwise it's kind of like when you're writing a ticket in Jira: if you don't have clear success criteria, if you don't know where you're going to go, then you'll end up somewhere building something, and it might work. But if you think through the exact destination that you want to be at, that will drive a lot of the decisions as you think backwards to where you started. And also, Sean mentioned challenging the status quo. I think that you really have to be ready to challenge the status quo at every step of that journey. So if you start with some particular service that you had and it's legacy, if it's not essentially performing what you need, then it's okay to just take a step back and say, well, maybe that's not the one. So I think thinking through the system, just like you were saying, John, and also having a visual representation of where you want to go, is critical. So hopefully that encapsulates a lot of it, but yes, the destination is key.

>> Yeah, and having an engineering platform that also unifies the multiple components, and it's agile.

>> That's right.

>> It gets you out of the muck, and at the end of the day, undifferentiated heavy lifting is a cloud play.

>> Absolutely.

>> Sean, wrap it up for us here. What's the bumper sticker for your vision? Share the founding principles of the company.

>> Absolutely. For us, we started the company; I'm a founder in recovery and former CTO. The last company I founded, we had nearly 60 people on our data team alone and had invested tremendous amounts of effort over the course of eight years. And one of the things that I've learned is that over time, innovation comes just as much from deciding what you're no longer going to do as from what you're going to do. And focusing heavily around how do you get out of that muck, how do you continue to climb up that technology stack, is incredibly important. And so really, we are excited to be a part of it and to take the industry, as it continues to climb, to a higher and higher level. We're building more and more advanced levels of automation, and what we call our data awareness, into the automated engine of the Ascend platform, which takes us across the entire data ecosystem, connecting and automating all data movement. And so we have a very exciting vision for this fabric that's emerging over time.

>> Awesome. Sean, thank you so much for that insight. Jason, thanks for coming on as a customer of Ascend.io.

>> Thank you.

>> I appreciate it, gentlemen, thank you. This is the track on automating analytic workloads, here at the AWS Startup Showcase, the hottest companies, with Ascend.io. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Sep 22 2021

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Andy | PERSON | 0.99+
Jason | PERSON | 0.99+
Sean | PERSON | 0.99+
Emily Freeman | PERSON | 0.99+
Sean Knapp | PERSON | 0.99+
Jason Robinson | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Andy Jassy | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
15 hours | QUANTITY | 0.99+
Ascend | ORGANIZATION | 0.99+
last year | DATE | 0.99+
96% | QUANTITY | 0.99+
eight years | QUANTITY | 0.99+
15 hour | QUANTITY | 0.99+
iOS Store | TITLE | 0.99+
18 hours | QUANTITY | 0.99+
Google Play Store | TITLE | 0.99+
Ascend.io | ORGANIZATION | 0.99+
Steady | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
six months | QUANTITY | 0.99+
five | QUANTITY | 0.99+
third | QUANTITY | 0.99+
Spark Sequel | TITLE | 0.99+
two | DATE | 0.98+
Today | DATE | 0.98+
a year | QUANTITY | 0.98+
two years | QUANTITY | 0.98+
two years ago | DATE | 0.98+
today | DATE | 0.98+
four | QUANTITY | 0.98+
Jarrah | PERSON | 0.98+
each | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
three years ago | DATE | 0.97+
one | QUANTITY | 0.97+
3 million plus | QUANTITY | 0.97+
4% | QUANTITY | 0.97+
one thing | QUANTITY | 0.96+
one team | QUANTITY | 0.95+
three use cases | QUANTITY | 0.94+
one person | QUANTITY | 0.93+
nearly 60 people | QUANTITY | 0.93+
one persona | QUANTITY | 0.91+